[jira] [Comment Edited] (SOLR-6600) configurable relevance impact of phrases for edismax

2016-02-10 Thread Le Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142363#comment-15142363
 ] 

Le Zhao edited comment on SOLR-6600 at 2/11/16 7:47 AM:


Am I missing something here?
This issue is the exact opposite (or revert) of SOLR-6062, not a duplicate?

The SOLR-6062 behavior (summing from all fields instead of max or tie break) is 
not very desirable because phrase scores can increase disproportionately to 
unigram scores (which are limited by max or tie break), making it very hard to 
control/limit the influence of these phrases.  Spurious bigram matches can 
easily bring false positives to the top of the rank.



was (Author: lezhao):
Am I missing something here?
This issue is the exact opposite (or revert) of SOLR-6062, not a duplicate?

The SOLR-6062 behavior (summing from all fields instead of max or tie break) is 
not very desirable because phrase scores can increase disproportionately to 
unigram weights (being controlled by max or tie break), making it very hard to 
control/limit the influence of these phrases.  Spurious bigram matches can 
easily bring false positives to the top of the rank.


> configurable relevance impact of phrases for edismax
> 
>
> Key: SOLR-6600
> URL: https://issues.apache.org/jira/browse/SOLR-6600
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.9
>Reporter: Alexey Kozhemiakin
>  Labels: edismax
>
> Currently Solr has a tie-breaker parameter which controls how relevance 
> scores are aggregated across fields for search hits.
> But scores for the phrase fields (pf, pf2, pf3) are always summed up. 
> The goal of the patch is to wrap the phrase clauses into a single dismax 
> clause instead of multiple ones.
> Before patch
> +(
>  DisjunctionMaxQuery((Body:james | Title:james)~tie_breaker)
> DisjunctionMaxQuery((Body:kirk | Title:kirk)~tie_breaker))
> )
> DisjunctionMaxQuery((Body:"james kirk")~tie_breaker)
> DisjunctionMaxQuery((Title:"james kirk")~tie_breaker)
> after patch
> +(
>  DisjunctionMaxQuery((Body:james | Title:james)~tie_breaker)
> DisjunctionMaxQuery((Body:kirk | Title:kirk)~tie_breaker))
>   )
> DisjunctionMaxQuery((Body:"james kirk" | Title:"james kirk") ~tie_breaker)
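The practical effect of the two query shapes can be sketched numerically. This is a toy model of DisMax scoring, not Solr's actual scorer; the tie value and per-field phrase scores below are invented for illustration:

```python
# Toy model of DisMax scoring: best field score plus tie * the rest.
# Illustrative only; the tie value and field scores are made up.

def dismax(field_scores, tie=0.1):
    best = max(field_scores)
    return best + tie * (sum(field_scores) - best)

# Hypothetical per-field phrase scores for "james kirk" in Body and Title.
body_phrase, title_phrase = 3.0, 2.9

# Before the patch: each field's phrase clause is scored separately and
# the results are summed, so every matching field adds its full score.
before = body_phrase + title_phrase

# After the patch: both phrase clauses sit in one DisMax, so the result
# is dominated by the best field, like the per-term clauses.
after = dismax([body_phrase, title_phrase])

assert before > after
```

With summing, each additional matching field inflates the phrase contribution; the single DisMax keeps it commensurate with the tie-broken unigram clauses.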



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+104) - Build # 15848 - Still Failing!

2016-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15848/
Java: 32bit/jdk-9-ea+104 -client -XX:+UseParallelGC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=9750, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=9754, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:804)
   3) Thread[id=9753, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:804)
   4) Thread[id=9751, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:804)
   5) Thread[id=9752, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:804)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=9750, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=9754, name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.m

[jira] [Commented] (SOLR-8670) Upgrade from Solr version 5.3.2 to 5.4.1 failed

2016-02-10 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142302#comment-15142302
 ] 

Ishan Chattopadhyaya commented on SOLR-8670:


[~viveknarang], as we discussed offline, can we really quickly make our tests 
work with trunk/branch_5x+patch, so that we can verify if a patch here actually 
fixes the upgrade problem?

> Upgrade from Solr version 5.3.2 to 5.4.1 failed
> ---
>
> Key: SOLR-8670
> URL: https://issues.apache.org/jira/browse/SOLR-8670
> Project: Solr
>  Issue Type: Bug
>Reporter: Vivek Narang
>
> Upgrade from 5.3.2 to 5.4.1 failed.
> The upgrade test was conducted with the help of a program. Please find more 
> details at [https://github.com/viveknarang/solr-upgrade-tests]
> Please find logs for this test at: [http://106.186.125.89/log.tar.gz]
> A significant section of the log for quick reference below ...
> .. WARN  (main) [   ] o.e.j.u.c.AbstractLifeCycle FAILED 
> Zookeeper@d6da972c==org.apache.solr.servlet.ZookeeperInfoServlet,-1,false: 
> javax.servlet.UnavailableException: 
> org.apache.solr.servlet.ZookeeperInfoServlet
> javax.servlet.UnavailableException: 
> org.apache.solr.servlet.ZookeeperInfoServlet
> at org.eclipse.jetty.servlet.BaseHolder.doStart(BaseHolder.java:102)
> at 
> org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:338) 
> ...






[jira] [Commented] (SOLR-8670) Upgrade from Solr version 5.3.2 to 5.4.1 failed

2016-02-10 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142300#comment-15142300
 ] 

Ishan Chattopadhyaya commented on SOLR-8670:


This is the relevant section. It seems that ZookeeperInfoServlet was removed 
in SOLR-8083, which is causing the following ClassNotFoundException.

{code}

java.lang.ClassNotFoundException: org.apache.solr.servlet.ZookeeperInfoServlet
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:450)
    at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:403)
    at org.eclipse.jetty.util.Loader.loadClass(Loader.java:86)
    at org.eclipse.jetty.servlet.BaseHolder.doStart(BaseHolder.java:95)
    at org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:338)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:870)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
    at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
    at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
    at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
    at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:186)
    at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:498)
    at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:146)
    at org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:180)
    at org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:461)
    at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:64)
    at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:609)
    at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:528)
    at org.eclipse.jetty.util.Scanner.scan(Scanner.java:391)
    at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:313)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:150)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:560)
    at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:235)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
    at org.eclipse.jetty.server.Server.start(Server.java:387)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
    at org.eclipse.jetty.server.Server.doStart(Server.java:354)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1255)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1174)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.eclipse.jetty.start.Main.invokeMain(Main.java:321)
    at org.eclipse.jetty.start.Main.start(Main.java:817)
    at org.eclipse.jetty.start.Main.main(Main.java:112)
2016-02-10 15:19:41.680 INFO  
(coreLoadExecutor-6-thread-2-processing-n:106.186.125.89:52339_solr) 
[c:bce61606

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_72) - Build # 15847 - Still Failing!

2016-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15847/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
6 threads leaked from SUITE scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest:
   1) Thread[id=8481, name=zkCallback-1204-thread-3, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=8198, name=TEST-CollectionsAPIDistributedZkTest.test-seed#[A1FBF926C061066E]-SendThread(127.0.0.1:57498), state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
        at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
   3) Thread[id=8482, name=zkCallback-1204-thread-4, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   4) Thread[id=8199, name=TEST-CollectionsAPIDistributedZkTest.test-seed#[A1FBF926C061066E]-EventThread, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   5) Thread[id=8200, name=zkCallback-1204-thread-1, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   6) Thread[id=8470, name=zkCallback-1204-thread-2, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 6 threads leaked from SUITE 
sc

[jira] [Comment Edited] (SOLR-8586) Implement hash over all documents to check for shard synchronization

2016-02-10 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142215#comment-15142215
 ] 

Yonik Seeley edited comment on SOLR-8586 at 2/11/16 4:22 AM:
-

bq. Yep, I've been looping a custom version of the HDFS-nothing-safe test that 
among other things, only does adds, no deletes.

Update: when I reverted my custom changes to the chaos test (so that it also 
did deletes), I got a high amount of shard-out-of-sync errors... seemingly even 
more than before, so I've been trying to track those down.  What I saw were 
issues that did not look related to PeerSync... I saw missing documents from a 
shard that replicated from the leader while buffering documents, and I saw the 
missing documents come in and get buffered, pointing to transaction log 
buffering or replay issues.

Then I realized that I had tested "adds only" before committing, and tested the 
normal test after committing and doing a "git pull".  In-between those times 
was SOLR-8575, which was a fix to the HDFS tlog!  I've been looping the test 
for a number of hours with those changes reverted, and I haven't seen a 
shards-out-of-sync fail so far.  I've also done a quick review of SOLR-8575, 
but didn't see anything obviously incorrect.  The changes in that issue may 
just be uncovering another bug (due to timing) rather than causing one... too 
early to tell.

I've also been running the non-hdfs version of the test for over a day, and 
also had no inconsistent shard failures.


was (Author: ysee...@gmail.com):
bq. Yep, I've been looping a custom version of the HDFS-nothing-safe test that 
among other things, only does adds, no deletes.

Update: when I reverted my custom changes to the chaos test (so that it also 
did deletes), I got a high amount of shard-out-of-sync errors... seemingly even 
more than before, so I've been trying to track those down.  What I saw were 
issues that did not look related to PeerSync... I saw missing documents from a 
shard that replicated from the leader while buffering documents, and I saw the 
missing documents come in and get buffered, pointing to transaction log 
buffering or replay issues.

Then I realized that I had tested "adds only" before committing, and tested the 
normal test after committing and doing a "git pull".  In-between those times 
was SOLR-8575, which was a fix to the HDFS tlog!  I've been looping the test 
for a number of hours with those changes reverted, and I haven't seen a 
shards-out-of-sync fail so far.  I've also done a quick review of SOLR-8575, 
but didn't see anything obviously incorrect.

I've also been running the non-hdfs version of the test for over a day, and 
also had no inconsistent shard failures.

> Implement hash over all documents to check for shard synchronization
> 
>
> Key: SOLR-8586
> URL: https://issues.apache.org/jira/browse/SOLR-8586
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Yonik Seeley
> Fix For: 5.5, master
>
> Attachments: SOLR-8586.patch, SOLR-8586.patch, SOLR-8586.patch, 
> SOLR-8586.patch
>
>
> An order-independent hash across all of the versions in the index should 
> suffice.  The hash itself is pretty easy, but we need to figure out 
> when/where to do this check (for example, I think PeerSync is currently used 
> in multiple contexts and this check would perhaps not be appropriate for all 
> PeerSync calls?)
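The description above can be illustrated with a minimal sketch (the helper name and hashing scheme are invented for illustration; this is not Solr's PeerSync code): summing per-version hashes modulo 2^64 yields a fingerprint that does not depend on the order documents are visited.

```python
# Sketch of an order-independent index fingerprint: sum the hash of each
# document version modulo 2^64, so iteration order does not matter.
# Hypothetical helper, not Solr's implementation.

MASK = (1 << 64) - 1

def fingerprint(versions):
    total = 0
    for v in versions:
        total = (total + hash(v)) & MASK
    return total

leader  = fingerprint([101, 205, 307])
replica = fingerprint([307, 101, 205])   # same versions, different order
assert leader == replica                 # in-sync shards agree

behind = fingerprint([101, 205])         # replica missing one update
assert behind != leader                  # the mismatch is detectable
```

Summation commutes, so two shards holding the same multiset of versions agree regardless of segment or iteration order.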






[jira] [Commented] (SOLR-8575) Fix HDFSLogReader replay status numbers and a performance bug where we can reopen FSDataInputStream too often.

2016-02-10 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142219#comment-15142219
 ] 

Yonik Seeley commented on SOLR-8575:


I was going to reopen this issue, but it's still open anyway.
I've changed it to a blocker for 5.5 based on what I'm seeing here:
https://issues.apache.org/jira/browse/SOLR-8586?focusedCommentId=15142215&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15142215


> Fix HDFSLogReader replay status numbers and a performance bug where we can 
> reopen FSDataInputStream too often.
> --
>
> Key: SOLR-8575
> URL: https://issues.apache.org/jira/browse/SOLR-8575
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 5.5
>
> Attachments: SOLR-8575.patch
>
>
> [~pdvo...@cloudera.com] noticed some funny transaction log replay status 
> logging a while back:
> active=true starting pos=444978 current pos=2855956 current size=16262 % 
> read=17562
> active=true starting pos=444978 current pos=5748869 current size=16262 % 
> read=35352
> 17562% read? Current size does not change as expected in this case?
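The over-100% numbers in the description are consistent with a percent-read computed against a log size that went stale while the read position kept advancing. A sketch of that arithmetic (an assumed reconstruction, not the actual HDFSLogReader code):

```python
def percent_read(current_pos, size):
    # replay status as a percentage of the (possibly stale) log size
    return int(100 * current_pos / size)

stale_size = 16262                        # "current size" frozen in the log
print(percent_read(2855956, stale_size))  # 17562, the value in the log line
print(percent_read(5748869, stale_size))  # far past 100% again
```

If the reader refreshed the file length before reporting, the same positions would yield percentages at or below 100.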






[jira] [Updated] (SOLR-8575) Fix HDFSLogReader replay status numbers and a performance bug where we can reopen FSDataInputStream too often.

2016-02-10 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8575:
---
Fix Version/s: 5.5




[jira] [Updated] (SOLR-8575) Fix HDFSLogReader replay status numbers and a performance bug where we can reopen FSDataInputStream too often.

2016-02-10 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8575:
---
Priority: Blocker  (was: Major)







[jira] [Resolved] (SOLR-8104) config API does not work for spellchecker

2016-02-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-8104.
-
   Resolution: Fixed
Fix Version/s: (was: 5.5)
   5.4

Marking it resolved for Solr 5.3 based on the CHANGES file.

> config API does not work for spellchecker
> -
>
> Key: SOLR-8104
> URL: https://issues.apache.org/jira/browse/SOLR-8104
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: master, 5.4
>
> Attachments: SOLR-8104.patch
>
>
> A command as follows fails
> {code}
> curl http://localhost:8983/solr/gettingstarted/config -H 
> 'Content-type:application/json'  -d '
> {
> "add-searchcomponent": {
> "name": "myspellcheck",
> "class": "solr.SpellCheckComponent",
> "queryAnalyzerFieldType": "text_general",
> "spellchecker": {
> "name": "default",
> "field": "_text_",
> "class": "solr.DirectSolrSpellChecker"
> }
> }
> }'
> {code}
> and there is no possible alternative.
> The reason is that {{SearchComponent}} expects a {{NamedList}} with the name 
> "spellchecker", but JSON does not support NamedList.
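To illustrate the mismatch (a sketch, not Solr's parsing code): a NamedList is an ordered sequence of (name, value) pairs in which names may repeat, while a JSON object keys each name at most once, so a plain JSON payload cannot express the structure the component expects.

```python
import json

# A NamedList modeled as an ordered list of (name, value) pairs;
# names may repeat. Entry values here are invented examples.
named_list = [("spellchecker", {"name": "default"}),
              ("spellchecker", {"name": "wordbreak"})]

# Parsing the "same" thing as a JSON object silently drops entries:
# standard JSON parsers keep only the last value for a repeated key.
as_json = json.loads('{"spellchecker": 1, "spellchecker": 2}')

assert as_json == {"spellchecker": 2}   # first entry is lost
assert len(named_list) == 2             # the NamedList keeps both
```

Any JSON-based config command therefore has to encode the NamedList shape some other way (for example as an array of pairs) rather than as a bare object.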






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_72) - Build # 15846 - Failure!

2016-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15846/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([A0C1A60753A8D026:57B2485F95407FC0]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1241)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11323 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler

[jira] [Commented] (SOLR-8349) Allow sharing of large in memory data structures across cores

2016-02-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142190#comment-15142190
 ] 

David Smiley commented on SOLR-8349:


I put a little more thought into this issue now.  The cache at the 
CoreContainer makes sense, but I'm not convinced that the Lucene layer needs a 
new abstraction.  Instead, I think Solr could be enhanced to load some analysis 
components into this CoreContainer cache.  The question is which ones?  Not 
all... some components will refer to resources that are local to a SolrCore. 
I'm not sure how easy it would be for Solr to detect that automatically; 
probably not easy.  That leaves the possibility of a new attribute on the core 
to designate it as globally shared.  What do you think?

> Allow sharing of large in memory data structures across cores
> -
>
> Key: SOLR-8349
> URL: https://issues.apache.org/jira/browse/SOLR-8349
> Project: Solr
>  Issue Type: Improvement
>  Components: Server
>Affects Versions: 5.3
>Reporter: Gus Heck
> Attachments: SOLR-8349.patch
>
>
> In some cases search components or analysis classes may utilize a large 
> dictionary or other in-memory structure. When multiple cores are loaded with 
> identical configurations utilizing this large in memory structure, each core 
> holds its own copy in memory. This has been noted in the past and a specific 
> case reported in SOLR-3443. This patch provides a generalized capability, and 
> if accepted, this capability will then be used to fix SOLR-3443.
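The proposed container-level sharing can be sketched as a keyed cache in which cores requesting the same key receive one shared instance. The class and method names here are hypothetical, not the patch's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical container-level cache: the first core to ask for a key loads
// the structure; later cores get the same instance instead of a private copy.
public class SharedResourceCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T getOrLoad(String key, Supplier<T> loader) {
        // computeIfAbsent guarantees the loader runs at most once per key
        return (T) cache.computeIfAbsent(key, k -> loader.get());
    }

    public static void main(String[] args) {
        SharedResourceCache container = new SharedResourceCache();
        // two "cores" with identical configuration ask for the same dictionary
        Object a = container.getOrLoad("bigDict", java.util.HashSet::new);
        Object b = container.getOrLoad("bigDict", java.util.HashSet::new);
        System.out.println(a == b); // one shared instance in memory
    }
}
```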






[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #1176: POMs out of sync

2016-02-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/1176/

No tests ran.

Build Log:
[...truncated 42822 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 691 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:773: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 18 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-5730) make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr

2016-02-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142109#comment-15142109
 ] 

Shai Erera commented on SOLR-5730:
--

A few comments about the patch:

* In QueryComponent: {{if\(existingSegmentTerminatedEarly == null\)}} -- can 
you add a space after the 'if'?

* {{SortingMergePolicyFactory.getMergePolicy()}} calls 
{{args.invokeSetters(mp);}}, like {{UpgradeIndexMergePolicyFactory}}. I wonder 
if we can have a protected abstract {{getMergePolicyInstance(wrappedMP)}}, so 
that {{WrapperMergePolicyFactory.getMergePolicy()}} implements it by calling 
this method followed by {{args.invokeSetters(mp);}}. What do you think?

* {{SolrIndexSearcher}}:  
{{qr.setSegmentTerminatedEarly\(earlyTerminatingSortingCollector.terminatedEarly\(\)\);}}
 -- should we also set {{qr.partialResults}}?

* {{DefaultSolrCoreState}}: you can change the method to:

{code}
public Sort getMergePolicySort() throws IOException {
  lock(iwLock.readLock());
  try {
    if (indexWriter != null) {
      final MergePolicy mergePolicy = indexWriter.getConfig().getMergePolicy();
      if (mergePolicy instanceof SortingMergePolicy) {
        return ((SortingMergePolicy) mergePolicy).getSort();
      }
    }
    return null; // no sorting merge policy configured
  } finally {
    iwLock.readLock().unlock();
  }
}
{code}

* What's the purpose of 
{{enable="$\{solr.sortingMergePolicyFactory.enable:true\}"}}?

* I kind of feel like the test you added to {{TestMiniSolrCloudCluster}} 
doesn't belong in that class. Perhaps it should be in its own test class, 
inheriting from this class, or just using {{MiniSolrCloudCluster}}?

* {{RandomForceMergePolicyFactory}} is not really related to this issue. 
Perhaps you should commit it separately?

> make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector 
> configurable in Solr
> --
>
> Key: SOLR-5730
> URL: https://issues.apache.org/jira/browse/SOLR-5730
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>  Labels: blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-5730-part1and2.patch, SOLR-5730-part1of2.patch, 
> SOLR-5730-part1of2.patch, SOLR-5730-part2of2.patch, SOLR-5730-part2of2.patch
>
>
> *Example configuration (solrconfig.xml) :*
> {noformat}
> -<mergePolicy class="org.apache.lucene.index.TieredMergePolicy"/>
> +<mergePolicyFactory class="org.apache.solr.index.SortingMergePolicyFactory">
> +  <str name="wrapped.prefix">in</str>
> +  <str name="in.class">org.apache.solr.index.TieredMergePolicyFactory</str>
> +  <str name="sort">timestamp desc</str>
> +</mergePolicyFactory>
> {noformat}
> *Example use (EarlyTerminatingSortingCollector):*
> {noformat}
> &sort=timestamp+desc&segmentTerminateEarly=true
> {noformat}






Re: 5.5.0 release branch cut

2016-02-10 Thread Varun Thacker
Hi Mike,

Can I backport a SolrJ API fix as part of SOLR-8534? The original commit
is already in the branch, but there was an API issue with that commit. I've
just committed the fix on master and branch_5x.

On Wed, Feb 10, 2016 at 2:47 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> I created place-holder release notes:
>
>   https://wiki.apache.org/lucene-java/ReleaseNote55
>   https://wiki.apache.org/solr/ReleaseNote55
>
> Feel free to edit!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Wed, Feb 10, 2016 at 5:41 PM, Michael McCandless
>  wrote:
> > OK will do, thanks Uwe!
> >
> > Mike McCandless
> >
> > http://blog.mikemccandless.com
> >
> >
> > On Wed, Feb 10, 2016 at 5:01 PM, Uwe Schindler  wrote:
> >> Thanks Mike!
> >>
> >> If any scripts below dev-tools need to be changed for some git stuff,
> could you please mention “LUCENE-6938” in the commit messages, so we can
> easily cherry-pick the changes (if needed) to older
> branches, too. I did this for my commits, too (mentioned all relevant issue
> numbers), but “LUCENE-6938” is the main one. This makes it possible to cherry-pick all
> commits easily (this is what we did with the 5.4 branch).
> >>
> >> Smoker should already work as usual if you use a commit hash instead of a
> revision number in the release URLs (I fixed the JAR META-INF folder to do
> that correctly). Maybe we just have to change the "rX" prefix to be a
> plain hash in some places.
> >>
> >> Of course, pushing the release to dist.apache.org web server is still
> using SVN, this did NOT change! Same applies for web page. So I don't think
> there is much to change in dev-tools (I hope).
> >>
> >> Uwe
> >>
> >> -
> >> Uwe Schindler
> >> H.-H.-Meier-Allee 63, D-28213 Bremen
> >> http://www.thetaphi.de
> >> eMail: u...@thetaphi.de
> >>
> >>
> >>> -Original Message-
> >>> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> >>> Sent: Wednesday, February 10, 2016 10:48 PM
> >>> To: Lucene/Solr dev 
> >>> Subject: 5.5.0 release branch cut
> >>>
> >>> I cut the branch (branch_5_5).
> >>>
> >>> Please back-port any blocker issues to it, and please don't push any
> >>> non-blocker changes.
> >>>
> >>> I'll now try to wrestle cutting over release scripts to git :)
> >>>
> >>> Mike McCandless
> >>>
> >>> http://blog.mikemccandless.com
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 


Regards,
Varun Thacker


[jira] [Commented] (SOLR-8534) Add generic support for Collection APIs to be async

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142102#comment-15142102
 ] 

ASF subversion and git services commented on SOLR-8534:
---

Commit 14bd3c4d3859719f8c0d5d0edebbf17f36531b72 in lucene-solr's branch 
refs/heads/branch_5x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=14bd3c4 ]

SOLR-8534: Fix SolrJ APIs to add async support


> Add generic support for Collection APIs to be async
> ---
>
> Key: SOLR-8534
> URL: https://issues.apache.org/jira/browse/SOLR-8534
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.5, master
>
> Attachments: SOLR-8534.patch, SOLR-8534.patch, SOLR-8534.patch, 
> SOLR-8534.patch, SOLR-8534_solrJ.patch, SOLR-8534_solrJ.patch
>
>
> Currently only a handful of Collection API calls support the async parameter. 
> I propose to extend async support to most APIs.
> The Overseer has generic support for calls to be async, so we should 
> leverage that and make all commands implemented within the 
> OverseerCollectionMessageHandler support async.






[jira] [Commented] (SOLR-8534) Add generic support for Collection APIs to be async

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142097#comment-15142097
 ] 

ASF subversion and git services commented on SOLR-8534:
---

Commit 9985a0966ba33f78b0889b00cd81cd6c5a858111 in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9985a09 ]

SOLR-8534: Fix SolrJ APIs to add async support


> Add generic support for Collection APIs to be async
> ---
>
> Key: SOLR-8534
> URL: https://issues.apache.org/jira/browse/SOLR-8534
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.5, master
>
> Attachments: SOLR-8534.patch, SOLR-8534.patch, SOLR-8534.patch, 
> SOLR-8534.patch, SOLR-8534_solrJ.patch, SOLR-8534_solrJ.patch
>
>
> Currently only a handful of Collection API calls support the async parameter. 
> I propose to extend async support to most APIs.
> The Overseer has generic support for calls to be async, so we should 
> leverage that and make all commands implemented within the 
> OverseerCollectionMessageHandler support async.






[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142095#comment-15142095
 ] 

Shai Erera commented on SOLR-8621:
--

[~cpoerschke] thx for fixing that typo! And your latest commit looks fine to 
me. +1 to get it in.

> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory/> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy/> support
> * solr 6.0(\?) removes <mergePolicy/> support
> +work left-to-do summary:+
>  * {color:red}WrapperMergePolicyFactory setter logic tweak/mini-bug (and test 
> case){color} - Christine
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/server/solr/configsets
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






[jira] [Commented] (SOLR-8420) Date statistics: sumOfSquares overflows long

2016-02-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142019#comment-15142019
 ] 

Tomás Fernández Löbbe commented on SOLR-8420:
-

While looking at this patch I noticed that on line 841 of 
{{TestDistributedSearch}} it says:
{code:java}
rsp = query("q", "*:*", "rows", "0", "stats", "true",
{code}
but was intended to be
{code:java}
rsp = query("q", q, "rows", "0", "stats", "true",
{code}
We should fix that as part of this Jira too.

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, StdDev.java
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> StatsValuesFactory.java, line 755, DateStatsValues#updateTypeSpecificStats: add 
> a cast to double: 
> sumOfSquares += ( (double)value * value * count);
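A minimal demonstration of the overflow and the proposed cast, using an illustrative epoch-millisecond value (not data taken from the issue):

```java
// Squaring epoch-millis date values (~1.45e12) yields ~2.1e24, far beyond
// Long.MAX_VALUE (~9.2e18), so long arithmetic wraps around; casting the
// first operand to double keeps the true magnitude.
public class DateSumOfSquares {
    static long squareAsLong(long value, long count) {
        return value * value * count; // overflows for epoch-millis dates
    }

    static double squareAsDouble(long value, long count) {
        return (double) value * value * count; // the fix: widen before multiplying
    }

    public static void main(String[] args) {
        long dec2015 = 1_450_000_000_000L; // illustrative epoch millis, ~Dec 2015
        System.out.println(squareAsLong(dec2015, 1) < 0);                // wrapped negative
        System.out.println(squareAsDouble(dec2015, 1) > Long.MAX_VALUE); // magnitude preserved
    }
}
```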






[jira] [Commented] (SOLR-8657) SolrRequestInfo logs an error if QuerySenderListener is being used

2016-02-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141993#comment-15141993
 ] 

Tomás Fernández Löbbe commented on SOLR-8657:
-

I'm a bit confused by this error: where in 
{{MDCAwareThreadPoolExecutor.execute()}} do you say this is being set? I can't 
find that. Also, the screenshot you attached shows that the previous 
request is the commit request and that the warming is happening on a different 
thread.
I looked briefly at some tests that could hit this and they didn't, but I may 
be missing something. Maybe you can provide a test case? Maybe some 
modification in {{TestIndexSearcher}}?

> SolrRequestInfo logs an error if QuerySenderListener is being used
> --
>
> Key: SOLR-8657
> URL: https://issues.apache.org/jira/browse/SOLR-8657
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
> Attachments: Screen Shot 2016-02-10 at 09.43.56.png
>
>
> This is the stack trace:
> {code}
> at 
> org.apache.solr.request.SolrRequestInfo.setRequestInfo(SolrRequestInfo.java:59)
> at 
> org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:68)
> at org.apache.solr.core.SolrCore$6.call(SolrCore.java:1859)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> SolrRequestInfo is being set in MDCAwareThreadPoolExecutor.execute() and 
> later in QuerySenderListener.newSearcher() in the same thread.
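The reported pattern — a value left in a thread-local by one task being found by the next task on the same pooled thread — can be sketched with a plain {{ThreadLocal}} (a stand-in for illustration, not Solr's {{SolrRequestInfo}}):

```java
// A pooled thread keeps its ThreadLocal value between tasks. If a task
// sets the value and never clears it, the next set on that thread finds
// the stale value, analogous to the error SolrRequestInfo logs.
public class ThreadLocalLeak {
    static final ThreadLocal<String> REQUEST = new ThreadLocal<>();

    static boolean setRequest(String info) {
        if (REQUEST.get() != null) {
            return false; // stale value found: this is where the error fires
        }
        REQUEST.set(info);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(setRequest("commit"));      // thread starts clean
        // ...the task finishes without calling REQUEST.remove()...
        System.out.println(setRequest("newSearcher")); // stale value detected
    }
}
```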






[jira] [Updated] (SOLR-8534) Add generic support for Collection APIs to be async

2016-02-10 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8534:

Attachment: SOLR-8534_solrJ.patch

Patch that folds in Anshum's changes. I'll commit this soon.

> Add generic support for Collection APIs to be async
> ---
>
> Key: SOLR-8534
> URL: https://issues.apache.org/jira/browse/SOLR-8534
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.5, master
>
> Attachments: SOLR-8534.patch, SOLR-8534.patch, SOLR-8534.patch, 
> SOLR-8534.patch, SOLR-8534_solrJ.patch, SOLR-8534_solrJ.patch
>
>
> Currently only a handful of Collection API calls support the async parameter. 
> I propose to extend async support to most APIs.
> The Overseer has generic support for calls to be async, so we should 
> leverage that and make all commands implemented within the 
> OverseerCollectionMessageHandler support async.






[jira] [Comment Edited] (SOLR-8642) SOLR allows creation of collections with invalid names

2016-02-10 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141956#comment-15141956
 ] 

Jason Gerlowski edited comment on SOLR-8642 at 2/10/16 11:40 PM:
-

I guess that's the question I'm actually asking: does rejecting names with 
hyphens make sense?  I'm sure there was a reason that the Solr recommendations 
warned against using hyphens when they were initially written.  Does anyone 
know what that rationale was, whether it's still valid, or where I could go to 
read up on it?

I don't have anything for or against them in names personally.  Just wanted to 
double-check (if I can) that we're not being unnecessarily restrictive.

(Sorry for bringing this up after the fact by the way; probably should've 
looked into this before uploading my patch.)


was (Author: gerlowskija):
I guess that's the question I'm actually asking: does rejecting names with 
hyphens make sense?  I'm sure there was a reason that the Solr recommendations 
warned against using hyphens when they were initially written.  Does anyone 
know what that rationale was, whether it's still valid, or where I could go to 
read up on it?

I don't have anything for or against them in names personally.  Just wanted to 
double-check (if I can) that we're not being unnecessarily restrictive.

> SOLR allows creation of collections with invalid names
> --
>
> Key: SOLR-8642
> URL: https://issues.apache.org/jira/browse/SOLR-8642
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: master
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 5.5, master
>
> Attachments: SOLR-8642.patch, SOLR-8642.patch, SOLR-8642.patch, 
> SOLR-8642.patch
>
>
> Some of my colleagues and I recently noticed that the CREATECOLLECTION API 
> will create a collection even when invalid characters are present in the name.
> For example, consider the following reproduction case, which involves 
> creating a collection with a space in its name:
> {code}
> $ 
> $ bin/solr start -e cloud -noprompt
> ...
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=getting+started&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted";
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> 
> 0 name="QTime">299 name="failure">org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica2': Unable to create core [getting 
> started_shard2_replica2] Caused by: Invalid core name: 'getting 
> started_shard2_replica2' Names must consist entirely of periods, underscores 
> and 
> alphanumericsorg.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica1': Unable to create core [getting 
> started_shard2_replica1] Caused by: Invalid core name: 'getting 
> started_shard2_replica1' Names must consist entirely of periods, underscores 
> and 
> alphanumericsorg.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica1': Unable to create core [getting 
> started_shard1_replica1] Caused by: Invalid core name: 'getting 
> started_shard1_replica1' Names must consist entirely of periods, underscores 
> and 
> alphanumericsorg.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica2': Unable to create core [getting 
> started_shard1_replica2] Caused by: Invalid core name: 'getting 
> started_shard1_replica2' Names must consist entirely of periods, underscores 
> and alphanumerics
> 
> $ 
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json&indent=true";
> HTTP/1.1 200 OK
> Content-Type: application/json; charset=UTF-8
> Transfer-Encoding: chunked
> {
>   "responseHeader":{
> "status":0,
> "QTime":6},
>   "cluster":{
> "collections":{
>  ...
>   "getting started":{
> "replicationFactor":"2",
> "shards":{
>   "shard1":{
> "range":"8000-",
> "state":"active",
> "replicas":{}},
>   "shard2":{
> "range":"0-7fff",
> "state":"active",
> "replicas":{}}},
> "router":{"name":"compositeId"},
> "maxShardsPerNode":"2",
> "autoAddReplicas":"false",
> "znodeVersion":1,
> "conf

[jira] [Commented] (SOLR-8642) SOLR allows creation of collections with invalid names

2016-02-10 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141956#comment-15141956
 ] 

Jason Gerlowski commented on SOLR-8642:
---

I guess that's the question I'm actually asking: does rejecting names with 
hyphens make sense?  I'm sure there was a reason that the Solr recommendations 
warned against using hyphens when they were initially written.  Does anyone 
know what that rationale was, whether it's still valid, or where I could go to 
read up on it?

I don't have anything for or against them in names personally.  Just wanted to 
double-check (if I can) that we're not being unnecessarily restrictive.
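The validation under discussion can be sketched as a simple pattern check. The character class mirrors the error message in the reproduction ("periods, underscores and alphanumerics"), with hyphens excluded as currently specified; the class and pattern are illustrative, not the patch's actual code:

```java
import java.util.regex.Pattern;

// Up-front name check: accept only periods, underscores, and alphanumerics,
// so invalid names are rejected before any collection state is created.
public class NameCheck {
    static final Pattern LEGAL = Pattern.compile("^[\\._A-Za-z0-9]+$");

    static boolean isLegalName(String name) {
        return LEGAL.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isLegalName("gettingstarted"));  // accepted
        System.out.println(isLegalName("getting started")); // rejected: space
        System.out.println(isLegalName("getting-started")); // rejected: hyphen
    }
}
```

Doing this check in the Collections API handler, before writing anything to ZooKeeper, avoids the half-created collection shown in the reproduction below.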

> SOLR allows creation of collections with invalid names
> --
>
> Key: SOLR-8642
> URL: https://issues.apache.org/jira/browse/SOLR-8642
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: master
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 5.5, master
>
> Attachments: SOLR-8642.patch, SOLR-8642.patch, SOLR-8642.patch, 
> SOLR-8642.patch
>
>
> Some of my colleagues and I recently noticed that the CREATECOLLECTION API 
> will create a collection even when invalid characters are present in the name.
> For example, consider the following reproduction case, which involves 
> creating a collection with a space in its name:
> {code}
> $ 
> $ bin/solr start -e cloud -noprompt
> ...
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=getting+started&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted";
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> 
> 0 name="QTime">299 name="failure">org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica2': Unable to create core [getting 
> started_shard2_replica2] Caused by: Invalid core name: 'getting 
> started_shard2_replica2' Names must consist entirely of periods, underscores 
> and 
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_72) - Build # 15844 - Failure!

2016-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15844/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:38582/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38582/awholynewcollection_0: non ok status: 
500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([193633F911513F23:91620C23BFAD52DB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:512)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1773)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:743)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:964)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.ran

[jira] [Commented] (SOLR-5730) make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr

2016-02-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141942#comment-15141942
 ] 

Christine Poerschke commented on SOLR-5730:
---

{{SOLR-5730-part1and2.patch}} attached against latest master/trunk. If there 
are no further comments or concerns and if all else goes well then that will be 
the final patch for this ticket, and I will commit it Thursday 
afternoon/evening or Friday morning London time.

> make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector 
> configurable in Solr
> --
>
> Key: SOLR-5730
> URL: https://issues.apache.org/jira/browse/SOLR-5730
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>  Labels: blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-5730-part1and2.patch, SOLR-5730-part1of2.patch, 
> SOLR-5730-part1of2.patch, SOLR-5730-part2of2.patch, SOLR-5730-part2of2.patch
>
>
> *Example configuration (solrconfig.xml) :*
> {noformat}
> -
> +
> +  in
> +  org.apache.solr.index.TieredMergePolicyFactory
> +  timestamp desc
> +
> {noformat}
> *Example use (EarlyTerminatingSortingCollector):*
> {noformat}
> &sort=timestamp+desc&segmentTerminateEarly=true
> {noformat}
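As a small aside on the example use above, the two query parameters can be assembled programmatically. The sketch below only builds the query string; the `q` value is an illustrative assumption, not part of the patch:

```python
from urllib.parse import urlencode

# Build the query string for an early-terminating sorted query,
# mirroring the example use above. The "q" value is an assumption.
params = {
    "q": "*:*",
    "sort": "timestamp desc",
    "segmentTerminateEarly": "true",
}
query_string = urlencode(params)
print(query_string)  # the space in the sort clause is encoded as "+"
```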



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5730) make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr

2016-02-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-5730:
--
Description: 
*Example configuration (solrconfig.xml) :*
{noformat}
-
+
+  in
+  org.apache.solr.index.TieredMergePolicyFactory
+  timestamp desc
+
{noformat}

*Example use (EarlyTerminatingSortingCollector):*
{noformat}
&sort=timestamp+desc&segmentTerminateEarly=true
{noformat}

  was:
*Example configuration (solrconfig.xml) - corresponding to current 
[jira/solr-5730-master|https://github.com/apache/lucene-solr/tree/jira/solr-5730-master]
 work-in-progress branch:*
{noformat}
-
+
+  in
+  org.apache.solr.index.TieredMergePolicyFactory
+  timestamp desc
+
{noformat}

*Example use (EarlyTerminatingSortingCollector):*
{noformat}
&sort=timestamp+desc&segmentTerminateEarly=true
{noformat}


> make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector 
> configurable in Solr
> --
>
> Key: SOLR-5730
> URL: https://issues.apache.org/jira/browse/SOLR-5730
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>  Labels: blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-5730-part1and2.patch, SOLR-5730-part1of2.patch, 
> SOLR-5730-part1of2.patch, SOLR-5730-part2of2.patch, SOLR-5730-part2of2.patch
>
>
> *Example configuration (solrconfig.xml) :*
> {noformat}
> -
> +
> +  in
> +  org.apache.solr.index.TieredMergePolicyFactory
> +  timestamp desc
> +
> {noformat}
> *Example use (EarlyTerminatingSortingCollector):*
> {noformat}
> &sort=timestamp+desc&segmentTerminateEarly=true
> {noformat}






Re: ZK Connection Failure leads to stale data

2016-02-10 Thread david.w.smi...@gmail.com
Both sound very good to me, Dennis. Thanks.
On Wed, Feb 10, 2016 at 11:55 AM Dennis Gove  wrote:

> Just wanted to take a moment to get anyone's thoughts on the following
> issues
>
> https://issues.apache.org/jira/browse/SOLR-8599
> https://issues.apache.org/jira/browse/SOLR-8666
>
> The originating problem occurred due to a DNS failure that caused some
> nodes in a cloud setup to fail to connect to zookeeper. Those nodes were
> running but were not participating in the cloud with the other nodes. The
> disconnected nodes would respond to queries with stale data, though they
> would reject ingest requests.
>
> Ticket https://issues.apache.org/jira/browse/SOLR-8599 contains a patch
> which ensures that if a connection to zookeeper fails to be made it will be
> retried. Previously the failure wasn't leading to a retry so the node would
> just run disconnected until the node itself was restarted.
>
> Ticket https://issues.apache.org/jira/browse/SOLR-8666 contains a patch
> which will result in additional information returned to the client when a
> node may be returning stale data due to not being connected to zookeeper.
> The intent was to not change current behavior but allow the client to know
> that something might be wrong. In situations where the collection is not 
> being updated, the data may not be stale, so it wouldn't matter that the 
> node is disconnected from zookeeper; where the collection is being updated, 
> the data may be stale. The headers of the response will now contain an 
> entry to indicate this. The patch also adds a header to the ping response 
> to provide notification if the node is disconnected from zookeeper.
>
> I think the approach these patches take is good, but wanted to get others'
> thoughts and perhaps I'm missing a case where these might cause a problem.
>
> Thanks - Dennis
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
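The retry behavior SOLR-8599 introduces (keep retrying a failed ZooKeeper connection rather than running disconnected forever) can be sketched generically. This is an illustrative model with assumed names (`connect_with_retry`, `max_retries`), not the actual patch:

```python
import time

def connect_with_retry(connect, max_retries=5, base_delay=0.01):
    """Retry a connection attempt with exponential backoff instead of
    giving up after the first failure (the pre-patch behavior)."""
    for attempt in range(max_retries):
        try:
            return connect()
        except ConnectionError:
            # Back off a little longer after each failed attempt.
            time.sleep(base_delay * (2 ** attempt))
    raise ConnectionError("could not connect after %d attempts" % max_retries)

# Simulate a connection that fails twice (e.g. a transient DNS
# failure) and then succeeds.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("DNS failure")
    return "connected"

result = connect_with_retry(flaky_connect)
print(result)  # prints: connected
```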


[jira] [Updated] (SOLR-5730) make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector configurable in Solr

2016-02-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-5730:
--
Attachment: SOLR-5730-part1and2.patch

> make Lucene's SortingMergePolicy and EarlyTerminatingSortingCollector 
> configurable in Solr
> --
>
> Key: SOLR-5730
> URL: https://issues.apache.org/jira/browse/SOLR-5730
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>  Labels: blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-5730-part1and2.patch, SOLR-5730-part1of2.patch, 
> SOLR-5730-part1of2.patch, SOLR-5730-part2of2.patch, SOLR-5730-part2of2.patch
>
>
> *Example configuration (solrconfig.xml) - corresponding to current 
> [jira/solr-5730-master|https://github.com/apache/lucene-solr/tree/jira/solr-5730-master]
>  work-in-progress branch:*
> {noformat}
> -
> +
> +  in
> +  org.apache.solr.index.TieredMergePolicyFactory
> +  timestamp desc
> +
> {noformat}
> *Example use (EarlyTerminatingSortingCollector):*
> {noformat}
> &sort=timestamp+desc&segmentTerminateEarly=true
> {noformat}






[jira] [Commented] (SOLR-8416) The collections create API should return after all replicas are active.

2016-02-10 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141911#comment-15141911
 ] 

Michael Sun commented on SOLR-8416:
---

I see. Thanks [~markrmil...@gmail.com].

> The collections create API should return after all replicas are active. 
> 
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
>Assignee: Mark Miller
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch, 
> SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after 
> they are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return after that.
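The proposed wait-for-active behavior can be sketched as a polling loop; `get_replica_states` below is a hypothetical stand-in for reading cluster state, not Solr's actual API:

```python
import time

def wait_until_all_active(get_replica_states, timeout=30.0, poll_interval=0.01):
    """Poll replica states until every replica reports 'active',
    mirroring the proposed change to the collection-create API."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        states = get_replica_states()
        if states and all(s == "active" for s in states):
            return True
        time.sleep(poll_interval)
    return False  # timed out before all replicas became active

# Simulate replicas that become active on the third poll.
polls = {"n": 0}
def fake_states():
    polls["n"] += 1
    return ["active", "active"] if polls["n"] >= 3 else ["down", "active"]

ok = wait_until_all_active(fake_states)
print(ok)  # prints: True
```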






[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141889#comment-15141889
 ] 

ASF subversion and git services commented on LUCENE-6938:
-

Commit 70e61fd9e04ba0312b9c1d3f6d6e8313ab0dce75 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=70e61fd ]

LUCENE-6938: switch from svn to git


> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.






[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141887#comment-15141887
 ] 

ASF subversion and git services commented on LUCENE-6938:
-

Commit 8b71a1baf5b9c6d16d24134cebeaf7f22333580d in lucene-solr's branch 
refs/heads/branch_5x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b71a1b ]

LUCENE-6938: switch from svn to git


> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.






[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141886#comment-15141886
 ] 

ASF subversion and git services commented on LUCENE-6938:
-

Commit 7a329d4e299f364a716ca7e3d786684f280d0100 in lucene-solr's branch 
refs/heads/branch_5_5 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a329d4 ]

LUCENE-6938: switch from svn to git


> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.






Re: 5.5.0 release branch cut

2016-02-10 Thread Michael McCandless
I created place-holder release notes:

  https://wiki.apache.org/lucene-java/ReleaseNote55
  https://wiki.apache.org/solr/ReleaseNote55

Feel free to edit!

Mike McCandless

http://blog.mikemccandless.com


On Wed, Feb 10, 2016 at 5:41 PM, Michael McCandless
 wrote:
> OK will do, thanks Uwe!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Wed, Feb 10, 2016 at 5:01 PM, Uwe Schindler  wrote:
>> Thanks Mike!
>>
>> if any scripts below dev-tools need to be changed for some git stuff, could 
>> you please mention “LUCENE-6938” in the commit messages, so we can easily 
>> cherry-pick the changes to other (older) branches if needed. I did this for 
>> my commits, too (mentioned all relevant issue numbers), but “LUCENE-6938” is 
>> the main one. This allows cherry-picking all commits easily (this is what we 
>> did with the 5.4 branch).
>>
>> Smoker should already work as usual if you use commit hash instead of 
>> revision number in the release URLs (I fixed the JAR META-INF folder to do 
>> that correctly). Maybe we just have to change the "rX" prefix to be a 
>> plain hash at some places.
>>
>> Of course, pushing the release to dist.apache.org web server is still using 
>> SVN, this did NOT change! Same applies for web page. So I don't think there 
>> is much to change in dev-tools (I hope).
>>
>> Uwe
>>
>> -
>> Uwe Schindler
>> H.-H.-Meier-Allee 63, D-28213 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>>
>>
>>> -Original Message-
>>> From: Michael McCandless [mailto:luc...@mikemccandless.com]
>>> Sent: Wednesday, February 10, 2016 10:48 PM
>>> To: Lucene/Solr dev 
>>> Subject: 5.5.0 release branch cut
>>>
>>> I cut the branch (branch_5_5).
>>>
>>> Please back-port any blocker issues to it, and please don't push any
>>> non-blocker changes.
>>>
>>> I'll now try to wrestle cutting over release scripts to git :)
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>




Re: 5.5.0 release branch cut

2016-02-10 Thread Michael McCandless
OK will do, thanks Uwe!

Mike McCandless

http://blog.mikemccandless.com


On Wed, Feb 10, 2016 at 5:01 PM, Uwe Schindler  wrote:
> Thanks Mike!
>
> if any scripts below dev-tools need to be changed for some git stuff, could 
> you please mention “LUCENE-6938” in the commit messages, so we can easily 
> cherry-pick the changes to other (older) branches if needed. I did this for 
> my commits, too (mentioned all relevant issue numbers), but “LUCENE-6938” is 
> the main one. This allows cherry-picking all commits easily (this is what we 
> did with the 5.4 branch).
>
> Smoker should already work as usual if you use commit hash instead of 
> revision number in the release URLs (I fixed the JAR META-INF folder to do 
> that correctly). Maybe we just have to change the "rX" prefix to be a 
> plain hash at some places.
>
> Of course, pushing the release to dist.apache.org web server is still using 
> SVN, this did NOT change! Same applies for web page. So I don't think there 
> is much to change in dev-tools (I hope).
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>> -Original Message-
>> From: Michael McCandless [mailto:luc...@mikemccandless.com]
>> Sent: Wednesday, February 10, 2016 10:48 PM
>> To: Lucene/Solr dev 
>> Subject: 5.5.0 release branch cut
>>
>> I cut the branch (branch_5_5).
>>
>> Please back-port any blocker issues to it, and please don't push any
>> non-blocker changes.
>>
>> I'll now try to wrestle cutting over release scripts to git :)
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141836#comment-15141836
 ] 

Christine Poerschke commented on SOLR-8621:
---

[~shaie] - would you have any thoughts re: 
[6b6932e8e1f72caf29a078f0532a56c284711f9f|https://git1-us-west.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=6b6932e8]
 commit above? If it is logical and reasonable then I will cherry-pick it to 
branch_5x and branch_5_5 tomorrow/Thursday. Thanks!

> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> * end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work left-to-do summary:+
>  * {color:red}WrapperMergePolicyFactory setter logic tweak/mini-bug (and test 
> case){color} - Christine
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/server/solr/configsets
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141829#comment-15141829
 ] 

ASF subversion and git services commented on SOLR-8621:
---

Commit 6b6932e8e1f72caf29a078f0532a56c284711f9f in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6b6932e ]

SOLR-8621: WrapperMergePolicyFactory logic tweaks

 * fix so that getMergePolicy() can now be called more than once
 * added WrapperMergePolicyFactoryTest.testUpgradeIndexMergePolicyFactory()
 * account for overlap between wrapping and wrapped setters (and disallow it)
** illustration:
   
  0.24
  mergePolicy
  TieredMergePolicyFactory
  0.42
   
** implementation details: the wrapping MP's setter calls the wrapped MP's 
setter and in the current code the wrapping MP's value prevails i.e. the 0.24 
value in the illustration since the wrapped MP is constructed before the 
wrapping MP. an end-user however might reasonably assume that the wrapped MP's 
0.42 value will prevail. at best configuring the same setter twice within the 
same overall <mergePolicyFactory> element is ambiguous and so the code now 
disallows it.
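The setter interaction described in the commit message above can be modeled generically: the wrapped policy's settings are applied first, then the wrapping policy's own settings, so the wrapping value silently prevails. This is an illustrative model only, not Solr's actual implementation; the `config` keys and values mirror the 0.24/0.42 illustration:

```python
def build_merge_policy(config):
    """Model of the wrapper/wrapped setter interaction: step 2
    overwrites anything step 1 already set for the same name."""
    prefix = config["wrapped.prefix"] + "."
    policy = {}
    # Step 1: apply the wrapped policy's settings (prefix stripped).
    for key, value in config.items():
        if key.startswith(prefix):
            policy[key[len(prefix):]] = value
    # Step 2: apply the wrapping policy's own settings - these
    # overwrite the wrapped policy's values for the same name.
    for key, value in config.items():
        if "." not in key:
            policy[key] = value
    return policy

# 0.24 set on the wrapper, 0.42 on the wrapped policy, both
# targeting the same setting name - the ambiguity now disallowed.
config = {
    "noCFSRatio": 0.24,
    "wrapped.prefix": "mergePolicy",
    "mergePolicy.class": "TieredMergePolicyFactory",
    "mergePolicy.noCFSRatio": 0.42,
}
policy = build_merge_policy(config)
print(policy["noCFSRatio"])  # prints: 0.24 (the wrapper wins, not 0.42)
```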


> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> * end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work left-to-do summary:+
>  * {color:red}WrapperMergePolicyFactory setter logic tweak/mini-bug (and test 
> case){color} - Christine
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/server/solr/configsets
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






[jira] [Commented] (SOLR-8642) SOLR allows creation of collections with invalid names

2016-02-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141821#comment-15141821
 ] 

Shawn Heisey commented on SOLR-8642:


bq. I was surprised that hyphens ("-") aren't allowed.

I use hyphens in some of my core names, but only for the shards on one of my 
three indexes.

If it makes sense to exclude them, then please don't let my mistake change that 
plan.  I can change my names.


> SOLR allows creation of collections with invalid names
> --
>
> Key: SOLR-8642
> URL: https://issues.apache.org/jira/browse/SOLR-8642
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: master
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 5.5, master
>
> Attachments: SOLR-8642.patch, SOLR-8642.patch, SOLR-8642.patch, 
> SOLR-8642.patch
>
>
> Some of my colleagues and I recently noticed that the CREATECOLLECTION API 
> will create a collection even when invalid characters are present in the name.
> For example, consider the following reproduction case, which involves 
> creating a collection with a space in its name:
> {code}
> $ 
> $ bin/solr start -e cloud -noprompt
> ...
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=getting+started&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted";
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> 
> 0 name="QTime">299 name="failure">org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica2': Unable to create core [getting 
> started_shard2_replica2] Caused by: Invalid core name: 'getting 
> started_shard2_replica2' Names must consist entirely of periods, underscores 
> and 
> alphanumericsorg.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica1': Unable to create core [getting 
> started_shard2_replica1] Caused by: Invalid core name: 'getting 
> started_shard2_replica1' Names must consist entirely of periods, underscores 
> and 
> alphanumericsorg.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica1': Unable to create core [getting 
> started_shard1_replica1] Caused by: Invalid core name: 'getting 
> started_shard1_replica1' Names must consist entirely of periods, underscores 
> and 
> alphanumericsorg.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica2': Unable to create core [getting 
> started_shard1_replica2] Caused by: Invalid core name: 'getting 
> started_shard1_replica2' Names must consist entirely of periods, underscores 
> and alphanumerics
> 
> $ 
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json&indent=true";
> HTTP/1.1 200 OK
> Content-Type: application/json; charset=UTF-8
> Transfer-Encoding: chunked
> {
>   "responseHeader":{
> "status":0,
> "QTime":6},
>   "cluster":{
> "collections":{
>  ...
>   "getting started":{
> "replicationFactor":"2",
> "shards":{
>   "shard1":{
> "range":"8000-",
> "state":"active",
> "replicas":{}},
>   "shard2":{
> "range":"0-7fff",
> "state":"active",
> "replicas":{}}},
> "router":{"name":"compositeId"},
> "maxShardsPerNode":"2",
> "autoAddReplicas":"false",
> "znodeVersion":1,
> "configName":"gettingstarted"},
> "live_nodes":["127.0.1.1:8983_solr",
>   "127.0.1.1:7574_solr"]}}
> {code}
> The commands/responses above suggest that Solr creates the collection without 
> checking the name.  It then goes on to create the cores for the collection, 
> which fails and returns the error seen above.
> I verified this by doing a {{curl -i -l -k 
> "http://localhost:8983/solr/admin/cores}}; as expected the cores were not 
> actually created.  (This is probably thanks to Erick's work on SOLR-8308).
> This bug is a problem because it will create collections which can never be 
> backed up with actual cores.
> Seems like the same name-verification that 8308 added to cores should also be 
> applied to collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...

[jira] [Comment Edited] (SOLR-8642) SOLR allows creation of collections with invalid names

2016-02-10 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141786#comment-15141786
 ] 

Jason Gerlowski edited comment on SOLR-8642 at 2/10/16 10:07 PM:
-

So, two questions now that I've gotten a little distance from this:

- This JIRA added verification to CREATECOLLECTION and CREATEALIAS.  Should we 
also be adding this sort of verification to CREATESHARD?  It takes a "shard" 
parameter that is probably subject to the same pitfalls that collection names 
are.  Maybe this is a different issue, and it shouldn't be lumped in with 
collection names.  Just wanted to bring it up.  Happy to spin that discussion 
off into a separate JIRA if needed.

- When I initially looked at the regex, I was surprised that hyphens ("-") 
aren't allowed.  Seems like a common character to disallow.  Does anyone know 
of a JIRA where I can read more about where the recommendations came from?  
Just curious to see how the recommendations arose.


was (Author: gerlowskija):
So, two questions now that I've gotten a little distance from this:

- This JIRA added verification to CREATECOLLECTION and CREATEALIAS.  Should we 
also be adding this sort of verification to CREATESHARD?  It takes a "shard" 
parameter that is probably subject to the same pitfalls that collection names 
are.  Maybe this is a different issue, and it shouldn't be lumped in with 
collection names.  Just wanted to bring it up.  Happy to spin that discussion 
off into a separate JIRA if needed.

- When I initially looked at the regex, I was surprised that hyphens (-) aren't 
allowed.  Seems like a common character to disallow.  Does anyone know of a 
JIRA where I can read more about where the recommendations came from?  Just 
curious to see how the recommendations arose.

> SOLR allows creation of collections with invalid names
> --
>
> Key: SOLR-8642
> URL: https://issues.apache.org/jira/browse/SOLR-8642
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: master
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 5.5, master
>
> Attachments: SOLR-8642.patch, SOLR-8642.patch, SOLR-8642.patch, 
> SOLR-8642.patch
>
>
> Some of my colleagues and I recently noticed that the CREATECOLLECTION API 
> will create a collection even when invalid characters are present in the name.
> For example, consider the following reproduction case, which involves 
> creating a collection with a space in its name:
> {code}
> $ 
> $ bin/solr start -e cloud -noprompt
> ...
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=getting+started&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted";
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> <response>
> <lst name="responseHeader"><int name="status">0</int><int name="QTime">299</int></lst>
> <lst name="failure">
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica2': Unable to create core [getting 
> started_shard2_replica2] Caused by: Invalid core name: 'getting 
> started_shard2_replica2' Names must consist entirely of periods, underscores 
> and alphanumerics
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica1': Unable to create core [getting 
> started_shard2_replica1] Caused by: Invalid core name: 'getting 
> started_shard2_replica1' Names must consist entirely of periods, underscores 
> and alphanumerics
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica1': Unable to create core [getting 
> started_shard1_replica1] Caused by: Invalid core name: 'getting 
> started_shard1_replica1' Names must consist entirely of periods, underscores 
> and alphanumerics
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica2': Unable to create core [getting 
> started_shard1_replica2] Caused by: Invalid core name: 'getting 
> started_shard1_replica2' Names must consist entirely of periods, underscores 
> and alphanumerics
> </lst>
> </response>
> 
> $ 
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json&indent=true";
> HTTP/1.1 200 OK
> Content-Type: application/json; charset=UTF-8
> Transfer-Encoding: chunked
> {
>   "responseHeader":{
> "status":0,
> "QTime":6},
>   "cluster":{
> "collections":{
>  ...
>   "getting

[jira] [Commented] (SOLR-8642) SOLR allows creation of collections with invalid names

2016-02-10 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141786#comment-15141786
 ] 

Jason Gerlowski commented on SOLR-8642:
---

So, two questions now that I've gotten a little distance from this:

- This JIRA added verification to CREATECOLLECTION and CREATEALIAS.  Should we 
also be adding this sort of verification to CREATESHARD?  It takes a "shard" 
parameter that is probably subject to the same pitfalls that collection names 
are.  Maybe this is a different issue, and it shouldn't be lumped in with 
collection names.  Just wanted to bring it up.  Happy to spin that discussion 
off into a separate JIRA if needed.

- When I initially looked at the regex, I was surprised that hyphens (-) aren't 
allowed.  Seems like a common character to disallow.  Does anyone know of a 
JIRA where I can read more about where the recommendations came from?  Just 
curious to see how the recommendations arose.

> SOLR allows creation of collections with invalid names
> --
>
> Key: SOLR-8642
> URL: https://issues.apache.org/jira/browse/SOLR-8642
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: master
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 5.5, master
>
> Attachments: SOLR-8642.patch, SOLR-8642.patch, SOLR-8642.patch, 
> SOLR-8642.patch
>
>
> Some of my colleagues and I recently noticed that the CREATECOLLECTION API 
> will create a collection even when invalid characters are present in the name.
> For example, consider the following reproduction case, which involves 
> creating a collection with a space in its name:
> {code}
> $ 
> $ bin/solr start -e cloud -noprompt
> ...
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=getting+started&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted";
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Transfer-Encoding: chunked
> 
> <response>
> <lst name="responseHeader"><int name="status">0</int><int name="QTime">299</int></lst>
> <lst name="failure">
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica2': Unable to create core [getting 
> started_shard2_replica2] Caused by: Invalid core name: 'getting 
> started_shard2_replica2' Names must consist entirely of periods, underscores 
> and alphanumerics
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard2_replica1': Unable to create core [getting 
> started_shard2_replica1] Caused by: Invalid core name: 'getting 
> started_shard2_replica1' Names must consist entirely of periods, underscores 
> and alphanumerics
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:7574/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica1': Unable to create core [getting 
> started_shard1_replica1] Caused by: Invalid core name: 'getting 
> started_shard1_replica1' Names must consist entirely of periods, underscores 
> and alphanumerics
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 'getting 
> started_shard1_replica2': Unable to create core [getting 
> started_shard1_replica2] Caused by: Invalid core name: 'getting 
> started_shard1_replica2' Names must consist entirely of periods, underscores 
> and alphanumerics
> </lst>
> </response>
> 
> $ 
> $ curl -i -l -k -X GET 
> "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json&indent=true";
> HTTP/1.1 200 OK
> Content-Type: application/json; charset=UTF-8
> Transfer-Encoding: chunked
> {
>   "responseHeader":{
> "status":0,
> "QTime":6},
>   "cluster":{
> "collections":{
>  ...
>   "getting started":{
> "replicationFactor":"2",
> "shards":{
>   "shard1":{
> "range":"8000-",
> "state":"active",
> "replicas":{}},
>   "shard2":{
> "range":"0-7fff",
> "state":"active",
> "replicas":{}}},
> "router":{"name":"compositeId"},
> "maxShardsPerNode":"2",
> "autoAddReplicas":"false",
> "znodeVersion":1,
> "configName":"gettingstarted"},
> "live_nodes":["127.0.1.1:8983_solr",
>   "127.0.1.1:7574_solr"]}}
> {code}
> The commands/responses above suggest that Solr creates the collection without 
> checking the name.  It then goes on to create the cores for the collection, 
> which fails and returns the error seen above.
> I verified this by doing a {{curl -i -l -k 
> "http://localhost

RE: 5.5.0 release branch cut

2016-02-10 Thread Uwe Schindler
Thanks Mike!

If any scripts below dev-tools need to be changed for git-related reasons, could 
you please mention “LUCENE-6938” in the commit messages, so that we can easily 
cherry-pick the changes to older branches if needed. I did this for my commits, 
too (mentioning all relevant issue numbers), but “LUCENE-6938” is the main one. 
This makes it easy to cherry-pick all the commits (it is what we did with the 
5.4 branch).

The smoker should already work as usual if you use the commit hash instead of 
the revision number in the release URLs (I fixed the JAR META-INF folder to 
handle that correctly). Maybe we just have to change the "rX" prefix to a plain 
hash in some places.

Of course, pushing the release to the dist.apache.org web server still uses 
SVN; this did NOT change! The same applies to the web page. So I don't think 
there is much to change in dev-tools (I hope).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Wednesday, February 10, 2016 10:48 PM
> To: Lucene/Solr dev 
> Subject: 5.5.0 release branch cut
> 
> I cut the branch (branch_5_5).
> 
> Please back-port any blocker issues to it, and please don't push any
> non-blocker changes.
> 
> I'll now try to wrestle cutting over release scripts to git :)
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org





5.5.0 release branch cut

2016-02-10 Thread Michael McCandless
I cut the branch (branch_5_5).

Please back-port any blocker issues to it, and please don't push any
non-blocker changes.

I'll now try to wrestle cutting over release scripts to git :)

Mike McCandless

http://blog.mikemccandless.com




[jira] [Updated] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8621:
--
Priority: Major  (was: Blocker)

> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work left-to-do summary:+
>  * {color:red}WrapperMergePolicyFactory setter logic tweak/mini-bug (and test 
> case){color} - Christine
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/server/solr/configsets
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.
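For context, the migration this issue describes amounts to swapping the old inline merge-policy element for a factory-based one in solrconfig.xml. A minimal sketch only — the {{TieredMergePolicyFactory}} class and parameter names shown here are assumptions based on the new factory API and may differ from the committed code:

```xml
<!-- deprecated (pre-5.5) style, still honoured for now: -->
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
</mergePolicy>

<!-- factory-based style introduced in 5.5: -->
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
</mergePolicyFactory>
```

The factory indirection is what makes wrapping/nested merge policies configurable, which the inline element could not express.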






[jira] [Updated] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8621:
--
Description: 
*end-user benefits:*
* Lucene's UpgradeIndexMergePolicy can be configured in Solr
* (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
* customisability: arbitrary merge policies including wrapping/nested merge 
policies can be created and configured

*(proposed) roadmap:*
* solr 5.5 introduces <mergePolicyFactory> support
* solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
* solr 6.0(\?) removes <mergePolicy> support 

+work left-to-do summary:+
 * {color:red}WrapperMergePolicyFactory setter logic tweak/mini-bug (and test 
case){color} - Christine
 * Solr Reference Guide changes (directly in Confluence?)
 * changes to remaining solrconfig.xml
 ** solr/core/src/test-files/solr/collection1/conf - Christine
 ** solr/server/solr/configsets

+open question:+
 * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on Feb 
1st. The code as-is permits mergePolicy irrespective of luceneMatchVersion, I 
think.

  was:
*end-user benefits:*
* Lucene's UpgradeIndexMergePolicy can be configured in Solr
* (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
* customisability: arbitrary merge policies including wrapping/nested merge 
policies can be created and configured

*(proposed) roadmap:*
* solr 5.5 introduces <mergePolicyFactory> support
* solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
* solr 6.0(\?) removes <mergePolicy> support 

+work-in-progress summary:+
 * main code changes have been committed to master and branch_5x
 * {color:red}further small code change required:{color} MergePolicyFactory 
constructor or MergePolicyFactory.getMergePolicy method to take IndexSchema 
argument (e.g. for use by SortingMergePolicyFactory being added under related 
SOLR-5730)
 * Solr Reference Guide changes (directly in Confluence?)
 * changes to remaining solrconfig.xml
 ** solr/core/src/test-files/solr/collection1/conf - Christine
 ** solr/contrib
 ** solr/example

+open question:+
 * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on Feb 
1st. The code as-is permits mergePolicy irrespective of luceneMatchVersion, I 
think.


> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work left-to-do summary:+
>  * {color:red}WrapperMergePolicyFactory setter logic tweak/mini-bug (and test 
> case){color} - Christine
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/server/solr/configsets
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






[jira] [Updated] (SOLR-8671) Date statistics: make "sum" a double instead of a long/date

2016-02-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-8671:

Fix Version/s: master

> Date statistics: make "sum" a double instead of a long/date
> ---
>
> Key: SOLR-8671
> URL: https://issues.apache.org/jira/browse/SOLR-8671
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Fix For: master
>
>
> Currently {{DateStatsValues#sum}} is defined as long, and returned as a date. 
> This has two problems: It overflows (with ~6 million values), and the return 
> value can be a date like {{122366-06-12T21:06:06Z}}. 
> I think we should just change this stat to a double. See SOLR-8420.
> I think we can change this only in master, since it will break backward 
> compatibility.






[jira] [Commented] (SOLR-8420) Date statistics: sumOfSquares overflows long

2016-02-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141720#comment-15141720
 ] 

Tomás Fernández Löbbe commented on SOLR-8420:
-

I created SOLR-8671 for this particular change

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, StdDev.java
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> StatsValuesFactory.java, line 755 DateStatsValues#updateTypeSpecificStats Add 
> a cast to double 
> sumOfSquares += ( (double)value * value * count);
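The overflow being fixed here is easy to demonstrate: a 2016-era epoch-millisecond timestamp squared already exceeds {{Long.MAX_VALUE}}, which is why the patch widens to double before multiplying. A small standalone sketch (the values are illustrative):

```java
public class DateSumOverflowDemo {
    public static void main(String[] args) {
        long value = 1_455_000_000_000L; // epoch millis for a date in Feb 2016

        // Squaring in long arithmetic silently wraps: the true square (~2.1e24)
        // is far beyond Long.MAX_VALUE (~9.2e18).
        long asLong = value * value;

        // The patch's approach: cast to double before multiplying, so the
        // product is computed in floating point and cannot overflow here.
        double asDouble = (double) value * value;

        System.out.println(asDouble > (double) Long.MAX_VALUE); // true: no long can hold it
        System.out.println((double) asLong == asDouble);        // false: the long result is garbage
    }
}
```

The same widening applies to the running {{sumOfSquares}} accumulator, which is why the patch declares it as a double rather than casting the final sum.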






[jira] [Created] (SOLR-8671) Date statistics: make "sum" a double instead of a long/date

2016-02-10 Thread JIRA
Tomás Fernández Löbbe created SOLR-8671:
---

 Summary: Date statistics: make "sum" a double instead of a 
long/date
 Key: SOLR-8671
 URL: https://issues.apache.org/jira/browse/SOLR-8671
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe


Currently {{DateStatsValues#sum}} is defined as long, and returned as a date. 
This has two problems: It overflows (with ~6 million values), and the return 
value can be a date like {{122366-06-12T21:06:06Z}}. I 
think we should just change this stat to a double. See SOLR-8420.
I think we can change this only in master, since it will break backward 
compatibility.






[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2016-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Component/s: SolrCloud

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.5, master
>
> Attachments: SOLR-8371-2.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, SOLR-8371.patch, 
> SOLR-8371.patch, SOLR-8371.patch
>
>







[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141711#comment-15141711
 ] 

ASF subversion and git services commented on SOLR-8621:
---

Commit bbbc90f58b2f336c4c51f4844cd0f63121c76ccf in lucene-solr's branch 
refs/heads/branch_5x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bbbc90f ]

SOLR-8621: fix mergePolicyFacory vs. mergePolicyFactory typos in comments in 
solr/contrib and solr/example solrconfig.xml files.


> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work-in-progress summary:+
>  * main code changes have been committed to master and branch_5x
>  * {color:red}further small code change required:{color} MergePolicyFactory 
> constructor or MergePolicyFactory.getMergePolicy method to take IndexSchema 
> argument (e.g. for use by SortingMergePolicyFactory being added under related 
> SOLR-5730)
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/contrib
>  ** solr/example
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141709#comment-15141709
 ] 

ASF subversion and git services commented on SOLR-8621:
---

Commit 588e3ff0842a5d021cff09aa72d94b0b5de45ca9 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=588e3ff ]

SOLR-8621: fix mergePolicyFacory vs. mergePolicyFactory typos in comments in 
solr/contrib and solr/example solrconfig.xml files.


> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work-in-progress summary:+
>  * main code changes have been committed to master and branch_5x
>  * {color:red}further small code change required:{color} MergePolicyFactory 
> constructor or MergePolicyFactory.getMergePolicy method to take IndexSchema 
> argument (e.g. for use by SortingMergePolicyFactory being added under related 
> SOLR-5730)
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/contrib
>  ** solr/example
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






Re: 5.4.2 Bug Fix Branch?

2016-02-10 Thread Nicholas Knize
Good idea. I'll leave the branch but not proceed with an official 5.4.2
release.

On Wed, Feb 10, 2016 at 3:00 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Thanks Nick.
>
> I don't think you should delete the branch?  If disaster strikes and
> somehow we need to cut a branch, the already back-ported fixes are
> there...
>
> I'll cut the 5.5 branch now.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Wed, Feb 10, 2016 at 3:58 PM, Nicholas Knize  wrote:
> > To a point made earlier in the thread the 5.5 release is right around the
> > corner. So I decided against complicating things with a 5.4.2 release.
> > Unless there are any objections I'll go ahead and delete branch_5_4.
> >
> > On Mon, Feb 8, 2016 at 5:18 PM, Nicholas Knize  wrote:
> >>
> >> > Maybe once the release has proven to work, we should attach the
> complete
> >> > DIFF to the issue.
> >>
> >> +1
> >>
> >> On Mon, Feb 8, 2016 at 5:15 PM, Uwe Schindler  wrote:
> >>>
> >>> Thanks, I had done the same a minute ago. Results look identical: 7
> >>> commits
> >>>
> >>>
> >>>
> >>> If we need to further fix the release script / smoker, we should ALWAYS
> >>> mention “LUCENE-6938” in the commit messages, so we can easily
> backport the
> >>> changes to other branches (if needed). Smoker should work as usual if
> you
> >>> use commit hash instead of revision number.
> >>>
> >>>
> >>>
> >>> Maybe once the release has proven to work, we should attach the
> complete
> >>> DIFF to the issue.
> >>>
> >>>
> >>>
> >>> Thanks,
> >>>
> >>> Uwe
> >>>
> >>>
> >>>
> >>> -
> >>>
> >>> Uwe Schindler
> >>>
> >>> H.-H.-Meier-Allee 63, D-28213 Bremen
> >>>
> >>> http://www.thetaphi.de
> >>>
> >>> eMail: u...@thetaphi.de
> >>>
> >>>
> >>>
> >>> From: Nicholas Knize [mailto:nkn...@gmail.com]
> >>> Sent: Tuesday, February 09, 2016 12:07 AM
> >>>
> >>>
> >>> To: Lucene/Solr dev 
> >>> Subject: Re: 5.4.2 Bug Fix Branch?
> >>>
> >>>
> >>>
> >>> Hi Uwe, I had already created and pushed branch_5_4. I've got those 7
> >>> commits cherry-picked and ready for push if you want to review?
> >>>
> >>>
> >>>
> >>> On Mon, Feb 8, 2016 at 4:51 PM, Uwe Schindler  wrote:
> >>>
> >>> Hi,
> >>>
> >>> in any case, you have to cherry-pick the series of commits from
> branch_5x
> >>> which were added to setup the new GIT repo to fix the build. Namely
> these
> >>> are commits with "LUCENE-6938" in the name (very simple indeed).
> >>>
> >>> I can do that in a minute, ok?
> >>>
> >>> Here are the commits:
> >>>

Re: 5.4.2 Bug Fix Branch?

2016-02-10 Thread Michael McCandless
Thanks Nick.

I don't think you should delete the branch?  If disaster strikes and
somehow we need to cut a branch, the already back-ported fixes are
there...

I'll cut the 5.5 branch now.

Mike McCandless

http://blog.mikemccandless.com

On Wed, Feb 10, 2016 at 3:58 PM, Nicholas Knize  wrote:
> To a point made earlier in the thread the 5.5 release is right around the
> corner. So I decided against complicating things with a 5.4.2 release.
> Unless there are any objections I'll go ahead and delete branch_5_4.
>
> On Mon, Feb 8, 2016 at 5:18 PM, Nicholas Knize  wrote:
>>
>> > Maybe once the release has proven to work, we should attach the complete
>> > DIFF to the issue.
>>
>> +1
>>
>> On Mon, Feb 8, 2016 at 5:15 PM, Uwe Schindler  wrote:
>>>
>>> Thanks, I had done the same a minute ago. Results look identical: 7
>>> commits
>>>
>>>
>>>
>>> If we need to further fix the release script / smoker, we should ALWAYS
>>> mention “LUCENE-6938” in the commit messages, so we can easily backport the
>>> changes to other branches (if needed). Smoker should work as usual if you
>>> use commit hash instead of revision number.
>>>
>>>
>>>
>>> Maybe once the release has proven to work, we should attach the complete
>>> DIFF to the issue.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Uwe
>>>
>>>
>>>
>>> -
>>>
>>> Uwe Schindler
>>>
>>> H.-H.-Meier-Allee 63, D-28213 Bremen
>>>
>>> http://www.thetaphi.de
>>>
>>> eMail: u...@thetaphi.de
>>>
>>>
>>>
>>> From: Nicholas Knize [mailto:nkn...@gmail.com]
>>> Sent: Tuesday, February 09, 2016 12:07 AM
>>>
>>>
>>> To: Lucene/Solr dev 
>>> Subject: Re: 5.4.2 Bug Fix Branch?
>>>
>>>
>>>
>>> Hi Uwe, I had already created and pushed branch_5_4. I've got those 7
>>> commits cherry-picked and ready for push if you want to review?
>>>
>>>
>>>
>>> On Mon, Feb 8, 2016 at 4:51 PM, Uwe Schindler  wrote:
>>>
>>> Hi,
>>>
>>> in any case, you have to cherry-pick the series of commits from branch_5x
>>> which were added to setup the new GIT repo to fix the build. Namely these
>>> are commits with "LUCENE-6938" in the name (very simple indeed).
>>>
>>> I can do that in a minute, ok?
>>>
>>> Here are the commits:
>>>
>>> Revision: 424a647af4d093915108221bcd4390989303b426
>>> Author: Uwe Schindler 
>>> Date: 26.01.2016 22:06:35
>>> Message:
>>> LUCENE-6995, LUCENE-6938: Add branch change trigger to common-build.xml
>>> to keep sane build on GIT branch change
>>>
>>>
>>> 
>>> Modified: lucene/common-build.xml
>>>
>>> Revision: 9d35aafc565a880c5cae7c21fa6c10fbdd0399ec
>>> Author: Uwe Schindler 
>>> Date: 24.01.2016 22:05:38
>>> Message:
>>> LUCENE-6938: Add WC checks back, now based on JGit
>>>
>>>
>>> 
>>> Modified: build.xml
>>>
>>> Revision: b18d2b333035245cd9edac55d4ca5e6b5b0759e6
>>> Author: Uwe Schindler 
>>> Date: 24.01.2016 00:03:25
>>> Message:
>>> LUCENE-6938: Improve output of Git Hash if no GIT available or no GIT
>>> checkout (this restores previous behaviour)
>>>
>>>
>>> 
>>> Modified: lucene/common-build.xml
>>>
>>> Revision: 7f41c65ae994d3962de6a2abf4e82e54e4b80502
>>> Author: Steve Rowe 
>>> Date: 23.01.2016 22:29:28
>>> Message:
>>> LUCENE-6938: Maven build: Switch SCM descriptors from svn to git;
>>> buildnumber-maven-plugin's buildNumberPropertyName property (used in
>>> Maven-built artifact manifests) renamed from svn.revision to checkoutid;
>>> removed Subversion-specific stuff from README.maven
>>>
>>>
>>> 
>>> Modified: dev-tools/maven/README.maven
>>> Modified: dev-tools/maven/lucene/analysis/common/pom.xml.template
>>> Modified: dev-tools/maven/lucene/analysis/icu/pom.xml.template
>>> Modified: dev-tools/maven/lucene/analysis/kuromoji/pom.xml.template
>>> Modified: dev-tools/maven/lucene/analysis/morfologik/pom.xml.template
>>> Modified: dev-tools/maven/lucene/analysis/phonetic/pom.xml.template
>>> Modified: dev-tools/maven/lucene/analysis/smartcn/pom.xml.template
>>> Modified: dev-tools/maven/lucene/analysis/stempel/pom.xml.template
>>> Modified: dev-tools/maven/lucene/analysis/uima/pom.xml.template
>>> Modified: dev-tools/maven/lucene/backward-codecs/pom.xml.template
>>> Modified: dev-tools/maven/lucene/benchmark/pom.xml.template
>>> Modified: dev-tools/maven/lucene/classification/pom.xml.template
>>> Modified: dev-tools/maven/lucene/codecs/src/java/pom.xml.template
>>> Modified: dev-tools/maven/lucene/core/src/java/pom.xml.template
>>> Modified: dev-tools/maven/lucene/demo/pom.xml.template
>>> Modified: dev-tools/maven/lucene/facet/pom.xml.template
>>> Modified: dev-tools/maven/lucene/grouping/pom.xml.template
>>> Modified: dev-tools/maven/lucene/highlighter/pom.xml.template
>>> Modified: dev-tools/maven/lucene/join/pom.xml.template
>>> Modified: dev-tools/maven/lucene/memory/pom.xml.template
>>> Modified: dev-tools/maven/lucene/misc/pom.xml.template
>>> Modified: dev-tools/maven/lucene/pom.xml.template
>>> Modified: dev-tools/maven/lucene/queries/pom.xml.template
>>> Modified: dev-tools/maven/lucene/queryparser/pom.xml.template
>>> Modified: dev-tools/maven/lucene/repli

Re: 5.4.2 Bug Fix Branch?

2016-02-10 Thread Nicholas Knize
To a point made earlier in the thread the 5.5 release is right around the
corner. So I decided against complicating things with a 5.4.2 release.
Unless there are any objections I'll go ahead and delete branch_5_4.

On Mon, Feb 8, 2016 at 5:18 PM, Nicholas Knize  wrote:

> > Maybe once the release has proven to work, we should attach the
> complete DIFF to the issue.
>
> +1
>
> On Mon, Feb 8, 2016 at 5:15 PM, Uwe Schindler  wrote:
>
>> Thanks, I had done the same a minute ago. Results look identical: 7
>> commits
>>
>>
>>
>> If we need to further fix the release script / smoker, we should ALWAYS
>> mention “LUCENE-6938” in the commit messages, so we can easily backport the
>> changes to other branches (if needed). Smoker should work as usual if you
>> use commit hash instead of revision number.
>>
>>
>>
>> Maybe once the release has proven to work, we should attach the complete
>> DIFF to the issue.
>>
>>
>>
>> Thanks,
>>
>> Uwe
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> H.-H.-Meier-Allee 63, D-28213 Bremen
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> *From:* Nicholas Knize [mailto:nkn...@gmail.com]
>> *Sent:* Tuesday, February 09, 2016 12:07 AM
>>
>> *To:* Lucene/Solr dev 
>> *Subject:* Re: 5.4.2 Bug Fix Branch?
>>
>>
>>
>> Hi Uwe, I had already created and pushed branch_5_4. I've got those 7
>> commits cherry-picked and ready for push if you want to review?
>>
>>
>>
>> On Mon, Feb 8, 2016 at 4:51 PM, Uwe Schindler  wrote:
>>
>> Hi,
>>
>> in any case, you have to cherry-pick the series of commits from branch_5x
>> which were added to setup the new GIT repo to fix the build. Namely these
>> are commits with "LUCENE-6938" in the name (very simple indeed).
>>
>> I can do that in a minute, ok?
>>
>> Here are the commits:
>>
>> Revision: 424a647af4d093915108221bcd4390989303b426
>> Author: Uwe Schindler 
>> Date: 26.01.2016 22:06:35
>> Message:
>> LUCENE-6995, LUCENE-6938: Add branch change trigger to common-build.xml
>> to keep sane build on GIT branch change
>>
>>
>> 
>> Modified: lucene/common-build.xml
>>
>> Revision: 9d35aafc565a880c5cae7c21fa6c10fbdd0399ec
>> Author: Uwe Schindler 
>> Date: 24.01.2016 22:05:38
>> Message:
>> LUCENE-6938: Add WC checks back, now based on JGit
>>
>>
>> 
>> Modified: build.xml
>>
>> Revision: b18d2b333035245cd9edac55d4ca5e6b5b0759e6
>> Author: Uwe Schindler 
>> Date: 24.01.2016 00:03:25
>> Message:
>> LUCENE-6938: Improve output of Git Hash if no GIT available or no GIT
>> checkout (this restores previous behaviour)
>>
>>
>> 
>> Modified: lucene/common-build.xml
>>
>> Revision: 7f41c65ae994d3962de6a2abf4e82e54e4b80502
>> Author: Steve Rowe 
>> Date: 23.01.2016 22:29:28
>> Message:
>> LUCENE-6938: Maven build: Switch SCM descriptors from svn to git;
>> buildnumber-maven-plugin's buildNumberPropertyName property (used in
>> Maven-built artifact manifests) renamed from svn.revision to checkoutid;
>> removed Subversion-specific stuff from README.maven
>>
>>
>> 
>> Modified: dev-tools/maven/README.maven
>> Modified: dev-tools/maven/lucene/analysis/common/pom.xml.template
>> Modified: dev-tools/maven/lucene/analysis/icu/pom.xml.template
>> Modified: dev-tools/maven/lucene/analysis/kuromoji/pom.xml.template
>> Modified: dev-tools/maven/lucene/analysis/morfologik/pom.xml.template
>> Modified: dev-tools/maven/lucene/analysis/phonetic/pom.xml.template
>> Modified: dev-tools/maven/lucene/analysis/smartcn/pom.xml.template
>> Modified: dev-tools/maven/lucene/analysis/stempel/pom.xml.template
>> Modified: dev-tools/maven/lucene/analysis/uima/pom.xml.template
>> Modified: dev-tools/maven/lucene/backward-codecs/pom.xml.template
>> Modified: dev-tools/maven/lucene/benchmark/pom.xml.template
>> Modified: dev-tools/maven/lucene/classification/pom.xml.template
>> Modified: dev-tools/maven/lucene/codecs/src/java/pom.xml.template
>> Modified: dev-tools/maven/lucene/core/src/java/pom.xml.template
>> Modified: dev-tools/maven/lucene/demo/pom.xml.template
>> Modified: dev-tools/maven/lucene/facet/pom.xml.template
>> Modified: dev-tools/maven/lucene/grouping/pom.xml.template
>> Modified: dev-tools/maven/lucene/highlighter/pom.xml.template
>> Modified: dev-tools/maven/lucene/join/pom.xml.template
>> Modified: dev-tools/maven/lucene/memory/pom.xml.template
>> Modified: dev-tools/maven/lucene/misc/pom.xml.template
>> Modified: dev-tools/maven/lucene/pom.xml.template
>> Modified: dev-tools/maven/lucene/queries/pom.xml.template
>> Modified: dev-tools/maven/lucene/queryparser/pom.xml.template
>> Modified: dev-tools/maven/lucene/replicator/pom.xml.template
>> Modified: dev-tools/maven/lucene/sandbox/pom.xml.template
>> Modified: dev-tools/maven/lucene/suggest/pom.xml.template
>> Modified: dev-tools/maven/lucene/test-framework/pom.xml.template
>> Modified: dev-tools/maven/pom.xml.template
>> Modified: dev-tools/maven/solr/contrib/analysis-extras/pom.xml.template
>> Modified: dev-tools/maven/solr/contrib/analytics/pom.xml.template
>> Modified: dev-tools/maven/solr/con
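
The backport procedure Uwe and Nick walk through above — list every branch_5x commit whose message mentions LUCENE-6938, then cherry-pick them onto branch_5_4, oldest first — can be sketched as a small script. This is an illustrative sketch, not the project's release tooling: the helper name `commits_mentioning` and the throwaway demo repository are invented here (it does not touch a real lucene-solr checkout), and it assumes a `git` binary is on the PATH.

```python
import os
import subprocess
import tempfile

def commits_mentioning(repo, key):
    """Return (hash, subject) pairs whose message mentions `key`,
    oldest first -- i.e. the order you would cherry-pick them in."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--reverse", "--grep", key,
         "--pretty=%H %s"],
        capture_output=True, text=True, check=True).stdout
    return [line.split(" ", 1) for line in out.splitlines()]

# Demo against a throwaway repository instead of a real checkout.
repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", repo], check=True)
env = {**os.environ,
       "GIT_AUTHOR_NAME": "demo", "GIT_AUTHOR_EMAIL": "demo@example.com",
       "GIT_COMMITTER_NAME": "demo", "GIT_COMMITTER_EMAIL": "demo@example.com"}
for msg in ["LUCENE-6938: Add WC checks back, now based on JGit",
            "unrelated change"]:
    subprocess.run(["git", "-C", repo, "commit", "-q", "--allow-empty",
                    "-m", msg], env=env, check=True)

picks = commits_mentioning(repo, "LUCENE-6938")
print([subject for _, subject in picks])  # only the LUCENE-6938 subject
# On the release branch this would be followed by:
#   git cherry-pick <each listed hash, oldest first>
```

Against the real repository, the equivalent one-liner would be `git log branch_5x --oneline --grep='LUCENE-6938'`, followed by cherry-picking the listed hashes onto branch_5_4.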

[jira] [Updated] (SOLR-8669) Non binary responses use chunked encoding because we flush the outputstream early.

2016-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8669:
--
Issue Type: Improvement  (was: Bug)

> Non binary responses use chunked encoding because we flush the outputstream 
> early.
> --
>
> Key: SOLR-8669
> URL: https://issues.apache.org/jira/browse/SOLR-8669
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-8669.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
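
For context on the issue title above: in HTTP/1.1, once a server flushes part of a response before the total body length is known, it can no longer send a Content-Length header and must fall back to Transfer-Encoding: chunked, where every flushed piece carries its own size. The sketch below (plain Python, not Solr or Jetty code; `chunk` and `dechunk` are made-up helper names) shows that framing:

```python
def chunk(parts):
    """Frame already-flushed body pieces as an HTTP/1.1 chunked body:
    each piece is prefixed with its size in hex, and a zero-length
    chunk terminates the stream."""
    out = b""
    for p in parts:
        out += b"%x\r\n%s\r\n" % (len(p), p)
    return out + b"0\r\n\r\n"

def dechunk(data):
    """Reassemble the original body from chunked framing."""
    body, i = b"", 0
    while True:
        j = data.index(b"\r\n", i)
        size = int(data[i:j], 16)
        if size == 0:
            return body
        body += data[j + 2 : j + 2 + size]
        i = j + 2 + size + 2  # skip the chunk data and its trailing CRLF

# Two separate flushes of a (non-binary) response body:
parts = [b'{"responseHeader":', b'{"status":0}}']
wire = chunk(parts)
assert dechunk(wire) == b"".join(parts)
```

Buffering the full response before the first flush is what lets a server compute Content-Length and skip the chunked framing entirely.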



[jira] [Commented] (SOLR-8578) Successful or not, requests are not always fully consumed by Solrj clients and we count on HttpClient or the JVM.

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141611#comment-15141611
 ] 

ASF subversion and git services commented on SOLR-8578:
---

Commit a8bc427aac85d600e1abee28bb373f428c08c7ae in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a8bc427 ]

SOLR-8578: Successful or not, requests are not always fully consumed by Solrj 
clients and we count on HttpClient or the JVM.


> Successful or not, requests are not always fully consumed by Solrj clients 
> and we count on HttpClient or the JVM.
> -
>
> Key: SOLR-8578
> URL: https://issues.apache.org/jira/browse/SOLR-8578
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8578.patch, SOLR-8578.patch
>
>
> Does not seem to happen with XML response parser.
> Not the largest deal because HttpClient appears to consume unread bytes from 
> the stream for us, but something seems off.






[jira] [Updated] (SOLR-8578) Successful or not, requests are not always fully consumed by Solrj clients and we count on HttpClient or the JVM.

2016-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8578:
--
Attachment: SOLR-8578.patch

> Successful or not, requests are not always fully consumed by Solrj clients 
> and we count on HttpClient or the JVM.
> -
>
> Key: SOLR-8578
> URL: https://issues.apache.org/jira/browse/SOLR-8578
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8578.patch, SOLR-8578.patch
>
>
> Does not seem to happen with XML response parser.
> Not the largest deal because HttpClient appears to consume unread bytes from 
> the stream for us, but something seems off.






[jira] [Updated] (SOLR-8578) Successful or not, requests are not always fully consumed by Solrj clients and we count on HttpClient or the JVM.

2016-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8578:
--
Summary: Successful or not, requests are not always fully consumed by Solrj 
clients and we count on HttpClient or the JVM.  (was: Successful or not, 
requests are not fully consumed by Solrj clients and we count on HttpClient or 
the JVM.)

> Successful or not, requests are not always fully consumed by Solrj clients 
> and we count on HttpClient or the JVM.
> -
>
> Key: SOLR-8578
> URL: https://issues.apache.org/jira/browse/SOLR-8578
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8578.patch
>
>
> Does not seem to happen with XML response parser.
> Not the largest deal because HttpClient appears to consume unread bytes from 
> the stream for us, but something seems off.






[jira] [Commented] (SOLR-8416) The collections create API should return after all replicas are active.

2016-02-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141566#comment-15141566
 ] 

Mark Miller commented on SOLR-8416:
---

bq. Are they pointing to the same exception? 
rsp.getException happens when the response is not a 200 OK - but the 
collections API works a little differently in that sometimes it will put 
failures and exceptions in a call that returns 200 OK. We just check both cases.

bq. Also there seems a typo error that return is not included.

Not waiting for individual replicas that failed to create is left as a TODO 
there; we don't want to do anything about it yet.

I also moved the waiting code to the CollectionsHandler. I think it's more 
efficient and 'safer' to pull the waiting out of the Overseer processing.

I think we can commit this as a good start and use further JIRAs to make any 
improvements.

> The collections create API should return after all replicas are active. 
> 
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
>Assignee: Mark Miller
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch, 
> SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API waits for all cores to become alive and returns after that.



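
The "wait for all replicas to become active" behavior discussed above boils down to polling cluster state with a timeout. The sketch below is a generic poll-until-true helper, not Solr's CollectionsHandler code; `wait_for`, `all_active`, and the simulated replica state are invented for illustration.

```python
import time

def wait_for(predicate, timeout_s=30.0, interval_s=0.25):
    """Poll `predicate` until it returns True or the timeout elapses.
    Returns False on timeout, so the caller decides how to react
    (echoing the 'stragglers are a TODO' note above)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return bool(predicate())  # one last check at the deadline

# Demo: three simulated "replicas" all report active after ~0.3 s.
state = {"active": 0, "needed": 3}
start = time.monotonic()

def all_active():
    if time.monotonic() - start > 0.3:
        state["active"] = state["needed"]
    return state["active"] == state["needed"]

print(wait_for(all_active, timeout_s=5.0))  # prints True once they come up
```

The same shape works whether the waiting lives in the Overseer or in the request handler; only the predicate (here a toy dict, in Solr a cluster-state read) changes.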



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 15537 - Failure!

2016-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15537/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([68B83D687D84F5F:B800E579FEA2416A]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration(ClusterStateUpdateTest.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11503 lines...]
   [junit4] Suite: org.apache.solr.cloud.ClusterStateUpdateTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ClusterStateUpdateTest_68B83D687D84F5F-001/init-core-data-001
   [junit4] 

[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI

2016-02-10 Thread Mary Jo Sminkey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141506#comment-15141506
 ] 

Mary Jo Sminkey commented on SOLR-4146:
---

Drat... not for long apparently. Can't access the UI again due to this. 

> Error handling 'status' action, cannot access GUI
> -
>
> Key: SOLR-4146
> URL: https://issues.apache.org/jira/browse/SOLR-4146
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: master
>Reporter: Markus Jelsma
> Fix For: master
>
> Attachments: solr.png
>
>
> We sometimes see a node not responding to GUI requests. It then generates the 
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : 
> org.apache.solr.common.SolrException: Error handling 'status' action 
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: 
> java.util.concurrent.RejectedExecutionException
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:997)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:711)
> ... 18 more
> Caused by: java.util.concurrent.RejectedExecutionException
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1605)
> ... 22 more
> {code}






[jira] [Comment Edited] (SOLR-4146) Error handling 'status' action, cannot access GUI

2016-02-10 Thread Mary Jo Sminkey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141506#comment-15141506
 ] 

Mary Jo Sminkey edited comment on SOLR-4146 at 2/10/16 7:42 PM:


Drat... not for long apparently. Can't access the UI again due to this. We had 
been running on 5.4.0 for a month or so without seeing these errors, so this 
version of it may have been introduced in 5.5. 


was (Author: mjsminkey):
Drat... not for long apparently. Can't access the UI again due to this. 

> Error handling 'status' action, cannot access GUI
> -
>
> Key: SOLR-4146
> URL: https://issues.apache.org/jira/browse/SOLR-4146
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: master
>Reporter: Markus Jelsma
> Fix For: master
>
> Attachments: solr.png
>
>
> We sometimes see a node not responding to GUI requests. It then generates the 
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : 
> org.apache.solr.common.SolrException: Error handling 'status' action 
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: 
> java.util.concurrent.RejectedExecutionException
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:997)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:711)
> ... 18 more
> Caused by: java.util.concurrent.RejectedExecutionException
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1605)
> ... 22 more
> {code}






[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI

2016-02-10 Thread Mary Jo Sminkey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141512#comment-15141512
 ] 

Mary Jo Sminkey commented on SOLR-4146:
---

I also noticed the missing file has changed: 
null:java.nio.file.NoSuchFileException: 
/opt/solr-5.5.0-1057/server/solr/classic_search/data/index/segments_kl4

> Error handling 'status' action, cannot access GUI
> -
>
> Key: SOLR-4146
> URL: https://issues.apache.org/jira/browse/SOLR-4146
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: master
>Reporter: Markus Jelsma
> Fix For: master
>
> Attachments: solr.png
>
>
> We sometimes see a node not responding to GUI requests. It then generates the 
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : 
> org.apache.solr.common.SolrException: Error handling 'status' action 
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: 
> java.util.concurrent.RejectedExecutionException
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:997)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:711)
> ... 18 more
> Caused by: java.util.concurrent.RejectedExecutionException
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1605)
> ... 22 more
> {code}






[jira] [Commented] (SOLR-8416) The collections create API should return after all replicas are active.

2016-02-10 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141508#comment-15141508
 ] 

Michael Sun commented on SOLR-8416:
---

Ah yes, it makes sense to skip waiting for replicas to become alive for async 
calls or in cases where there is a failure. 

One question: the patch uses both rsp.getException() and 
response.getResponse().get("exception"). Do they point to the same 
exception? Thanks.

Also, there seems to be a typo: the return statement is missing.
{code}
if (response.getResponse().get("failure") != null) {
  // TODO: we should not wait for Replicas we know failed
}
{code}
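For reference, a self-contained sketch of the intended control flow with the missing return in place might look like the following (the map-based response and the method name are illustrative stand-ins, not the code from the patch):

```java
import java.util.HashMap;
import java.util.Map;

public class FailureCheck {
    // Hypothetical stand-in for response.getResponse(): a plain map.
    static boolean shouldWaitForReplicas(Map<String, Object> response) {
        if (response.get("failure") != null) {
            // The missing 'return' noted above: without it, execution
            // falls through and the caller still waits on failed replicas.
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Object> failed = new HashMap<>();
        failed.put("failure", "replica creation error");
        System.out.println(shouldWaitForReplicas(failed));          // false
        System.out.println(shouldWaitForReplicas(new HashMap<>())); // true
    }
}
```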

> The collections create API should return after all replicas are active. 
> 
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
>Assignee: Mark Miller
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch, 
> SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In 
> a large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears unstable 
> and can return failures. Therefore it's better that the collection creation 
> API waits for all cores to become alive and returns after that.






[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI

2016-02-10 Thread Mary Jo Sminkey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141400#comment-15141400
 ] 

Mary Jo Sminkey commented on SOLR-4146:
---

In our case, a restart of the server cleared it up. 

> Error handling 'status' action, cannot access GUI
> -
>
> Key: SOLR-4146
> URL: https://issues.apache.org/jira/browse/SOLR-4146
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: master
>Reporter: Markus Jelsma
> Fix For: master
>
> Attachments: solr.png
>
>
> We sometimes see a node not responding to GUI requests. It then generates the 
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : 
> org.apache.solr.common.SolrException: Error handling 'status' action 
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: 
> java.util.concurrent.RejectedExecutionException
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:997)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:711)
> ... 18 more
> Caused by: java.util.concurrent.RejectedExecutionException
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1605)
> ... 22 more
> {code}






[jira] [Created] (SOLR-8670) Upgrade from Solr version 5.3.2 to 5.4.1 failed

2016-02-10 Thread Vivek Narang (JIRA)
Vivek Narang created SOLR-8670:
--

 Summary: Upgrade from Solr version 5.3.2 to 5.4.1 failed
 Key: SOLR-8670
 URL: https://issues.apache.org/jira/browse/SOLR-8670
 Project: Solr
  Issue Type: Bug
Reporter: Vivek Narang


Upgrade from 5.3.2 to 5.4.1 failed

The upgrade test was conducted with the help of a program. Please find more details at 
[https://github.com/viveknarang/solr-upgrade-tests]

Please find logs for this test at: [http://106.186.125.89/log.tar.gz]

A significant section of the log is included below for quick reference:

.. WARN  (main) [   ] o.e.j.u.c.AbstractLifeCycle FAILED 
Zookeeper@d6da972c==org.apache.solr.servlet.ZookeeperInfoServlet,-1,false: 
javax.servlet.UnavailableException: org.apache.solr.servlet.ZookeeperInfoServlet
javax.servlet.UnavailableException: org.apache.solr.servlet.ZookeeperInfoServlet
at org.eclipse.jetty.servlet.BaseHolder.doStart(BaseHolder.java:102)
at 
org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:338) 
...







[jira] [Comment Edited] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-02-10 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141360#comment-15141360
 ] 

Dennis Gove edited comment on SOLR-8599 at 2/10/16 6:16 PM:


This fails the forbidden API precommit check.
{code}
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newSingleThreadScheduledExecutor() [Spawns 
threads with vague names; use a custom thread factory (Lucene's 
NamedThreadFactory, Solr's DefaultSolrThreadFactory) and name threads so that 
you can tell (by its name) which executor it is associated with]
[forbidden-apis]   in org.apache.solr.cloud.ConnectionManagerTest 
(ConnectionManagerTest.java:119)
{code}


was (Author: dpgove):
This fais the forbidden api precommit check.
{code}
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newSingleThreadScheduledExecutor() [Spawns 
threads with vague names; use a custom thread factory (Lucene's 
NamedThreadFactory, Solr's DefaultSolrThreadFactory) and name threads so that 
you can tell (by its name) which executor it is associated with]
[forbidden-apis]   in org.apache.solr.cloud.ConnectionManagerTest 
(ConnectionManagerTest.java:119)
{code}

> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
> Attachments: SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper, or its 
> parent class ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}
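A retry loop along the lines of suggestion 1) above might look roughly like this (a minimal sketch with a hypothetical Connector interface and backoff values; the actual patch works against DefaultConnectionStrategy):

```java
public class ReconnectRetry {
    // Hypothetical stand-in for the ZooKeeper client construction that can
    // throw (e.g. on a DNS failure in the SolrZooKeeper constructor).
    interface Connector { void connect() throws Exception; }

    // Retry the connection a bounded number of times with linear backoff,
    // instead of giving up permanently on the first constructor failure.
    static boolean connectWithRetries(Connector c, int maxRetries, long backoffMs) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                c.connect();
                return true;
            } catch (Exception e) {
                try {
                    Thread.sleep(backoffMs * attempt);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate a transient DNS failure on the first two attempts.
        int[] calls = {0};
        boolean ok = connectWithRetries(() -> {
            if (++calls[0] < 3) throw new Exception("DNS failure");
        }, 5, 1);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```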






[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-02-10 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141360#comment-15141360
 ] 

Dennis Gove commented on SOLR-8599:
---

This fails the forbidden API precommit check.
{code}
[forbidden-apis] Forbidden method invocation: 
java.util.concurrent.Executors#newSingleThreadScheduledExecutor() [Spawns 
threads with vague names; use a custom thread factory (Lucene's 
NamedThreadFactory, Solr's DefaultSolrThreadFactory) and name threads so that 
you can tell (by its name) which executor it is associated with]
[forbidden-apis]   in org.apache.solr.cloud.ConnectionManagerTest 
(ConnectionManagerTest.java:119)
{code}
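A minimal stand-in showing the kind of fix the check asks for — passing a thread factory that names its threads, in the spirit of Lucene's NamedThreadFactory (the factory class here is a simplified hypothetical, not the real Lucene/Solr class):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedExecutorSketch {
    // Simplified hypothetical version of a naming thread factory: threads get
    // a recognizable prefix so a stack dump identifies their executor.
    static class NamedThreadFactory implements ThreadFactory {
        private final String prefix;
        private final AtomicInteger counter = new AtomicInteger();
        NamedThreadFactory(String prefix) { this.prefix = prefix; }
        @Override public Thread newThread(Runnable r) {
            return new Thread(r, prefix + "-" + counter.incrementAndGet());
        }
    }

    public static void main(String[] args) throws Exception {
        // The forbidden call is the no-arg overload; this overload, which
        // takes a ThreadFactory, is what the precommit message suggests.
        ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor(
                new NamedThreadFactory("connectionManagerTest"));
        String[] name = new String[1];
        executor.submit(() -> { name[0] = Thread.currentThread().getName(); }).get();
        executor.shutdown();
        System.out.println(name[0]);
    }
}
```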

> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
> Attachments: SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper, or its 
> parent class ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}






[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI

2016-02-10 Thread Mary Jo Sminkey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141345#comment-15141345
 ] 

Mary Jo Sminkey commented on SOLR-4146:
---

Just hit this error on a dev server we're testing 5.5 on. Same issue, trying 
to access the Core Admin in the UI. It seems to be due to a missing file. 

null:org.apache.solr.common.SolrException: Error handling 'status' action 
at 
org.apache.solr.handler.admin.CoreAdminOperation$4.call(CoreAdminOperation.java:188)
at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:354)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:153)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:676)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:439)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.NoSuchFileException: 
/opt/solr-5.5.0-1057/server/solr/classic_search/data/index/segments_kgw
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at 
sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.Files.size(Files.java:2332)
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:210)
at 
org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:127)
at 
org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:592)
at 
org.apache.solr.handler.admin.CoreAdminOperation.getCoreStatus(CoreAdminOperation.java:882)
at 
org.apache.solr.handler.admin.CoreAdminOperation$4.call(CoreAdminOperation.java:176)


> Error handling 'status' action, cannot access GUI
> -
>
> Key: SOLR-4146
> URL: https://issues.apache.org/jira/browse/SOLR-4146
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: master
>Reporter: Markus Jelsma
> Fix For: master
>
> Attachments: solr.png
>
>
> We sometimes see a node not responding to GUI requests. It then generates the 
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : 
> org.apache.solr.common.SolrException: Error handling 'status' action 
>  

Re: confluence line spacing

2016-02-10 Thread Cassandra Targett
It's worth a look, but there are valid reasons for those tags -  is for
paragraphs, and  is used in valid cases for defining when a URL
should *not* be a hyperlink. Inserting a  tag when you hit enter is
valid, IMO, because most of the time you want some extra spacing to
differentiate paragraphs.

Part of the problem is that we want that list on the Collections API page
to behave as a list, without actually *being* a list with proper HTML tags.
That's just going to cause little glitches until we change the design of
API pages.

On Wed, Feb 10, 2016 at 8:31 AM, Anshum Gupta 
wrote:

> Thanks Cassandra! :)
>
> I don't think it's just copy-paste. It seems a plain return key also tends to
> add the  and the  tags. I'll create a JIRA and request
> access.
> Though I'm no front-end/css expert, I'll try to see if it can be fixed at
> the root so it doesn't require us to manually remove these tags every time
> someone edits the page. If you or someone else knows of a way, please go ahead and fix it.
>
> On Wed, Feb 10, 2016 at 7:38 AM, Cassandra Targett 
> wrote:
>
>> I fixed it. Adding the line inserted  and  tags which were
>> messing up the line spacing. Using copy/paste tends to insert that stuff -
>> it's a serious PITA.
>>
>> The only way to see this was to use the source editing mode, where you
>> can view the XHTML source of the page. In order to see that, you need to be
>> on the whitelist for Confluence's source editing plugin, which is managed
>> by INFRA. You can file an issue to get added if you want, as in this issue:
>> https://issues.apache.org/jira/browse/INFRA-8224.
>>
>> Once you're on the list, you should see an icon on the right side of the
>> editing window that looks like "< >"; click that and you see the XHTML
>> source.
>>
>> On Tue, Feb 9, 2016 at 3:13 PM, Anshum Gupta 
>> wrote:
>>
>>> I just added documentation for DELETESTATUS API on the CollectionsAdmin
>>> page in the ref guide but couldn't get the line spacing to be the same as
>>> the rest of the lines. Is there something I'm missing?
>>>
>>> --
>>> Anshum Gupta
>>>
>>
>>
>
>
> --
> Anshum Gupta
>


[jira] [Resolved] (SOLR-8658) Fix test failure introduced in SOLR-8651

2016-02-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-8658.
--
   Resolution: Fixed
Fix Version/s: 6.0
   5.5

I think this is OK; if it crops back up we can open a new JIRA or re-open this 
one.

> Fix test failure introduced in SOLR-8651
> 
>
> Key: SOLR-8658
> URL: https://issues.apache.org/jira/browse/SOLR-8658
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-8658.patch
>
>
> OK, I think I've found a possible reason. The waitForDocCount method waits 
> until a response comes back with the, well, expected doc counts, but it drops 
> out of the wait loop the first time a query works.
> The test then goes out to each and every node and re-issues the request. This 
> looks to be a 2-shard, 2-replica situation. So here's the theory: the second 
> node hasn't yet opened a new searcher. So the wait loop is satisfied by, say, 
> node2, but the test later looks at node4 (both for shard2), which hasn't 
> finished opening a searcher yet, so it fails.
> I could not get this to fail locally in 20 runs, so I'll beast the unchanged 
> version some more to verify, but meanwhile I'll commit this change, which I 
> think is more correct anyway, and monitor.
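The fix described above — waiting until every node reports the expected count instead of dropping out on the first success — can be sketched as follows (names and the polling scheme are illustrative, not the actual test code):

```java
import java.util.List;
import java.util.function.Function;

public class WaitAllReplicas {
    // Hypothetical sketch: poll until *every* node returns the expected doc
    // count, so a replica that has not yet opened a new searcher (e.g. node4
    // in the theory above) cannot slip past the check.
    static boolean waitForDocCountOnAll(List<String> nodes,
                                        Function<String, Long> queryCount,
                                        long expected,
                                        int maxPolls) throws InterruptedException {
        for (int poll = 0; poll < maxPolls; poll++) {
            boolean allMatch = true;
            for (String node : nodes) {
                if (queryCount.apply(node).longValue() != expected) {
                    allMatch = false;
                    break;
                }
            }
            if (allMatch) return true;
            Thread.sleep(10); // small backoff between polls
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate node4 lagging: it reports a stale count for two polls.
        int[] lagPolls = {0};
        List<String> nodes = List.of("node2", "node4");
        boolean ok = waitForDocCountOnAll(nodes, node -> {
            if (node.equals("node4") && lagPolls[0]++ < 2) return 5L;
            return 10L;
        }, 10L, 20);
        System.out.println(ok);
    }
}
```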






[jira] [Commented] (SOLR-8349) Allow sharing of large in memory data structures across cores

2016-02-10 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141171#comment-15141171
 ] 

Gus Heck commented on SOLR-8349:


Thanks for the feedback Dave, I generally like guava caches, so this sounds 
like a good idea. I seem to recall that the Lucene part was motivated by the 
desire to have this fix SOLR-3443. I suspect it could be pulled out if needed. 
Should the Lucene ticket simply point to this, or do I need to generate a 
separate patch that breaks things out?
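The core idea — one shared copy per distinct configuration instead of one copy per core — can be sketched without Guava like this (the names are hypothetical, and the patch itself is more general; the thread suggests a Guava LoadingCache with weak values so unused structures can be collected):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedResourceSketch {
    // Hypothetical registry keyed by configuration: cores with identical
    // configuration share one large in-memory structure instead of each
    // loading its own copy (the situation described in SOLR-3443).
    static final Map<String, Object> SHARED = new ConcurrentHashMap<>();

    static Object getOrLoad(String configKey) {
        // computeIfAbsent guarantees the expensive load runs at most once
        // per distinct config key, even under concurrent core loading.
        return SHARED.computeIfAbsent(configKey, k -> {
            return new long[1_000]; // stand-in for a large dictionary
        });
    }

    public static void main(String[] args) {
        Object a = getOrLoad("dict-v1"); // core 1
        Object b = getOrLoad("dict-v1"); // core 2, same config
        System.out.println(a == b);      // same shared instance
    }
}
```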

> Allow sharing of large in memory data structures across cores
> -
>
> Key: SOLR-8349
> URL: https://issues.apache.org/jira/browse/SOLR-8349
> Project: Solr
>  Issue Type: Improvement
>  Components: Server
>Affects Versions: 5.3
>Reporter: Gus Heck
> Attachments: SOLR-8349.patch
>
>
> In some cases search components or analysis classes may utilize a large 
> dictionary or other in-memory structure. When multiple cores are loaded with 
> identical configurations utilizing this large in memory structure, each core 
> holds its own copy in memory. This has been noted in the past and a specific 
> case reported in SOLR-3443. This patch provides a generalized capability, and 
> if accepted, this capability will then be used to fix SOLR-3443.






ZK Connection Failure leads to stale data

2016-02-10 Thread Dennis Gove
Just wanted to take a moment to get anyone's thoughts on the following
issues

https://issues.apache.org/jira/browse/SOLR-8599
https://issues.apache.org/jira/browse/SOLR-8666

The originating problem occurred due to a DNS failure that caused some
nodes in a cloud setup to fail to connect to zookeeper. Those nodes were
running but were not participating in the cloud with the other nodes. The
disconnected nodes would respond to queries with stale data, though they
would reject ingest requests.

Ticket https://issues.apache.org/jira/browse/SOLR-8599 contains a patch
which ensures that if a connection to zookeeper fails to be made it will be
retried. Previously the failure wasn't leading to a retry, so the node would
just run disconnected until the node itself was restarted.

Ticket https://issues.apache.org/jira/browse/SOLR-8666 contains a patch
which will result in additional information returned to the client when a
node may be returning stale data due to not being connected to zookeeper.
The intent was to not change current behavior but to allow the client to know
that something might be wrong. In situations where the collection is not
being updated the data may not be stale, so it wouldn't matter that the node
is disconnected from zookeeper; but where the collection is being updated,
the data may be stale. The response headers will now contain an entry to
indicate this. The patch also adds a header to the ping response to provide
notification if the node is disconnected from zookeeper.
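The header idea can be sketched as follows (a minimal illustration; the header name and plumbing here are assumptions, not necessarily what the actual patch uses):

```java
import java.util.HashMap;
import java.util.Map;

public class StaleHeaderSketch {
    // Hypothetical sketch of the SOLR-8666 idea: when the node is not
    // connected to ZooKeeper, tag the response headers so the client knows
    // the data may be stale. Behavior is otherwise unchanged.
    static Map<String, String> buildHeaders(boolean connectedToZk) {
        Map<String, String> headers = new HashMap<>();
        if (!connectedToZk) {
            headers.put("zkConnected", "false"); // client may be seeing stale data
        }
        return headers;
    }

    public static void main(String[] args) {
        // Disconnected node: header is present and flags the condition.
        System.out.println(buildHeaders(false).get("zkConnected"));
        // Connected node: no extra header, behavior unchanged.
        System.out.println(buildHeaders(true).isEmpty());
    }
}
```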

I think the approach these patches take is good, but I wanted to get others'
thoughts in case I'm missing a scenario where they might cause a problem.

Thanks - Dennis


[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace with

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141137#comment-15141137
 ] 

ASF subversion and git services commented on SOLR-8621:
---

Commit 5d106503e7d4fbb8ac015c4fc723883f4ab7397e in lucene-solr's branch 
refs/heads/branch_5x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5d10650 ]

SOLR-8621: add IndexSchema arg to MergePolicyFactory constructor


> solrconfig.xml: deprecate/replace  with 
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces  support
> * solr 5.5(\?) deprecates (but maintains)  support
> * solr 6.0(\?) removes  support 
> +work-in-progress summary:+
>  * main code changes have been committed to master and branch_5x
>  * {color:red}further small code change required:{color} MergePolicyFactory 
> constructor or MergePolicyFactory.getMergePolicy method to take IndexSchema 
> argument (e.g. for use by SortingMergePolicyFactory being added under related 
> SOLR-5730)
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/contrib
>  ** solr/example
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace with

2016-02-10 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141128#comment-15141128
 ] 

Christine Poerschke commented on SOLR-8621:
---

bq. ... Am I missing something?

No, that list of bullet points sounds right to me. Doing the additional/remaining 
test coverage separately makes sense, and I hope that by the end of today we can 
'un-blocker' 5.5 here. Before then, I seem to have discovered issues with the 
(wrapper?) factory and the setters, i.e. them not taking effect as expected; 
hoping to have a test case and fix shortly.

> solrconfig.xml: deprecate/replace  with 
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work-in-progress summary:+
>  * main code changes have been committed to master and branch_5x
>  * {color:red}further small code change required:{color} MergePolicyFactory 
> constructor or MergePolicyFactory.getMergePolicy method to take IndexSchema 
> argument (e.g. for use by SortingMergePolicyFactory being added under related 
> SOLR-5730)
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/contrib
>  ** solr/example
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.
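
The rename discussed above can be illustrated with a before/after solrconfig.xml fragment. This is a hedged sketch, not taken from the patch: the factory class name shown (org.apache.solr.index.TieredMergePolicyFactory) and the parameter are assumptions based on the discussion, so check the Solr Reference Guide for the exact syntax.

```xml
<!-- pre-5.5 style (deprecated): configure a Lucene MergePolicy directly -->
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <int name="maxMergeAtOnce">10</int>
</mergePolicy>

<!-- 5.5+ style: configure a Solr MergePolicyFactory that builds the policy -->
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
</mergePolicyFactory>
```

The factory indirection is what enables the wrapping/nested merge policies mentioned in the benefits list.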






[jira] [Resolved] (LUCENE-7018) GeoPoint queries on multi-valued GeoPointField documents can be slow

2016-02-10 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize resolved LUCENE-7018.

Resolution: Done

> GeoPoint queries on multi-valued GeoPointField documents can be slow
> 
>
> Key: LUCENE-7018
> URL: https://issues.apache.org/jira/browse/LUCENE-7018
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.4, 5.4.1
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
> Fix For: 5.4.2
>
>
> In 5.4/5.4.1 a known bug remains for GeoPoint queries. When filtering over 
> docvalues for a multi-valued document, all values were checked even when a 
> match had already been found. This performance bug was fixed in LUCENE-6951 and 
> needs to be back-ported to 5.4.






[jira] [Commented] (SOLR-8621) solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141116#comment-15141116
 ] 

ASF subversion and git services commented on SOLR-8621:
---

Commit 5d32609cdc413e15619a94d8d508987a65514e7e in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5d32609 ]

SOLR-8621: add IndexSchema arg to MergePolicyFactory constructor


> solrconfig.xml: deprecate/replace <mergePolicy> with <mergePolicyFactory>
> -
>
> Key: SOLR-8621
> URL: https://issues.apache.org/jira/browse/SOLR-8621
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.5, master
>
> Attachments: SOLR-8621-example_contrib_configs.patch, 
> SOLR-8621-example_contrib_configs.patch, SOLR-8621.patch, 
> explicit-merge-auto-set.patch
>
>
> *end-user benefits:*
> * Lucene's UpgradeIndexMergePolicy can be configured in Solr
> * (with SOLR-5730) Lucene's SortingMergePolicy can be configured in Solr
> * customisability: arbitrary merge policies including wrapping/nested merge 
> policies can be created and configured
> *(proposed) roadmap:*
> * solr 5.5 introduces <mergePolicyFactory> support
> * solr 5.5(\?) deprecates (but maintains) <mergePolicy> support
> * solr 6.0(\?) removes <mergePolicy> support 
> +work-in-progress summary:+
>  * main code changes have been committed to master and branch_5x
>  * {color:red}further small code change required:{color} MergePolicyFactory 
> constructor or MergePolicyFactory.getMergePolicy method to take IndexSchema 
> argument (e.g. for use by SortingMergePolicyFactory being added under related 
> SOLR-5730)
>  * Solr Reference Guide changes (directly in Confluence?)
>  * changes to remaining solrconfig.xml
>  ** solr/core/src/test-files/solr/collection1/conf - Christine
>  ** solr/contrib
>  ** solr/example
> +open question:+
>  * Do we want to error if luceneMatchVersion >= 5.5 and deprecated 
> mergePolicy/mergeFactor/maxMergeDocs are used? See [~hossman]'s comment on 
> Feb 1st. The code as-is permits mergePolicy irrespective of 
> luceneMatchVersion, I think.






[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141092#comment-15141092
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit 041cd9483ec082bc3848cd400c62d50092fc5016 in lucene-solr's branch 
refs/heads/branch_5_4 from [~mikemccand]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=041cd94 ]

LUCENE-6932: fix test bug that was not always using the dir impl being tested; 
fix SimpleFSIndexInput to throw EOFException if you seek beyond end of file

git-svn-id: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x@1726277 
13f79535-47bb-0310-9956-ffa450edef68


> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: master
>Reporter: Stéphane Campinas
> Fix For: master, 5.x, 5.4.2
>
> Attachments: LUCENE-6932.patch, LUCENE-6932.patch, LUCENE-6932.patch, 
> issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2
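
The fixes in the commits for this issue all amount to bounds-checking the seek target. Here is a minimal, self-contained sketch of that idea; the class and method names are invented for illustration and this is not Lucene's actual IndexInput code:

```java
import java.io.EOFException;

// Hypothetical minimal input abstraction (not Lucene's RAMInputStream):
// validating the target in seek() makes a bad position fail fast with
// EOFException instead of surfacing later as an obscure runtime error.
class BoundedInput {
    private final byte[] data;
    private long pos;

    BoundedInput(byte[] data) { this.data = data; }

    void seek(long target) throws EOFException {
        if (target < 0 || target > data.length) {
            throw new EOFException("seek past EOF: pos=" + target + " length=" + data.length);
        }
        pos = target;
    }

    byte readByte() throws EOFException {
        if (pos >= data.length) {
            throw new EOFException("read past EOF: pos=" + pos);
        }
        return data[(int) pos++];
    }
}

public class SeekEofDemo {
    public static void main(String[] args) throws Exception {
        BoundedInput in = new BoundedInput(new byte[] {1, 2, 3});
        in.seek(3);                 // seeking exactly to EOF is allowed
        try {
            in.seek(4);             // seeking beyond EOF is not
        } catch (EOFException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```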






[jira] [Updated] (SOLR-8578) Successful or not, requests are not fully consumed by Solrj clients and we count on HttpClient or the JVM.

2016-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8578:
--
Attachment: SOLR-8578.patch

Patch attached.

> Successful or not, requests are not fully consumed by Solrj clients and we 
> count on HttpClient or the JVM.
> --
>
> Key: SOLR-8578
> URL: https://issues.apache.org/jira/browse/SOLR-8578
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8578.patch
>
>
> Does not seem to happen with XML response parser.
> Not the largest deal because HttpClient appears to consume unread bytes from 
> the stream for us, but something seems off.
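
As a hedged sketch of the underlying idea (not the actual SolrJ patch): a client that stops reading mid-response should drain the remaining bytes itself before releasing the stream, rather than counting on HttpClient or the JVM to do it. All names below are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainStream {
    // Fully consume whatever is left on a response stream so the underlying
    // connection can be safely reused; returns the number of leftover bytes.
    static long drain(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a 100-byte response body of which the parser read only 10.
        InputStream body = new ByteArrayInputStream(new byte[100]);
        body.read(new byte[10]);
        System.out.println("drained " + drain(body) + " leftover bytes");
        body.close();
    }
}
```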






[jira] [Updated] (SOLR-8669) Non binary responses use chunked encoding because we flush the outputstream early.

2016-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8669:
--
Attachment: SOLR-8669.patch

Patch attached.

> Non binary responses use chunked encoding because we flush the outputstream 
> early.
> --
>
> Key: SOLR-8669
> URL: https://issues.apache.org/jira/browse/SOLR-8669
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-8669.patch
>
>







Re: confluence line spacing

2016-02-10 Thread Anshum Gupta
Thanks Cassandra! :)

I don't think it's just copy-paste. A plain return key also seems to add
the  and the  tags. I'll create a JIRA and request access.
Though I'm no front-end/CSS expert, I'll try to see whether it can be fixed at
the root so we don't have to manually remove these tags every time
someone edits the page. If you or someone else knows of a way, please share.

On Wed, Feb 10, 2016 at 7:38 AM, Cassandra Targett 
wrote:

> I fixed it. Adding the line inserted  and  tags which were
> messing up the line spacing. Using copy/paste tends to insert that stuff -
> it's a serious PITA.
>
> The only way to see this was to use the source editing mode, where you can
> view the XHTML source of the page. In order to see that, you need to be on
> the whitelist for Confluence's source editing plugin, which is managed by
> INFRA. You can file an issue to get added if you want, as in this issue:
> https://issues.apache.org/jira/browse/INFRA-8224.
>
> Once you're on the list, you should see an icon on the right side of the
> editing window that looks like "< >"; click that and you see the XHTML
> source.
>
> On Tue, Feb 9, 2016 at 3:13 PM, Anshum Gupta 
> wrote:
>
>> I just added documentation for DELETESTATUS API on the CollectionsAdmin
>> page in the ref guide but couldn't get the line spacing to be the same as
>> the rest of the lines. Is there something I'm missing?
>>
>> --
>> Anshum Gupta
>>
>
>


-- 
Anshum Gupta


[jira] [Resolved] (LUCENE-6998) We should detect truncation for all index files

2016-02-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6998.

Resolution: Fixed

> We should detect truncation for all index files
> ---
>
> Key: LUCENE-6998
> URL: https://issues.apache.org/jira/browse/LUCENE-6998
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.5, master, 6.0, 5.4.2
>
> Attachments: LUCENE-6998.patch, LUCENE-6998.patch, LUCENE-6998.patch
>
>
> [~rcmuir] noticed that {{Lucene60PointReader}} does not detect truncation of 
> its data file on reader init ... so I added a test to catch this, confirmed 
> it caught the bug, fixed the bug, and fixed a couple of other things 
> uncovered by the test.
> I also improved the other {{TestAllFiles*}} tests to use {{LineFileDocs}} so 
> they index points, and it caught another bug in {{Lucene60PointFormat}} 
> (using the same codec name in two files).
> There is more to do here, e.g. we also need a test that catches places where 
> we fail to check the index header on init, which was also missing for 
> {{Lucene60PointReader}}.
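
A hedged sketch of the simplest form of truncation detection on open: check that the file is at least long enough to hold its fixed-size footer before reading it. The footer size and file handling here are invented for illustration and are not Lucene's actual codec footer logic:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TruncationCheck {
    static final int FOOTER_LENGTH = 16; // hypothetical fixed footer size

    // Fail fast on open if the file cannot possibly contain a valid footer.
    static void checkFooter(Path file) throws IOException {
        long len = Files.size(file);
        if (len < FOOTER_LENGTH) {
            throw new IOException("truncated file: length=" + len
                + " but footer needs " + FOOTER_LENGTH + " bytes: " + file);
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("seg", ".dat");
        Files.write(p, new byte[4]); // shorter than the footer: "corrupt"
        try {
            checkFooter(p);
        } catch (IOException e) {
            System.out.println("detected: truncated");
        } finally {
            Files.delete(p);
        }
    }
}
```

Real codecs also verify a checksum in the footer; the length check alone only catches gross truncation.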






[jira] [Resolved] (LUCENE-7002) MultiCollector throws NPE when CollectionTerminatedException is thrown by a subcollector

2016-02-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7002.

Resolution: Fixed

> MultiCollector throws NPE when CollectionTerminatedException is thrown by a 
> subcollector
> --
>
> Key: LUCENE-7002
> URL: https://issues.apache.org/jira/browse/LUCENE-7002
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.4
>Reporter: John Wang
>Assignee: Adrien Grand
> Fix For: 5.5, 5.4.2
>
> Attachments: LUCENE-7002.patch
>
>
> I am seeing this in our log:
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.lucene.search.MultiCollector$MultiLeafCollector.setScorer(MultiCollector.java:156)
> at 
> org.apache.lucene.search.BooleanScorer$1$1.setScorer(BooleanScorer.java:50)
> at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:166)
> at 
> org.apache.lucene.search.BooleanScorer$1.score(BooleanScorer.java:59)
> at 
> org.apache.lucene.search.BooleanScorer$BulkScorerAndDoc.score(BooleanScorer.java:90)
> at 
> org.apache.lucene.search.BooleanScorer.scoreWindowSingleScorer(BooleanScorer.java:313)
> at 
> org.apache.lucene.search.BooleanScorer.scoreWindow(BooleanScorer.java:336)
> at 
> org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:364)
> at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:821)
> at 
> org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:763)
> at 
> org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:760)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {noformat}
> Looks like 
> {noformat}
> MultiCollector.removeCollector(i)
> {noformat} 
> is called on line 176, the loop:
> {noformat}
> for (LeafCollector c : collectors) {
> c.setScorer(scorer);
> }
> {noformat}
> in setScorer can still step on it, on line 155.
> I am, however, unable to reproduce that with a unit test.
> I made a copy of this class and added a null check in setScorer() and the 
> problem goes away.
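
The null check described above can be sketched in isolation. The types below are stand-ins invented for illustration, not Lucene's MultiCollector:

```java
public class NullSafeFanout {
    // Stand-in for LeafCollector with just the method from the stack trace.
    interface LeafLike { void setScorer(Object scorer); }

    public static void main(String[] args) {
        LeafLike a = s -> System.out.println("a.setScorer");
        LeafLike b = s -> System.out.println("b.setScorer");
        LeafLike[] collectors = {a, b};

        // Simulate a sub-collector being removed (slot nulled) after it
        // threw CollectionTerminatedException during collection.
        collectors[0] = null;

        // Without the null check this loop throws the NPE from the report.
        for (LeafLike c : collectors) {
            if (c != null) {
                c.setScorer(new Object());
            }
        }
        System.out.println("done");
    }
}
```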






[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141091#comment-15141091
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit b4fa82b0772718d84db3d177f8ce7450be3c51ac in lucene-solr's branch 
refs/heads/branch_5_4 from [~mikemccand]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b4fa82b ]

LUCENE-6932: also fix NIOFSIndexInput to throw EOFE if you seek beyond end of 
file

git-svn-id: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x@1726231 
13f79535-47bb-0310-9956-ffa450edef68


> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: master
>Reporter: Stéphane Campinas
> Fix For: master, 5.x, 5.4.2
>
> Attachments: LUCENE-6932.patch, LUCENE-6932.patch, LUCENE-6932.patch, 
> issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Commented] (LUCENE-6998) We should detect truncation for all index files

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141095#comment-15141095
 ] 

ASF subversion and git services commented on LUCENE-6998:
-

Commit df30bc6c5b4855fcd95c3660fdd2991d0e9c58bf in lucene-solr's branch 
refs/heads/branch_5_4 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df30bc6 ]

LUCENE-6998: fix a couple places to better detect truncated index files; 
improve corruption testing

Conflicts:

lucene/core/src/java/org/apache/lucene/codecs/lucene60/Lucene60PointFormat.java

lucene/core/src/java/org/apache/lucene/codecs/lucene60/Lucene60PointReader.java

lucene/core/src/java/org/apache/lucene/codecs/lucene60/Lucene60PointWriter.java

Conflicts:
lucene/CHANGES.txt


> We should detect truncation for all index files
> ---
>
> Key: LUCENE-6998
> URL: https://issues.apache.org/jira/browse/LUCENE-6998
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.5, master, 6.0, 5.4.2
>
> Attachments: LUCENE-6998.patch, LUCENE-6998.patch, LUCENE-6998.patch
>
>
> [~rcmuir] noticed that {{Lucene60PointReader}} does not detect truncation of 
> its data file on reader init ... so I added a test to catch this, confirmed 
> it caught the bug, fixed the bug, and fixed a couple of other things 
> uncovered by the test.
> I also improved the other {{TestAllFiles*}} tests to use {{LineFileDocs}} so 
> they index points, and it caught another bug in {{Lucene60PointFormat}} 
> (using the same codec name in two files).
> There is more to do here, e.g. we also need a test that catches places where 
> we fail to check the index header on init, which was also missing for 
> {{Lucene60PointReader}}.






[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141093#comment-15141093
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit 2512ab6c1f3089cb8fe534532f0676c3358a5cd4 in lucene-solr's branch 
refs/heads/branch_5_4 from [~mikemccand]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2512ab6 ]

LUCENE-6932: also fix RAFIndexInput to throw EOFE if you seek beyond end of file

git-svn-id: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x@1726290 
13f79535-47bb-0310-9956-ffa450edef68


> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: master
>Reporter: Stéphane Campinas
> Fix For: master, 5.x, 5.4.2
>
> Attachments: LUCENE-6932.patch, LUCENE-6932.patch, LUCENE-6932.patch, 
> issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Resolved] (LUCENE-6976) BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null

2016-02-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6976.

Resolution: Fixed

> BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null
> 
>
> Key: LUCENE-6976
> URL: https://issues.apache.org/jira/browse/LUCENE-6976
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.5, 5.4.2
>
> Attachments: LUCENE_6976.patch, LUCENE_6976.patch
>
>
> The BytesTermAttributeImpl class, not used much I think, has a problem in its 
> copyTo method: it assumes "bytes" isn't null, since it calls 
> BytesRef.deepCopyOf on it.  Perhaps deepCopyOf should support null?  Also, 
> toString(), equals() and hashCode() aren't implemented, but we could add them.
> This was discovered in SOLR-8541; the spatial PrefixTreeStrategy uses this 
> attribute, and CachingTokenFilter, when used on the analysis chain, will 
> call clearAttributes() in its end() method and then capture the state so it 
> can be replayed later.  BytesTermAttributeImpl.clear() nulls out the bytes 
> reference.
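
A minimal sketch of the null-tolerant deep copy suggested above. The Ref type is a hypothetical stand-in for BytesRef, not Lucene's actual class:

```java
public class NullSafeCopy {
    // Hypothetical stand-in for BytesRef: just wraps a byte[].
    static class Ref {
        final byte[] bytes;
        Ref(byte[] bytes) { this.bytes = bytes; }

        // Null-tolerant deep copy, as the comment above suggests
        // deepCopyOf might support: null in, null out, no NPE.
        static Ref deepCopyOf(Ref other) {
            return other == null ? null : new Ref(other.bytes.clone());
        }
    }

    public static void main(String[] args) {
        System.out.println(Ref.deepCopyOf(null));   // no NPE after clear()
        System.out.println(Ref.deepCopyOf(new Ref(new byte[] {1})).bytes.length);
    }
}
```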






[jira] [Commented] (SOLR-8541) NPE in spatial field highlighting

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141097#comment-15141097
 ] 

ASF subversion and git services commented on SOLR-8541:
---

Commit 7d52c2523c7a4ff70612742b76b934a12b493331 in lucene-solr's branch 
refs/heads/branch_5_4 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d52c25 ]

LUCENE-6976 SOLR-8541: BytesTermAttributeImpl.copyTo could NPE.
Could be triggered by trying to highlight a spatial RPT field.

git-svn-id: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x@1724877 
13f79535-47bb-0310-9956-ffa450edef68

Conflicts:
lucene/CHANGES.txt
solr/CHANGES.txt


> NPE in spatial field highlighting
> -
>
> Key: SOLR-8541
> URL: https://issues.apache.org/jira/browse/SOLR-8541
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3, 5.4
>Reporter: Pawel Rog
>Assignee: David Smiley
> Fix For: 5.5
>
>
> I prepared a failing test. This worked in 4.x versions. It fails with a 
> different stack trace on 5.0 and 5.1. I'm not sure whether it is related 
> to Solr or Lucene. Since the stack trace is different before 5.2, maybe 
> something changed in SOLR-5855?
> test code:
> {code}
> public class Test extends SolrTestCaseJ4 {
>   @BeforeClass
>   public static void beforeClass() throws Exception {
> initCore("solrconfig.xml", "schema.xml");
>   }
>   @Test
>   public void testConstantScoreQueryWithFilterPartOnly() {
> final String[] doc1 = {"id", "1", "location", "56.9485,24.0980"};
> assertU(adoc(doc1));
> assertU(commit());
> ModifiableSolrParams params = new ModifiableSolrParams();
> params.add("q", "{!geofilt sfield=\"location\" pt=\"56.9484,24.0981\" 
> d=100}");
> params.add("hl", "true");
> params.add("hl.fl", "location");
> assertQ(req(params), "*[count(//doc)=1]", 
> "count(//lst[@name='highlighting']/*)=1");
>   }
> }
> {code}
> solrconfig:
> {code}
> <config>
>   <luceneMatchVersion>${tests.luceneMatchVersion:LUCENE_CURRENT}</luceneMatchVersion>
>   <dataDir>${solr.data.dir:}</dataDir>
>   <directoryFactory class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
>   <queryResponseWriter class="solr.XMLResponseWriter" />
>   ...
> </config>
> {code}
> schema:
> {code}
> 
>omitNorms="true"/>
>sortMissingLast="true" omitNorms="true"/>
>omitNorms="true"/>
>class="solr.SpatialRecursivePrefixTreeFieldType" units="degrees" geo="true" />
>   
>   
>   
>   
>   
>   
>   id
>   string_field
> 
> {code}
> This ends up with:
> {code}
> Exception during query
> java.lang.RuntimeException
>   at org.apache.lucene.util.BytesRef.deepCopyOf(BytesRef.java:281)
>   at 
> org.apache.lucene.analysis.tokenattributes.BytesTermAttributeImpl.copyTo(BytesTermAttributeImpl.java:51)
>   at 
> org.apache.lucene.analysis.tokenattributes.BytesTermAttributeImpl.clone(BytesTermAttributeImpl.java:57)
>   at 
> org.apache.lucene.util.AttributeSource$State.clone(AttributeSource.java:55)
>   at 
> org.apache.lucene.util.AttributeSource.captureState(AttributeSource.java:280)
>   at 
> org.apache.lucene.analysis.CachingTokenFilter.fillCache(CachingTokenFilter.java:96)
>   at 
> org.apache.lucene.analysis.CachingTokenFilter.incrementToken(CachingTokenFilter.java:70)
>   at 
> org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:452)
>   at 
> org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:384)
>   at 
> org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:359)
>   at 
> org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:339)
>   at 
> org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getLeafContext(WeightedSpanTermExtractor.java:384)
>   at 
> org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:215)
>   at 
> org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:506)
>   at 
> org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:219)
>   at 
> org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:187)
>   at 
> org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:196)
>   at 
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:595)
>   at 
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:429)
>   at 
> org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:143)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.

[jira] [Resolved] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-02-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6932.

Resolution: Fixed

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: master
>Reporter: Stéphane Campinas
> Fix For: master, 5.x, 5.4.2
>
> Attachments: LUCENE-6932.patch, LUCENE-6932.patch, LUCENE-6932.patch, 
> issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141094#comment-15141094
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit 3100f1b187ffaeee35dfbad1d26b5c44e5e4c1f7 in lucene-solr's branch 
refs/heads/branch_5_4 from [~mikemccand]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3100f1b ]

LUCENE-6932: improve exception messages; rename length parameter to 
sliceLength, and return it as the length, for clarity


> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: master
>Reporter: Stéphane Campinas
> Fix For: master, 5.x, 5.4.2
>
> Attachments: LUCENE-6932.patch, LUCENE-6932.patch, LUCENE-6932.patch, 
> issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Commented] (LUCENE-6976) BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141098#comment-15141098
 ] 

ASF subversion and git services commented on LUCENE-6976:
-

Commit 7d52c2523c7a4ff70612742b76b934a12b493331 in lucene-solr's branch 
refs/heads/branch_5_4 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d52c25 ]

LUCENE-6976 SOLR-8541: BytesTermAttributeImpl.copyTo could NPE.
Could be triggered by trying to highlight a spatial RPT field.

git-svn-id: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x@1724877 
13f79535-47bb-0310-9956-ffa450edef68

Conflicts:
lucene/CHANGES.txt
solr/CHANGES.txt


> BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null
> 
>
> Key: LUCENE-6976
> URL: https://issues.apache.org/jira/browse/LUCENE-6976
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.5, 5.4.2
>
> Attachments: LUCENE_6976.patch, LUCENE_6976.patch
>
>
> The BytesTermAttributeImpl class, not used much I think, has a problem in its 
> copyTo method: it assumes "bytes" isn't null, since it calls 
> BytesRef.deepCopyOf on it.  Perhaps deepCopyOf should support null?  Also, 
> toString(), equals() and hashCode() aren't implemented, but we could add them.
> This was discovered in SOLR-8541; the spatial PrefixTreeStrategy uses this 
> attribute, and CachingTokenFilter, when used on the analysis chain, will 
> call clearAttributes() in its end() method and then capture the state so it 
> can be replayed later.  BytesTermAttributeImpl.clear() nulls out the bytes 
> reference.






[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141090#comment-15141090
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit 9041c1cfe3a7162b77ba2aeb8ba58985ec167528 in lucene-solr's branch 
refs/heads/branch_5_4 from [~mikemccand]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9041c1c ]

LUCENE-6932: RAMInputStream now throws EOFException if you seek beyond the end 
of the file

git-svn-id: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x@1726056 
13f79535-47bb-0310-9956-ffa450edef68

Conflicts:
lucene/CHANGES.txt


> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: master
>Reporter: Stéphane Campinas
> Fix For: master, 5.x, 5.4.2
>
> Attachments: LUCENE-6932.patch, LUCENE-6932.patch, LUCENE-6932.patch, 
> issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141089#comment-15141089
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit ad2c18cd72751cd80c11b4916980fb510eaf8f9f in lucene-solr's branch 
refs/heads/branch_5_4 from [~mikemccand]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ad2c18c ]

LUCENE-6932: RAMDirectory's IndexInput should always throw EOFE if you seek 
beyond the end of the file and then try to read

git-svn-id: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x@1725112 
13f79535-47bb-0310-9956-ffa450edef68

Conflicts:
lucene/CHANGES.txt


> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: master
>Reporter: Stéphane Campinas
> Fix For: master, 5.x, 5.4.2
>
> Attachments: LUCENE-6932.patch, LUCENE-6932.patch, LUCENE-6932.patch, 
> issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past
> EOF. However, there is no EOFException that is thrown.
> To reproduce the error, please use the seed test: 
> -Dtests.seed=8273A81C129D35E2






[jira] [Commented] (LUCENE-7002) MultiCollector throws NPE when CollectTerminatedException is thrown by a subcollector

2016-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141096#comment-15141096
 ] 

ASF subversion and git services commented on LUCENE-7002:
-

Commit 96624a676f5f2bfe3f267e6c1db889e2fe7a1781 in lucene-solr's branch 
refs/heads/branch_5_4 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=96624a6 ]

LUCENE-7002: Fixed MultiCollector to not throw a NPE if setScorer is called 
after one of the sub collectors is done collecting.

Conflicts:
lucene/CHANGES.txt


> MultiCollector throws NPE when CollectTerminatedException is thrown by a 
> subcollector
> --
>
> Key: LUCENE-7002
> URL: https://issues.apache.org/jira/browse/LUCENE-7002
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.4
>Reporter: John Wang
>Assignee: Adrien Grand
> Fix For: 5.5, 5.4.2
>
> Attachments: LUCENE-7002.patch
>
>
> I am seeing this in our log:
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.lucene.search.MultiCollector$MultiLeafCollector.setScorer(MultiCollector.java:156)
> at 
> org.apache.lucene.search.BooleanScorer$1$1.setScorer(BooleanScorer.java:50)
> at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:166)
> at 
> org.apache.lucene.search.BooleanScorer$1.score(BooleanScorer.java:59)
> at 
> org.apache.lucene.search.BooleanScorer$BulkScorerAndDoc.score(BooleanScorer.java:90)
> at 
> org.apache.lucene.search.BooleanScorer.scoreWindowSingleScorer(BooleanScorer.java:313)
> at 
> org.apache.lucene.search.BooleanScorer.scoreWindow(BooleanScorer.java:336)
> at 
> org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:364)
> at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:821)
> at 
> org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:763)
> at 
> org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:760)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {noformat}
> Looks like 
> {noformat}
> MultiCollector.removeCollector(i)
> {noformat} 
> is called on line 176, the loop:
> {noformat}
> for (LeafCollector c : collectors) {
> c.setScorer(scorer);
> }
> {noformat}
> in setScorer can still step on it, on line 155.
> I am however, unable to reproduce that with a unit test.
> I made a copy of this class and added a null check in setScorer() and the 
> problem goes away.
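
The race the reporter describes, and the null check that resolves it, can be sketched with toy types (hypothetical names, not Lucene's real MultiCollector): a sub-collector that finishes early is nulled out of the array, so setScorer must skip null slots instead of dereferencing them.

```java
// Illustrative sketch only; Lucene's real classes differ.
class MultiLeaf {
    interface Leaf {
        void setScorer(Object scorer);
    }

    private final Leaf[] collectors;

    MultiLeaf(Leaf... collectors) {
        this.collectors = collectors.clone();
    }

    void removeCollector(int i) {
        collectors[i] = null; // sub-collector is done (e.g. early termination)
    }

    int setScorer(Object scorer) {
        int updated = 0;
        for (Leaf c : collectors) {
            if (c != null) {   // the null check that prevents the NPE
                c.setScorer(scorer);
                updated++;
            }
        }
        return updated;
    }
}
```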






[jira] [Commented] (LUCENE-7020) TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit

2016-02-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141072#comment-15141072
 ] 

Shawn Heisey commented on LUCENE-7020:
--

On the issue of file descriptors ... does Lucene open segment files a second 
time for a merge?  The files should already be opened so that queries will 
work.  I would hope there would not be any additional file descriptors required 
beyond the files for the new segment that is being built.

> TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit
> 
>
> Key: LUCENE-7020
> URL: https://issues.apache.org/jira/browse/LUCENE-7020
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.4.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
> Attachments: LUCENE-7020.patch
>
>
> SOLR-8621 covers improvements in configuring a merge policy in Solr.
> Discussions on that issue brought up the fact that if large values are 
> configured for maxMergeAtOnce and segmentsPerTier, but maxMergeAtOnceExplicit 
> is not changed, then doing a forceMerge is likely to not work as expected.
> When I first configured maxMergeAtOnce and segmentsPerTier to 35 in Solr, I 
> saw an optimize (forceMerge) fully rewrite most of the index *twice* in order 
> to achieve a single segment, because there were approximately 80 segments in 
> the index before the optimize, and maxMergeAtOnceExplicit defaults to 30.  On 
> advice given via the solr-user mailing list, I configured 
> maxMergeAtOnceExplicit to 105 and have not had that problem since.
> I propose that setting maxMergeAtOnce should also set maxMergeAtOnceExplicit 
> to three times the new value -- unless the setMaxMergeAtOnceExplicit method 
> has been invoked, indicating that the user wishes to set that value 
> themselves.
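
The proposed cascade can be sketched as follows (a hypothetical settings holder, not the real TieredMergePolicy): setting maxMergeAtOnce also raises maxMergeAtOnceExplicit to three times the new value, unless the user has already invoked the explicit setter.

```java
// Illustrative sketch of the proposal; defaults match TieredMergePolicy's
// documented defaults (10 and 30).
class CascadingMergeSettings {
    private int maxMergeAtOnce = 10;
    private int maxMergeAtOnceExplicit = 30;
    private boolean explicitWasSet = false;

    void setMaxMergeAtOnce(int v) {
        maxMergeAtOnce = v;
        if (!explicitWasSet) {
            maxMergeAtOnceExplicit = 3 * v; // the cascade proposed here
        }
    }

    void setMaxMergeAtOnceExplicit(int v) {
        maxMergeAtOnceExplicit = v;
        explicitWasSet = true;              // user opted out of cascading
    }

    int getMaxMergeAtOnceExplicit() { return maxMergeAtOnceExplicit; }
}
```

With the issue's numbers, setting maxMergeAtOnce to 35 would yield an explicit limit of 105, exactly the value the reporter ended up configuring by hand.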






[jira] [Commented] (LUCENE-7020) TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit

2016-02-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141065#comment-15141065
 ] 

Shai Erera commented on LUCENE-7020:


Sure, that's expected behavior, and it's documented as cascaded merges. The reason for 
these two settings is to better control system resources. And again, if you had 
a 150-segment index, would you change the setting to 150? I think that if you 
run forceMerge(1), you should expect a few rounds of merges, unless you feel 
comfortable with merging 150 segments at once.

But I don't think this is a relationship that we should hard-wire between these 
two settings.

> TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit
> 
>
> Key: LUCENE-7020
> URL: https://issues.apache.org/jira/browse/LUCENE-7020
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.4.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
> Attachments: LUCENE-7020.patch
>
>
> SOLR-8621 covers improvements in configuring a merge policy in Solr.
> Discussions on that issue brought up the fact that if large values are 
> configured for maxMergeAtOnce and segmentsPerTier, but maxMergeAtOnceExplicit 
> is not changed, then doing a forceMerge is likely to not work as expected.
> When I first configured maxMergeAtOnce and segmentsPerTier to 35 in Solr, I 
> saw an optimize (forceMerge) fully rewrite most of the index *twice* in order 
> to achieve a single segment, because there were approximately 80 segments in 
> the index before the optimize, and maxMergeAtOnceExplicit defaults to 30.  On 
> advice given via the solr-user mailing list, I configured 
> maxMergeAtOnceExplicit to 105 and have not had that problem since.
> I propose that setting maxMergeAtOnce should also set maxMergeAtOnceExplicit 
> to three times the new value -- unless the setMaxMergeAtOnceExplicit method 
> has been invoked, indicating that the user wishes to set that value 
> themselves.






[jira] [Comment Edited] (LUCENE-7020) TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit

2016-02-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141049#comment-15141049
 ] 

Shawn Heisey edited comment on LUCENE-7020 at 2/10/16 4:02 PM:
---

I have no benchmark data, only personal experience, which was a number of years 
ago:

With the two main settings for TMP at 35 (and no explicit setting), I saw the 
total number of segments during and after a full reindex hovering between 70 
and 100.  An optimize on an index like this turned out to be a two phase 
process -- creating a handful of very large segments and a few tiny segments, 
then a second pass where those segments were merged down to a single segment.  
After bumping maxMergeAtOnceExplicit to 105, an optimize completed in half the 
time and only did a single merge.
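
The two-pass behavior follows from simple arithmetic. Under a simplified model (not Lucene's actual merge scheduling) where each round merges groups of at most maxMergeAtOnceExplicit segments, ~80 segments with a limit of 30 need two rounds, while a limit of 105 finishes in one:

```java
// Back-of-the-envelope round counter; a simplification for illustration.
class MergeRounds {
    static int roundsToSingleSegment(int segments, int maxAtOnce) {
        int rounds = 0;
        while (segments > 1) {
            // each round collapses groups of up to maxAtOnce segments
            segments = (segments + maxAtOnce - 1) / maxAtOnce; // ceil division
            rounds++;
        }
        return rounds;
    }
}
```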



was (Author: elyograg):
I have no benchmark data, only personal experience, which was a number of years 
ago.  I only have this personal experience:

With the two main settings for TMP at 35 (and no explicit setting), I saw the 
total number of segments during and after a full reindex hovering between 70 
and 100.  An optimize on an index like this turned out to be a two phase 
process -- creating a handful of very large segments and a few tiny segments, 
then a second pass where those segments were merged down to a single segment.  
After bumping maxMergeAtOnceExplicit to 105, an optimize completed in half the 
time and only did a single merge.


> TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit
> 
>
> Key: LUCENE-7020
> URL: https://issues.apache.org/jira/browse/LUCENE-7020
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.4.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
> Attachments: LUCENE-7020.patch
>
>
> SOLR-8621 covers improvements in configuring a merge policy in Solr.
> Discussions on that issue brought up the fact that if large values are 
> configured for maxMergeAtOnce and segmentsPerTier, but maxMergeAtOnceExplicit 
> is not changed, then doing a forceMerge is likely to not work as expected.
> When I first configured maxMergeAtOnce and segmentsPerTier to 35 in Solr, I 
> saw an optimize (forceMerge) fully rewrite most of the index *twice* in order 
> to achieve a single segment, because there were approximately 80 segments in 
> the index before the optimize, and maxMergeAtOnceExplicit defaults to 30.  On 
> advice given via the solr-user mailing list, I configured 
> maxMergeAtOnceExplicit to 105 and have not had that problem since.
> I propose that setting maxMergeAtOnce should also set maxMergeAtOnceExplicit 
> to three times the new value -- unless the setMaxMergeAtOnceExplicit method 
> has been invoked, indicating that the user wishes to set that value 
> themselves.






[jira] [Commented] (LUCENE-7020) TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit

2016-02-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141049#comment-15141049
 ] 

Shawn Heisey commented on LUCENE-7020:
--

I have no benchmark data, only personal experience from a number of years ago:

With the two main settings for TMP at 35 (and no explicit setting), I saw the 
total number of segments during and after a full reindex hovering between 70 
and 100.  An optimize on an index like this turned out to be a two phase 
process -- creating a handful of very large segments and a few tiny segments, 
then a second pass where those segments were merged down to a single segment.  
After bumping maxMergeAtOnceExplicit to 105, an optimize completed in half the 
time and only did a single merge.


> TieredMergePolicy - cascade maxMergeAtOnce setting to maxMergeAtOnceExplicit
> 
>
> Key: LUCENE-7020
> URL: https://issues.apache.org/jira/browse/LUCENE-7020
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.4.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
> Attachments: LUCENE-7020.patch
>
>
> SOLR-8621 covers improvements in configuring a merge policy in Solr.
> Discussions on that issue brought up the fact that if large values are 
> configured for maxMergeAtOnce and segmentsPerTier, but maxMergeAtOnceExplicit 
> is not changed, then doing a forceMerge is likely to not work as expected.
> When I first configured maxMergeAtOnce and segmentsPerTier to 35 in Solr, I 
> saw an optimize (forceMerge) fully rewrite most of the index *twice* in order 
> to achieve a single segment, because there were approximately 80 segments in 
> the index before the optimize, and maxMergeAtOnceExplicit defaults to 30.  On 
> advice given via the solr-user mailing list, I configured 
> maxMergeAtOnceExplicit to 105 and have not had that problem since.
> I propose that setting maxMergeAtOnce should also set maxMergeAtOnceExplicit 
> to three times the new value -- unless the setMaxMergeAtOnceExplicit method 
> has been invoked, indicating that the user wishes to set that value 
> themselves.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3965 - Failure

2016-02-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3965/

1 tests failed.
FAILED:  org.apache.solr.index.hdfs.CheckHdfsIndexTest.doTest

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle 
the request.
at 
__randomizedtesting.SeedInfo.seed([23E51335C201027E:84A1AB91AFBA11C7]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1084)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:482)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1504)
at 
org.apache.solr.index.hdfs.CheckHdfsIndexTest.doTest(CheckHdfsIndexTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:964)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMark

[jira] [Reopened] (LUCENE-6976) BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null

2016-02-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reopened LUCENE-6976:


Reopen for backport to 5.4.2.

> BytesTermAttributeImpl.copyTo NPEs when the BytesRef is null
> 
>
> Key: LUCENE-6976
> URL: https://issues.apache.org/jira/browse/LUCENE-6976
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.5, 5.4.2
>
> Attachments: LUCENE_6976.patch, LUCENE_6976.patch
>
>
> The BytesTermAttributeImpl class, not used much I think, has a problem in its 
> copyTo method in which it assumes "bytes" isn't null since it calls 
> BytesRef.deepCopyOf on it.  Perhaps deepCopyOf should support null?  And 
> also, toString(), equals() and hashCode() aren't implemented but we can do so.
> This was discovered in SOLR-8541; the spatial PrefixTreeStrategy uses this 
> attribute and the CachingTokenFilter when used on the analysis chain will 
> call clearAttributes() in it's end() method and then capture the state so it 
> can be replayed later.  BytesTermAttributeImpl.clear() nulls out the bytes 
> reference.
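
The shape of the fix can be sketched with simplified stand-ins (not Lucene's real BytesRef/BytesTermAttributeImpl): copyTo must tolerate a null bytes reference, because clear() nulls it out before state is captured for replay.

```java
// Hypothetical simplified types for illustration only.
class BytesRefLike {
    final byte[] bytes;
    BytesRefLike(byte[] bytes) { this.bytes = bytes; }

    static BytesRefLike deepCopyOf(BytesRefLike other) {
        // null-safe variant: a cleared attribute has nothing to copy
        if (other == null) return null;
        return new BytesRefLike(other.bytes == null ? null : other.bytes.clone());
    }
}

class BytesTermAttrLike {
    BytesRefLike bytes;

    void clear() { bytes = null; }   // this is what made copyTo NPE

    void copyTo(BytesTermAttrLike target) {
        target.bytes = BytesRefLike.deepCopyOf(bytes);
    }
}
```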





