[jira] [Comment Edited] (SOLR-9922) Write buffering updates to another tlog

2017-01-04 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15800658#comment-15800658
 ] 

Cao Manh Dat edited comment on SOLR-9922 at 1/5/17 7:55 AM:


Updated patch
- fixed some bugs; all the tests pass
- HdfsUpdateLog stores buffered updates in HDFS
- buffered updates won't be replayed if a node dies while buffering; the 
update log will detect that old buffered updates exist and will delete the 
old buffer tlog when 
-- ensureLog() is called
-- a new buffer tlog is created

[~markrmil...@gmail.com] The only thing I'm concerned about is backward 
compatibility: the patch deletes FLAG_GAP and related code. But I think that 
when users upgrade Solr, their nodes must be in the active state, so the new 
version won't have to read old tlogs.
[~erickerickson] The CDCR tests passed, but I'm not sure the changes to 
CDCRUpdateLog are correct. Can you take a look at the patch?
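
The cleanup rule described above can be sketched in plain Java (hypothetical file naming and method names; the actual patch's tlog layout may differ): a buffer tlog left behind by a node that died while buffering is deleted rather than replayed, and the deletion happens when a new buffer tlog is created.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.stream.Stream;

// Illustrative sketch, not the actual UpdateLog code: stale buffer tlogs from
// a previous (crashed) buffering session are dropped, never replayed.
public class BufferTlogCleanup {

  static final String BUFFER_PREFIX = "buffer.tlog.";

  static boolean isBufferTlog(String fileName) {
    return fileName.startsWith(BUFFER_PREFIX);
  }

  // Delete any buffer tlog left behind by a node that died while buffering.
  static void deleteOldBufferTlogs(Path tlogDir) {
    try (Stream<Path> files = Files.list(tlogDir)) {
      files.filter(p -> isBufferTlog(p.getFileName().toString()))
           .forEach(p -> p.toFile().delete());
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // Starting a new buffering session first clears old buffers, then creates
  // a fresh buffer tlog.
  static Path newBufferTlog(Path tlogDir, long id) throws IOException {
    deleteOldBufferTlogs(tlogDir);
    return Files.createFile(tlogDir.resolve(BUFFER_PREFIX + id));
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("tlog");
    Files.createFile(dir.resolve(BUFFER_PREFIX + "0")); // stale buffer from a dead node
    Path fresh = newBufferTlog(dir, 1);                 // cleanup runs here
    System.out.println(Files.exists(dir.resolve(BUFFER_PREFIX + "0"))); // false
    System.out.println(Files.exists(fresh));                            // true
  }
}
```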


was (Author: caomanhdat):
Updated patch
- fixed some bugs; all the tests pass
- HdfsUpdateLog stores buffered updates in HDFS
- buffered updates won't be replayed if a node dies while buffering; the 
update log will detect that old buffered updates exist and will delete the 
old buffer tlog when 
-- ensureLog() is called
-- a new buffer tlog is created

[~erickerickson] The CDCR tests passed, but I'm not sure the changes to 
CDCRUpdateLog are correct. Can you take a look at the patch?

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffered updates to the current tlog without applying 
> them to the index; we then rely on log replay to apply them. But at the 
> same time, other updates are also written to the current tlog and applied 
> to the index immediately. 
> For example, during peersync, if new updates arrive at a replica, we end 
> up with a tlog like this:
> tlog : old1, new1, new2, old2, new3, old3
> The old updates belong to peersync and are applied to the index; the new 
> updates belong to buffering and are not applied to the index.
> Writing all updates to the same current tlog makes the code base very 
> complex. We should write buffered updates to a separate tlog file.
> Doing so keeps the code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after a successful peersync we can copy the new 
> updates from the temporary file to the current tlog, for example:
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3
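
The merge step at the end of the description can be sketched with plain lists (illustrative only, not the actual UpdateLog implementation): after a successful peersync, the buffered updates from the temporary tlog are simply appended to the main tlog.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of appending buffered updates after peersync.
public class TlogMergeSketch {

  static List<String> appendBuffered(List<String> tlog, List<String> bufferTlog) {
    List<String> merged = new ArrayList<>(tlog); // old1..old3 (peersync, applied)
    merged.addAll(bufferTlog);                   // new1..new3 (buffered, unapplied)
    return merged;
  }

  public static void main(String[] args) {
    List<String> tlog = Arrays.asList("old1", "old2", "old3");
    List<String> buffer = Arrays.asList("new1", "new2", "new3");
    System.out.println(appendBuffered(tlog, buffer));
    // [old1, old2, old3, new1, new2, new3]
  }
}
```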



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9922) Write buffering updates to another tlog

2017-01-04 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9922:
---
Attachment: SOLR-9922.patch

Updated patch
- fixed some bugs; all the tests pass
- HdfsUpdateLog stores buffered updates in HDFS
- buffered updates won't be replayed if a node dies while buffering; the 
update log will detect that old buffered updates exist and will delete the 
old buffer tlog when 
-- ensureLog() is called
-- a new buffer tlog is created

[~erickerickson] The CDCR tests passed, but I'm not sure the changes to 
CDCRUpdateLog are correct. Can you take a look at the patch?

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffered updates to the current tlog without applying 
> them to the index; we then rely on log replay to apply them. But at the 
> same time, other updates are also written to the current tlog and applied 
> to the index immediately. 
> For example, during peersync, if new updates arrive at a replica, we end 
> up with a tlog like this:
> tlog : old1, new1, new2, old2, new3, old3
> The old updates belong to peersync and are applied to the index; the new 
> updates belong to buffering and are not applied to the index.
> Writing all updates to the same current tlog makes the code base very 
> complex. We should write buffered updates to a separate tlog file.
> Doing so keeps the code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after a successful peersync we can copy the new 
> updates from the temporary file to the current tlog, for example:
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1203 - Still Unstable

2017-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1203/

9 tests failed.
FAILED:  
org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.testCollectionReload

Error Message:
Could not find collection : reloaded_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
reloaded_collection
at 
__randomizedtesting.SeedInfo.seed([C7F0D81851084BA6:C649F4268487E439]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:245)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.collectStartTimes(CollectionsAPIDistributedZkTest.java:589)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionReload(CollectionsAPIDistributedZkTest.java:530)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18701 - Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18701/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=6085, name=jetty-launcher-975-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)   
 2) Thread[id=6087, name=jetty-launcher-975-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=6085, name=jetty-launcher-975-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 597 - Failure!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/597/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=788, name=jetty-launcher-153-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)   
 2) Thread[id=789, name=jetty-launcher-153-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=788, name=jetty-launcher-153-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1057 - Still Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1057/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=3590, name=jetty-launcher-581-thread-1-SendThread(127.0.0.1:47358), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
2) Thread[id=3591, name=jetty-launcher-581-thread-1-EventThread, state=WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=3590, 
name=jetty-launcher-581-thread-1-SendThread(127.0.0.1:47358), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
   2) Thread[id=3591, name=jetty-launcher-581-thread-1-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
at __randomizedtesting.SeedInfo.seed([3CA79A4EDCCE81E0]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=3590, name=jetty-launcher-581-thread-1-SendThread(127.0.0.1:47358), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=3590, 
name=jetty-launcher-581-thread-1-SendThread(127.0.0.1:47358), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
at __randomizedtesting.SeedInfo.seed([3CA79A4EDCCE81E0]:0)


FAILED:  org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics

Error Message:
minorMerge: 3 expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: minorMerge: 3 expected:<4> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([3CA79A4EDCCE81E0:F077A7F21C407ADB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics(SolrIndexMetricsTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 

[jira] [Commented] (SOLR-9917) NPE in JSON facet merger

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15800320#comment-15800320
 ] 

ASF subversion and git services commented on SOLR-9917:
---

Commit 7ef6a8184682e046a6a8ce9166978c320d285a1a in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7ef6a81 ]

SOLR-9917: fix NPE in distrib percentiles when no values for field in bucket


> NPE in JSON facet merger
> 
>
> Key: SOLR-9917
> URL: https://issues.apache.org/jira/browse/SOLR-9917
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Markus Jelsma
>Assignee: Yonik Seeley
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9917.patch, SOLR-9917.patch
>
>
> I've spotted this before, and just now one of my unit tests triggered it as 
> well. I believe this happens when there is no data for the requested field.
> {code}
> java.lang.NullPointerException
> at java.nio.ByteBuffer.wrap(ByteBuffer.java:396)
> at 
> org.apache.solr.search.facet.PercentileAgg$Merger.merge(PercentileAgg.java:195)
> at 
> org.apache.solr.search.facet.FacetBucket.mergeBucket(FacetBucket.java:90)
> at 
> org.apache.solr.search.facet.FacetRequestSortedMerger.mergeBucketList(FacetRequestSortedMerger.java:61)
> at 
> org.apache.solr.search.facet.FacetRangeMerger.mergeBucketList(FacetRangeMerger.java:27)
> at 
> org.apache.solr.search.facet.FacetRangeMerger.merge(FacetRangeMerger.java:91)
> at 
> org.apache.solr.search.facet.FacetRangeMerger.merge(FacetRangeMerger.java:43)
> at 
> org.apache.solr.search.facet.FacetBucket.mergeBucket(FacetBucket.java:90)
> at 
> org.apache.solr.search.facet.FacetQueryMerger.merge(FacetModule.java:444)
> at 
> org.apache.solr.search.facet.FacetModule.handleResponses(FacetModule.java:272)
> {code}
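
A minimal illustration of the failure mode and the guard (plain Java, not the actual PercentileAgg code; the method here is hypothetical): a shard with no values for the requested field can contribute a null byte[] to the merge, and `ByteBuffer.wrap(null)` throws an NPE, so null contributions must be skipped before wrapping.

```java
import java.nio.ByteBuffer;

// Sketch of the null-guard fix pattern: skip shards whose bucket had no
// values for the field instead of wrapping their (null) serialized digest.
public class PercentileMergeGuard {

  static int mergedBytes(byte[][] shardDigests) {
    int total = 0;
    for (byte[] digest : shardDigests) {
      if (digest == null) {
        continue; // no values for the field in this shard's bucket
      }
      total += ByteBuffer.wrap(digest).remaining();
    }
    return total;
  }

  public static void main(String[] args) {
    byte[][] shards = { new byte[] {1, 2, 3}, null, new byte[] {4} };
    System.out.println(mergedBytes(shards)); // 4 (the null shard is skipped)
  }
}
```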






Re: 6.4 release

2017-01-04 Thread Shalin Shekhar Mangar
Aggregated metrics will not be in 6.4 -- that work is still in progress.

On Thu, Jan 5, 2017 at 6:57 AM, S G  wrote:
> +1 for adding the metric related changes.
> Aggregated metrics from from replicas sounds like a very nice thing to have.
>
> On Wed, Jan 4, 2017 at 12:11 PM, Varun Thacker  wrote:
>>
>> +1 to cut a release branch on monday. Lots of goodies in this release!
>>
>> On Tue, Jan 3, 2017 at 8:23 AM, jim ferenczi 
>> wrote:
>>>
>>> Hi,
>>> I would like to volunteer to release 6.4. I can cut the release branch
>>> next Monday if everybody agrees.
>>>
>>> Jim
>>
>>
>



-- 
Regards,
Shalin Shekhar Mangar.




[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 622 - Failure!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/622/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 67619 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1666770269
 [ecj-lint] Compiling 1014 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1666770269
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 203)
 [ecj-lint] new JavaBinCodec(resolver) {
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocument(SolrDocument doc) {
 [ecj-lint]   callback.streamSolrDocument(doc);
 [ecj-lint]   //super.writeSolrDocument( doc, fields );
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocumentList(SolrDocumentList 
docs) throws IOException {
 [ecj-lint]   if (docs.size() > 0) {
 [ecj-lint] SolrDocumentList tmp = new SolrDocumentList();
 [ecj-lint] tmp.setMaxScore(docs.getMaxScore());
 [ecj-lint] tmp.setNumFound(docs.getNumFound());
 [ecj-lint] tmp.setStart(docs.getStart());
 [ecj-lint] docs = tmp;
 [ecj-lint]   }
 [ecj-lint]   callback.streamDocListInfo(docs.getNumFound(), 
docs.getStart(), docs.getMaxScore());
 [ecj-lint]   super.writeSolrDocumentList(docs);
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint]   }.setWritableDocFields(resolver). 
marshal(rsp.getValues(), out);
 [ecj-lint] 

 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 227)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 

[jira] [Updated] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-04 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9928:

Attachment: SOLR-9928.patch

Discovered that my original patch was incomplete: we need to unwrap one layer 
of directories here, otherwise CachingDirectory fails to find the directory 
in the cache.

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Drob
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.
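
The bug class described here can be shown with a generic wrapper (illustrative only, not the actual MetricsDirectoryFactory code): calling `super` instead of the delegate silently bypasses the wrapped object's behavior.

```java
// Minimal delegate-vs-super demonstration.
public class DelegateVsSuper {

  static class Base {
    String rename(String name) { return "base:" + name; }
  }

  static class Delegate extends Base {
    @Override String rename(String name) { return "delegate:" + name; }
  }

  static class BuggyWrapper extends Base {
    final Base delegate;
    BuggyWrapper(Base delegate) { this.delegate = delegate; }
    @Override String rename(String name) {
      return super.rename(name);    // BUG: bypasses the delegate
    }
  }

  static class FixedWrapper extends Base {
    final Base delegate;
    FixedWrapper(Base delegate) { this.delegate = delegate; }
    @Override String rename(String name) {
      return delegate.rename(name); // FIX: forward to the delegate
    }
  }

  public static void main(String[] args) {
    Base d = new Delegate();
    System.out.println(new BuggyWrapper(d).rename("x")); // base:x  (wrong)
    System.out.println(new FixedWrapper(d).rename("x")); // delegate:x
  }
}
```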



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9908) create SolrCloudDIHWriter to speedup DataImportHandler on SolrCloud

2017-01-04 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15800184#comment-15800184
 ] 

Alexandre Rafalovitch commented on SOLR-9908:
-

Would this still run in-process and work with the Admin UI? I think there were 
proposals to run it out-of-process, but then the UI (and probably the config) 
got messier.

> create SolrCloudDIHWriter to speedup DataImportHandler on SolrCloud
> ---
>
> Key: SOLR-9908
> URL: https://issues.apache.org/jira/browse/SOLR-9908
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
>
> Right now, if DIH is invoked in SolrCloud, it feeds docs one by one 
> synchronously via {{DistributedUpdateProcessor}}.  
> It's proposed to create a {{DIHWriter}} implementation which will stream docs 
> with {{CloudSolrClient}}. I expect per-shard parallelism, and even more with 
> {{CloudSolrClient.setParallelUpdates(true)}}.
> What's your feeling about it? 
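> The batching idea can be sketched without a running cluster; {{BatchingWriter}}
> and its sink are hypothetical stand-ins for a DIH writer handing whole batches
> to a cloud client (e.g. {{CloudSolrClient.add(batch)}}), rather than one doc at
> a time:
> {code}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.function.Consumer;
>
> // Hedged sketch of the proposed writer: accumulate documents and ship whole
> // batches to a sink instead of pushing each doc synchronously. The sink
> // abstracts the cloud client so the sketch runs stand-alone.
> class BatchingWriter {
>     private final List<String> buffer = new ArrayList<>();
>     private final int batchSize;
>     private final Consumer<List<String>> sink; // stands in for client.add(batch)
>
>     BatchingWriter(int batchSize, Consumer<List<String>> sink) {
>         this.batchSize = batchSize;
>         this.sink = sink;
>     }
>
>     void write(String doc) {
>         buffer.add(doc);
>         if (buffer.size() >= batchSize) flush();
>     }
>
>     void flush() {
>         if (!buffer.isEmpty()) {
>             sink.accept(new ArrayList<>(buffer)); // ship a whole batch at once
>             buffer.clear();
>         }
>     }
> }
> {code}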



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-7495.
-
   Resolution: Fixed
Fix Version/s: 6.4

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
>Assignee: Dennis Gove
> Fix For: 6.4
>
> Attachments: SOLR-7495.patch, SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> 
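
The exception quoted above points at the workaround its own message suggests:
index the faceted field with docValues. A hedged schema.xml fragment (field and
type names are illustrative, reconstructing what the stripped markup above
likely looked like):
{code}
<!-- Illustrative schema.xml fragment (names assumed): enabling docValues on
     the faceted int field is the workaround the error message suggests. -->
<field name="year" type="int" indexed="true" stored="true" docValues="true"/>
<fieldType name="int" class="solr.TrieIntField" precisionStep="0"
           positionIncrementGap="0"/>
{code}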

[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15800158#comment-15800158
 ] 

ASF subversion and git services commented on SOLR-7495:
---

Commit f9e3554838f3a43742928d11a7dcd9a8409e0c97 in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9e3554 ]

SOLR-7495: Support Facet.field on a non-DocValued, single-value, int field



[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15800146#comment-15800146
 ] 

ASF subversion and git services commented on SOLR-7495:
---

Commit 6e10e36216db9861530b4407866e292a89927a9e in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e10e36 ]

SOLR-7495: Support Facet.field on a non-DocValued, single-value, int field



[jira] [Assigned] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove reassigned SOLR-7495:
-

Assignee: Dennis Gove


[GitHub] lucene-solr pull request #:

2017-01-04 Thread xcl3721
Github user xcl3721 commented on the pull request:


https://github.com/apache/lucene-solr/commit/c9aaa771821eb0ca8dd6ce4650784d824fbe8c39#commitcomment-20367963
  
TrackingIndexWriter has been deleted.
Do you have any suggestion for NRT on version 6.3?



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 649 - Failure

2017-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/649/

All tests passed

Build Log:
[...truncated 67652 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1000755233
 [ecj-lint] Compiling 1014 source files to /tmp/ecj1000755233
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 203)
 [ecj-lint] new JavaBinCodec(resolver) {
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocument(SolrDocument doc) {
 [ecj-lint]   callback.streamSolrDocument(doc);
 [ecj-lint]   //super.writeSolrDocument( doc, fields );
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocumentList(SolrDocumentList 
docs) throws IOException {
 [ecj-lint]   if (docs.size() > 0) {
 [ecj-lint] SolrDocumentList tmp = new SolrDocumentList();
 [ecj-lint] tmp.setMaxScore(docs.getMaxScore());
 [ecj-lint] tmp.setNumFound(docs.getNumFound());
 [ecj-lint] tmp.setStart(docs.getStart());
 [ecj-lint] docs = tmp;
 [ecj-lint]   }
 [ecj-lint]   callback.streamDocListInfo(docs.getNumFound(), 
docs.getStart(), docs.getMaxScore());
 [ecj-lint]   super.writeSolrDocumentList(docs);
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint]   }.setWritableDocFields(resolver). 
marshal(rsp.getValues(), out);
 [ecj-lint] 

 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 227)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
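 [ecj-lint] (The two resource-leak warnings above flag inline JavaBinCodec
instances, which ecj evidently considers Closeable on this branch. A generic
sketch, with Codec as a hypothetical stand-in, of the try-with-resources
pattern that would silence such warnings:)

```java
// Sketch of the try-with-resources pattern behind the fix for ecj's
// "resource leak" warning: any AutoCloseable created inline should be held
// in a try-with-resources so close() runs on every path.
class Codec implements AutoCloseable {
    boolean closed = false;

    String marshal(String value) { return "[" + value + "]"; }

    @Override
    public void close() { closed = true; }
}

class EncodeDemo {
    static String encode(String value) {
        // Instead of: return new Codec().marshal(value);  // leak warning
        try (Codec codec = new Codec()) {
            return codec.marshal(value); // codec.close() runs even on throw
        }
    }
}
```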
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3756 - Still Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3756/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([CF72960E48DFA39C:4726A9D4E623CE64]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2592 - Failure!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2592/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 66007 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj180079
 [ecj-lint] Compiling 1014 source files to /tmp/ecj180079
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 203)
 [ecj-lint] new JavaBinCodec(resolver) {
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocument(SolrDocument doc) {
 [ecj-lint]   callback.streamSolrDocument(doc);
 [ecj-lint]   //super.writeSolrDocument( doc, fields );
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocumentList(SolrDocumentList 
docs) throws IOException {
 [ecj-lint]   if (docs.size() > 0) {
 [ecj-lint] SolrDocumentList tmp = new SolrDocumentList();
 [ecj-lint] tmp.setMaxScore(docs.getMaxScore());
 [ecj-lint] tmp.setNumFound(docs.getNumFound());
 [ecj-lint] tmp.setStart(docs.getStart());
 [ecj-lint] docs = tmp;
 [ecj-lint]   }
 [ecj-lint]   callback.streamDocListInfo(docs.getNumFound(), 
docs.getStart(), docs.getMaxScore());
 [ecj-lint]   super.writeSolrDocumentList(docs);
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint]   }.setWritableDocFields(resolver). 
marshal(rsp.getValues(), out);
 [ecj-lint] 

 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 227)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] 

[jira] [Comment Edited] (SOLR-9922) Write buffering updates to another tlog

2017-01-04 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799888#comment-15799888
 ] 

Cao Manh Dat edited comment on SOLR-9922 at 1/5/17 1:44 AM:


[~markrmil...@gmail.com] What do you think about these ideas?
- "If a node dies during buffering, when that node restarts we should not 
replay the buffered updates." I ran the cloud tests and didn't find any 
problems when we do that.
- Each time bufferUpdates() is called, we discard the previous buffered updates.


was (Author: caomanhdat):
[~markrmil...@gmail.com] What do you think about the idea that "If a node die 
during buffering, when that node restart we should not replay buffering 
updates?" I run the cloud tests and didn't find any problems when we do that.

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch
>
>
> Currently, we write buffered updates to the current tlog without applying them 
> to the index, and rely on log replay to apply them later. But at the same time 
> some other updates are also written to the current tlog and applied to the 
> index. 
> For example, during peersync, if new updates come to a replica we will end up 
> with this tlog:
> tlog : old1, new1, new2, old2, new3, old3
> The old updates belong to peersync, and these updates are applied to the index.
> The new updates are buffered updates, and these are not applied to the index.
> Writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffered updates to another tlog file.
> Doing this keeps our code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after peersync succeeds we can copy the new updates 
> from the temporary file to the current tlog, for example:
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3
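The copy step described above amounts to appending the buffer tlog's entries, in order, to the end of the main tlog once peersync succeeds. A hypothetical sketch, with tlog files modeled as lists of update ids rather than the real tlog format:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in: tlogs modeled as lists of update ids.
class TlogMergeSketch {
    // After a successful peersync, buffered updates are appended in order
    // to the end of the main tlog.
    static List<String> mergeAfterPeersync(List<String> mainTlog, List<String> bufferTlog) {
        List<String> merged = new ArrayList<>(mainTlog);
        merged.addAll(bufferTlog);
        return merged;
    }

    public static void main(String[] args) {
        List<String> tlog = Arrays.asList("old1", "old2", "old3");
        List<String> buffer = Arrays.asList("new1", "new2", "new3");
        // [old1, old2, old3, new1, new2, new3]
        System.out.println(mergeAfterPeersync(tlog, buffer));
    }
}
```

Because the two files never interleave, no per-entry flags are needed to tell peersync updates apart from buffered ones.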



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-7495:
--
Attachment: SOLR-7495.patch

This patch applies cleanly to branch_6x.

I saw the test fail before applying the change to SimpleFacets and pass after 
applying it. I'm going to run through all the tests before committing this 
change.

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch, SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet on an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> 

[jira] [Resolved] (SOLR-7431) stateFormat=2 is not respected when collections are recreated.

2017-01-04 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7431.
--
Resolution: Invalid

It's just invalid to expect Solr to recreate the ZooKeeper state when you blow 
away all the data in ZooKeeper, so closing. The stateFormat is not present; 
well, actually there's not even a clusterprops.json znode to read the property 
from.

Actually, thinking about it some more, this is kind of cool behavior. I can blow 
away all of my ZK data and _still_ get my collections back. All I have to do is 
push the configsets back to ZK after starting all my Solr nodes. The fact that 
clusterstate.json is where the collections end up is easily remedied by 
MIGRATESTATEFORMAT.

> stateFormat=2 is not respected when collections are recreated.
> --
>
> Key: SOLR-7431
> URL: https://issues.apache.org/jira/browse/SOLR-7431
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1
>Reporter: Erick Erickson
>
> The following steps produce a clusterstate.json at the root of the Solr hive 
> in ZK rather than a state.json node under the collection.
> 0> Start separate ZK locally
> 1> create a collection with stock 5.x
> 2> shut down all solr nodes and Zookeeper
> 3> rm -rf /tmp/zookeeper
> 4> start ZK
> 5> upconfig your configuration
> 6> start the solr nodes.
> When I created the nodes originally, the collection had a state.json node 
> below it, but after the above there was only the uber-clusterstate.json node 
> and no state.json below the collection.
> Even in this state, new collections have the state.json as a child of the 
> collection.
> I'm not quite sure what happens if you do something less drastic than blowing 
> the entire ZK configuration away. Say you remove just one replica's 
> information from the ZK state; it's not clear to me where it gets re-added.
> Of course this is not something that's very common. I ran into it trying to 
> hack an upgrade from 4.x to 5.x and migrate from the single clusterstate.json 
> to state-per-collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Installing PyLucene

2017-01-04 Thread Andi Vajda

> On Jan 4, 2017, at 13:51, marco turchi  wrote:
> 
> No I didn't.
> 
> I have run the codes in sample and they work. For my project the
> functionalities in the samples are enough. If necessary I can recompile jcc
> with --shared. What do you suggest?

If you don't use --shared, then the jcc that is linked into PyLucene is not 
running in shared mode, and the test failure you're seeing is due to that.

It's easy enough to rebuild PyLucene with --shared.
Up to you!

Andi..

> 
> Best
> Marco
> 
> 
> Il 04 Gen 2017 19:42, "Andi Vajda"  ha scritto:
> 
> 
>> On Jan 4, 2017, at 04:24, marco turchi  wrote:
>> 
>> Dear Andi and Thomas,
>> following your advice I have removed the Windows error.
>> 
>> I still have this
>> 
>> ERROR: testThroughLayerException (__main__.PythonExceptionTestCase)
>> 
>> To answer Andi, I printed config.SHARED just before the error and the
>> output is true, which in my opinion shows that shared mode is enabled
>> when running the tests. Is this what you were mentioning in your email?
> 
> When you built PyLucene did you include --shared on the jcc invocation
> command line ?
> 
> Andi..
> 
>> 
>> Thanks a lot for your help!
>> Marco
>> 
>> 
>> On Wed, Jan 4, 2017 at 10:59 AM, Petrus Hyvönen 
>> wrote:
>> 
>>> Dear Thomas,
>>> 
>>> I would be very interested in a python 3 port of JCC. I am not a very
>>> skilled developer, looked at starting a development based on the old
>>> python-3 version but it's beyond my current skills.
>>> 
>>> I would be happy to help and test and review the JCC patches, I think
> your
>>> patches would be a valuable contribution to JCC.
>>> 
>>> With Best Regards
>>> /Petrus
>>> 
>>> 
>>> On Wed, Jan 4, 2017 at 9:13 AM, Thomas Koch  wrote:
>>> 
> NameError: global name 'WindowsError' is not defined
 
 Note that PyLucene currently lacks official Python3 support!
 We've done a port of PyLucene 3.6 (!) to support Python3 and offered the
 patches needed to JCC and PyLucene for use/review on the list - but
>>> didn't
 get any feedback so far.
 cf. https://www.mail-archive.com/pylucene-dev@lucene.apache.org/msg02167.html 
 
 Regards,
 Thomas
 
>>> --
>>> _
>>> Petrus Hyvönen, Uppsala, Sweden
>>> Mobile Phone/SMS:+46 73 803 19 00
>>> 



[jira] [Issue Comment Deleted] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-7495:
--
Comment: was deleted

(was: I applied the patch for just SimpleFacetsTest.java to branch_6x and ran 
the tests and it did not fail. Is this still an issue in 6x?)

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet on an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> 

Re: 6.4 release

2017-01-04 Thread S G
+1 for adding the metric-related changes.
Aggregated metrics from replicas sound like a very nice thing to have.

On Wed, Jan 4, 2017 at 12:11 PM, Varun Thacker  wrote:

> +1 to cut a release branch on monday. Lots of goodies in this release!
>
> On Tue, Jan 3, 2017 at 8:23 AM, jim ferenczi 
> wrote:
>
>> Hi,
>> I would like to volunteer to release 6.4. I can cut the release branch
>> next Monday if everybody agrees.
>>
>> Jim
>>
>
>


[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2017-01-04 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799933#comment-15799933
 ] 

Dennis Gove commented on SOLR-7495:
---

I applied the patch for just SimpleFacetsTest.java to branch_6x and ran the 
tests and it did not fail. Is this still an issue in 6x?

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet on an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> 

[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2017-01-04 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799888#comment-15799888
 ] 

Cao Manh Dat commented on SOLR-9922:


[~markrmil...@gmail.com] What do you think about the idea that "if a node dies 
during buffering, when that node restarts we should not replay the buffered 
updates"? I ran the cloud tests and didn't find any problems when we do that.

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9922.patch
>
>
> Currently, we write buffered updates to the current tlog without applying them 
> to the index, and rely on log replay to apply them later. But at the same time 
> some other updates are also written to the current tlog and applied to the 
> index. 
> For example, during peersync, if new updates come to a replica we will end up 
> with this tlog:
> tlog : old1, new1, new2, old2, new3, old3
> The old updates belong to peersync, and these updates are applied to the index.
> The new updates are buffered updates, and these are not applied to the index.
> Writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffered updates to another tlog file.
> Doing this keeps our code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after peersync succeeds we can copy the new updates 
> from the temporary file to the current tlog, for example:
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2017-01-04 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799852#comment-15799852
 ] 

Mike Drob commented on SOLR-9836:
-

Never mind, the CoreAdminRequestStatusTest failures I saw were due to an 
environment issue on my end. I think everything is passing for me now.

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch, 
> SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+147) - Build # 18699 - Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18699/
Java: 64bit/jdk-9-ea+147 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSynced node did not become leader expected:<[https://127.0.0.1:42151/ipown/al/collection1]> but was:<[https://127.0.0.1:45471/ipown/al/collection1]>

Stack Trace:
java.lang.AssertionError: PeerSynced node did not become leader 
expected:<[https://127.0.0.1:42151/ipown/al/collection1]> 
but was:<[https://127.0.0.1:45471/ipown/al/collection1]>
at 
__randomizedtesting.SeedInfo.seed([1C27B93464235B84:947386EECADF367C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:157)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Closed] (SOLR-5616) Make grouping code use response builder needDocList

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-5616.
-
Resolution: Fixed

> Make grouping code use response builder needDocList
> ---
>
> Key: SOLR-5616
> URL: https://issues.apache.org/jira/browse/SOLR-5616
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>Assignee: Dennis Gove
> Fix For: 6.4
>
> Attachments: SOLR-5616.patch, SOLR-5616.patch, SOLR-5616.patch
>
>
> Right now the grouping code does this to check if it needs to generate a 
> docList for grouped results:
> {code}
> if (rb.doHighlights || rb.isDebug() || params.getBool(MoreLikeThisParams.MLT, 
> false) ){
> ...
> }
> {code}
> This is ugly because any new component that needs a docList from grouped 
> results will need to modify QueryComponent to add a check to this if. 
> Ideally this should just use the rb.isNeedDocList() flag...
> Coincidentally, this boolean is really never used; for non-grouped results 
> the docList always gets generated.
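A hypothetical sketch of the refactor being discussed: instead of hard-coding each consumer into the if condition, components raise a single needDocList flag that the grouping code checks. The class below is a simplified stand-in, not the actual Solr ResponseBuilder API.

```java
// Simplified stand-in for ResponseBuilder; not the real Solr class.
class ResponseBuilderSketch {
    private boolean needDocList = false;

    // Any component that needs a docList for grouped results raises the flag.
    // OR-ing means a later component that doesn't need it can't clear it.
    void setNeedDocList(boolean need) {
        needDocList = needDocList || need;
    }

    // The grouping code then only has to check one place,
    // instead of enumerating every consumer (highlighting, debug, MLT, ...).
    boolean isNeedDocList() {
        return needDocList;
    }
}
```

With this shape, adding a new docList consumer means calling setNeedDocList(true) from that component rather than editing the condition inside QueryComponent.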



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5616) Make grouping code use response builder needDocList

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-5616:
--
Fix Version/s: 6.4

> Make grouping code use response builder needDocList
> ---
>
> Key: SOLR-5616
> URL: https://issues.apache.org/jira/browse/SOLR-5616
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>Assignee: Dennis Gove
> Fix For: 6.4
>
> Attachments: SOLR-5616.patch, SOLR-5616.patch, SOLR-5616.patch
>
>
> Right now the grouping code does this to check if it needs to generate a 
> docList for grouped results:
> {code}
> if (rb.doHighlights || rb.isDebug() || params.getBool(MoreLikeThisParams.MLT, 
> false) ){
> ...
> }
> {code}
> This is ugly because any new component that needs a docList from grouped 
> results will need to modify QueryComponent to add a check to this if. 
> Ideally this should just use the rb.isNeedDocList() flag...
> Coincidentally, this boolean is really never used; for non-grouped results 
> the docList always gets generated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-5616) Make grouping code use response builder needDocList

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove reopened SOLR-5616:
---
  Assignee: Dennis Gove  (was: Erick Erickson)

> Make grouping code use response builder needDocList
> ---
>
> Key: SOLR-5616
> URL: https://issues.apache.org/jira/browse/SOLR-5616
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>Assignee: Dennis Gove
> Fix For: 6.4
>
> Attachments: SOLR-5616.patch, SOLR-5616.patch, SOLR-5616.patch
>
>
> Right now the grouping code does this to check if it needs to generate a 
> docList for grouped results:
> {code}
> if (rb.doHighlights || rb.isDebug() || params.getBool(MoreLikeThisParams.MLT, 
> false) ){
> ...
> }
> {code}
> This is ugly because any new component that needs a docList from grouped 
> results will need to modify QueryComponent to add a check to this if. 
> Ideally this should just use the rb.isNeedDocList() flag...
> Coincidentally, this boolean is really never used; for non-grouped results 
> the docList always gets generated.






[jira] [Commented] (SOLR-5616) Make grouping code use response builder needDocList

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799792#comment-15799792
 ] 

ASF subversion and git services commented on SOLR-5616:
---

Commit b1ce385302d055e53e51f364d88482cf7e24ad6f in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b1ce385 ]

SOLR-5616: Simplifies grouping code to use ResponseBuilder.needDocList() to 
determine if it needs to generate a doc list for grouped results.


> Make grouping code use response builder needDocList
> ---
>
> Key: SOLR-5616
> URL: https://issues.apache.org/jira/browse/SOLR-5616
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>Assignee: Erick Erickson
> Fix For: 6.4
>
> Attachments: SOLR-5616.patch, SOLR-5616.patch, SOLR-5616.patch
>
>
> Right now the grouping code does this to check if it needs to generate a 
> docList for grouped results:
> {code}
> if (rb.doHighlights || rb.isDebug() || params.getBool(MoreLikeThisParams.MLT, false)) {
> ...
> }
> {code}
> This is ugly because any new component that needs a docList from grouped 
> results must modify QueryComponent to add another check to this if-statement. 
> Ideally this should just use the rb.isNeedDocList() flag...
> Coincidentally, this boolean is never actually used; for non-grouped results 
> the docList always gets generated.






[jira] [Closed] (SOLR-3990) index size unavailable in gui/mbeans unless replication handler configured

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-3990.
-
Resolution: Fixed

Committed into both master and branch_6x.

Thank you, Shawn!

> index size unavailable in gui/mbeans unless replication handler configured
> --
>
> Key: SOLR-3990
> URL: https://issues.apache.org/jira/browse/SOLR-3990
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-3990.patch, SOLR-3990.patch, SOLR-3990.patch
>
>
> Unless you configure the replication handler, the on-disk size of each core's 
> index seems to be unavailable in the gui or from the mbeans handler.  If you 
> are not doing replication, you should still be able to get the size of each 
> index without configuring things that won't be used.
> Also, I would like to get the size of the index in a consistent unit of 
> measurement, probably MB.  I understand the desire to give people a human 
> readable unit next to a number that's not enormous, but it's difficult to do 
> programmatic comparisons between values such as 787.33 MB and 23.56 GB.  That 
> may mean that the number needs to be available twice, one format to be shown 
> in the admin GUI and both formats available from the mbeans handler, for 
> scripting.
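The request for a consistent unit alongside a human-readable display can be sketched as below; the method names and the byte count are illustrative, not Solr's actual API:

```java
import java.util.Locale;

// Sketch of exposing an index size both in a fixed unit (MB, easy to compare
// in scripts) and as a human-readable string (for the admin GUI). Method
// names are illustrative, not Solr's actual API.
public class IndexSizeSketch {
    static double sizeInMB(long bytes) {
        return bytes / (1024.0 * 1024.0);
    }

    static String humanReadable(long bytes) {
        String[] units = {"bytes", "KB", "MB", "GB", "TB"};
        double size = bytes;
        int u = 0;
        while (size >= 1024 && u < units.length - 1) {
            size /= 1024;
            u++;
        }
        return String.format(Locale.ROOT, "%.2f %s", size, units[u]);
    }

    public static void main(String[] args) {
        long bytes = 825_536_921L;                 // illustrative on-disk size
        System.out.println(sizeInMB(bytes));       // fixed unit, script-friendly
        System.out.println(humanReadable(bytes));  // prints "787.29 MB"
    }
}
```

Publishing both values, as the issue suggests, lets the GUI show the readable form while scripts compare the raw MB numbers directly.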






[jira] [Commented] (SOLR-3990) index size unavailable in gui/mbeans unless replication handler configured

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799743#comment-15799743
 ] 

ASF subversion and git services commented on SOLR-3990:
---

Commit 973a48e3e7fddef2ee3a21864783423974d6c6b7 in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=973a48e ]

SOLR-3990: Moves getIndexSize() from ReplicationHandler to SolrCore


> index size unavailable in gui/mbeans unless replication handler configured
> --
>
> Key: SOLR-3990
> URL: https://issues.apache.org/jira/browse/SOLR-3990
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-3990.patch, SOLR-3990.patch, SOLR-3990.patch
>
>
> Unless you configure the replication handler, the on-disk size of each core's 
> index seems to be unavailable in the gui or from the mbeans handler.  If you 
> are not doing replication, you should still be able to get the size of each 
> index without configuring things that won't be used.
> Also, I would like to get the size of the index in a consistent unit of 
> measurement, probably MB.  I understand the desire to give people a human 
> readable unit next to a number that's not enormous, but it's difficult to do 
> programmatic comparisons between values such as 787.33 MB and 23.56 GB.  That 
> may mean that the number needs to be available twice, one format to be shown 
> in the admin GUI and both formats available from the mbeans handler, for 
> scripting.






[jira] [Commented] (SOLR-3990) index size unavailable in gui/mbeans unless replication handler configured

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799707#comment-15799707
 ] 

ASF subversion and git services commented on SOLR-3990:
---

Commit bd39ae9c9d8500d92306478fb51ee6e19009cee9 in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd39ae9 ]

SOLR-3990: Moves getIndexSize() from ReplicationHandler to SolrCore


> index size unavailable in gui/mbeans unless replication handler configured
> --
>
> Key: SOLR-3990
> URL: https://issues.apache.org/jira/browse/SOLR-3990
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-3990.patch, SOLR-3990.patch, SOLR-3990.patch
>
>
> Unless you configure the replication handler, the on-disk size of each core's 
> index seems to be unavailable in the gui or from the mbeans handler.  If you 
> are not doing replication, you should still be able to get the size of each 
> index without configuring things that won't be used.
> Also, I would like to get the size of the index in a consistent unit of 
> measurement, probably MB.  I understand the desire to give people a human 
> readable unit next to a number that's not enormous, but it's difficult to do 
> programmatic comparisons between values such as 787.33 MB and 23.56 GB.  That 
> may mean that the number needs to be available twice, one format to be shown 
> in the admin GUI and both formats available from the mbeans handler, for 
> scripting.






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 666 - Failure

2017-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/666/

No tests ran.

Build Log:
[...truncated 41971 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (33.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 30.5 MB in 0.02 sec (1229.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 65.0 MB in 0.05 sec (1224.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.9 MB in 0.06 sec (1222.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6180 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6180 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (70.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 40.1 MB in 0.04 sec (1129.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 140.5 MB in 0.12 sec (1142.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 150.1 MB in 0.13 sec (1160.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=20428). Happy searching!
   

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_112) - Build # 6334 - Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6334/
Java: 64bit/jdk1.8.0_112 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSynced node did not become leader expected:http://127.0.0.1:56853/fhxg/ef/collection1]> but was:http://127.0.0.1:56841/fhxg/ef/collection1]>

Stack Trace:
java.lang.AssertionError: PeerSynced node did not become leader 
expected:http://127.0.0.1:56853/fhxg/ef/collection1]> 
but was:http://127.0.0.1:56841/fhxg/ef/collection1]>
at 
__randomizedtesting.SeedInfo.seed([A8553689F18A71C4:200109535F761C3C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-7614) Allow single term "phrase" in complexphrase queryparser

2017-01-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799548#comment-15799548
 ] 

Michael McCandless commented on LUCENE-7614:


Maybe check for {{MultiTermQuery}} instead of separately for 
{{Wildcard,Prefix,TermRangeQuery}}?

Do you want to include {{PointRangeQuery}} too (not sure this QP ever parses 
that)?  It's not a {{MultiTermQuery}} so you'd need to also check for it, if so.
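A minimal sketch of the suggestion, using stand-in classes that mirror the relevant slice of Lucene's hierarchy (in Lucene, WildcardQuery, PrefixQuery, and TermRangeQuery all extend MultiTermQuery, while PointRangeQuery does not):

```java
// Stand-ins mirroring the relevant slice of Lucene's Query hierarchy.
abstract class Query {}
abstract class MultiTermQuery extends Query {}
class WildcardQuery extends MultiTermQuery {}
class PrefixQuery extends MultiTermQuery {}
class TermRangeQuery extends MultiTermQuery {}
class PointRangeQuery extends Query {}   // NOT a MultiTermQuery

public class QueryCheckSketch {
    // One check on the common superclass replaces three separate instanceof
    // checks; PointRangeQuery still needs its own clause since it sits
    // outside the MultiTermQuery hierarchy.
    static boolean allowedAsSingleTermPhrase(Query q) {
        return q instanceof MultiTermQuery || q instanceof PointRangeQuery;
    }

    public static void main(String[] args) {
        System.out.println(allowedAsSingleTermPhrase(new PrefixQuery()));     // true
        System.out.println(allowedAsSingleTermPhrase(new PointRangeQuery())); // true
    }
}
```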

> Allow single term "phrase" in complexphrase queryparser 
> 
>
> Key: LUCENE-7614
> URL: https://issues.apache.org/jira/browse/LUCENE-7614
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Mikhail Khludnev
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7614.patch
>
>
> {quote}
> From  Otmar Caduff 
> Subject   ComplexPhraseQueryParser with wildcards
> Date  Tue, 20 Dec 2016 13:55:42 GMT
> Hi,
> I have an index with a single document with a field "field" and textual
> content "johnny peters" and I am using
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser to
> parse the query:
>field: (john* peter)
> When searching with this query, I am getting the document as expected.
> However with this query:
>field: ("john*" "peter")
> I am getting the following exception:
> Exception in thread "main" java.lang.IllegalArgumentException: Unknown
> query type "org.apache.lucene.search.PrefixQuery" found in phrase query
> string "john*"
> at
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:268)
> {quote}






[jira] [Commented] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2017-01-04 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799515#comment-15799515
 ] 

Mike Drob commented on SOLR-9836:
-

I'm running some tests but am getting failures from, I think, SOLR-9928, and 
also some failures in CoreAdminRequestStatusTest, though I get those without my 
patch applied as well. I will try to track all of these down.

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch, 
> SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.
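The symptom described (a zero-length segments_N file) could be detected up front with a check like the following sketch; the method name and recovery policy are assumptions for illustration, not Solr's actual recovery code:

```java
import java.io.IOException;
import java.nio.file.*;

// Sketch: detect the reported corruption symptom (a zero-length segments_N
// file) in an index directory before attempting to load the core. The method
// name and recovery policy are assumptions, not Solr's actual code.
public class SegmentsCheck {
    static boolean hasEmptySegmentsFile(Path indexDir) throws IOException {
        try (DirectoryStream<Path> ds =
                 Files.newDirectoryStream(indexDir, "segments_*")) {
            for (Path p : ds) {
                if (Files.size(p) == 0) {
                    return true;  // candidate for recovery from another replica
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("idx");
        Files.createFile(dir.resolve("segments_1"));   // zero-length file
        System.out.println(hasEmptySegmentsFile(dir)); // prints "true"
    }
}
```

A check of this shape would let the core loader branch into a recovery path instead of failing outright.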






[jira] [Commented] (LUCENE-7055) Better execution path for costly queries

2017-01-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799488#comment-15799488
 ] 

Michael McCandless commented on LUCENE-7055:


Thanks!  Looks like a few {{estimateCost}}'s missed the replacement, e.g. 
private method in {{SimpleTextBKDReader}}, {{CheckIndex}}, {{BKDReader}}.

{{ScorerSupplier}} is great.

Otherwise patch looks awesome, thanks [~jpountz]!


> Better execution path for costly queries
> 
>
> Key: LUCENE-7055
> URL: https://issues.apache.org/jira/browse/LUCENE-7055
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Attachments: LUCENE-7055.patch, LUCENE-7055.patch, LUCENE-7055.patch
>
>
> In Lucene 5.0, we improved the execution path for queries that run costly 
> operations on a per-document basis, like phrase queries or doc values 
> queries. But we have another class of costly queries, that return fine 
> iterators, but these iterators are very expensive to build. This is typically 
> the case for queries that leverage DocIdSetBuilder, like TermsQuery, 
> multi-term queries or the new point queries. Intersecting such queries with a 
> selective query is very inefficient since these queries build a doc id set of 
> matching documents for the entire index.
> Is there something we could do to improve the execution path for these 
> queries?
> One idea that comes to mind is that most of these queries could also run on 
> doc values, so maybe we could come up with something that would help decide 
> how to run a query based on other parts of the query? (Just thinking out 
> loud, other ideas are very welcome)






[jira] [Updated] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2017-01-04 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9836:

Attachment: SOLR-9836.patch

Yep, you're right, the delete snuck in. Thanks for looking.

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch, 
> SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.






[jira] [Updated] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-04 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9928:

Attachment: SOLR-9928.patch

[~ab] - can you take a look, since this is mostly a follow on to SOLR-9854?

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mike Drob
> Attachments: SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.
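This is the classic decorator mistake of calling super instead of the wrapped delegate. A generic sketch (the class names echo the issue, the bodies are illustrative):

```java
// Generic sketch of the delegate-vs-super bug in a decorating factory.
class DirectoryFactory {
    void renameWithOverwrite(String from, String to) {
        // base implementation; operates on an unwrapped directory
    }
}

class MetricsDirectoryFactory extends DirectoryFactory {
    private final DirectoryFactory delegate;

    MetricsDirectoryFactory(DirectoryFactory delegate) { this.delegate = delegate; }

    @Override
    void renameWithOverwrite(String from, String to) {
        // The bug would be calling super.renameWithOverwrite(from, to), which
        // bypasses the wrapped factory entirely. A decorator must forward to
        // its delegate:
        delegate.renameWithOverwrite(from, to);
    }
}

public class DelegateSketch {
    // Counting subclass to observe whether the wrapped factory actually ran.
    static class CountingFactory extends DirectoryFactory {
        int calls;
        @Override void renameWithOverwrite(String from, String to) { calls++; }
    }

    public static void main(String[] args) {
        CountingFactory inner = new CountingFactory();
        new MetricsDirectoryFactory(inner).renameWithOverwrite("a", "b");
        System.out.println(inner.calls); // prints "1": the delegate was invoked
    }
}
```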






[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2017-01-04 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799360#comment-15799360
 ] 

Erick Erickson commented on SOLR-6246:
--

If you can stand to wait a bit, Solr 6.4 may be cut in the next couple of 
weeks. Otherwise, the only workaround I've seen is to have a separate core 
serve as your suggester, one that you don't reload.

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246-test.patch, SOLR-6246.patch
>
>
> LUCENE-5477 added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in is that a writer is 
> now persisted to support real-time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?






[jira] [Created] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-04 Thread Mike Drob (JIRA)
Mike Drob created SOLR-9928:
---

 Summary: MetricsDirectoryFactory::renameWithOverwrite incorrectly 
calls super
 Key: SOLR-9928
 URL: https://issues.apache.org/jira/browse/SOLR-9928
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mike Drob


MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
of super. Trivial patch forthcoming.






[jira] [Commented] (SOLR-9885) Make log management configurable

2017-01-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799353#comment-15799353
 ] 

Jan Høydahl commented on SOLR-9885:
---

Looks good to me. Have you tested for both Linux and Windows?

> Make log management configurable
> 
>
> Key: SOLR-9885
> URL: https://issues.apache.org/jira/browse/SOLR-9885
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.3
>Reporter: Mano Kovacs
> Attachments: SOLR-9885.patch, SOLR-9885.patch, SOLR-9885.patch
>
>
> The Solr start script performs log rotation and archiving, which fails if 
> Solr is deployed with a custom log configuration (a different log filename). 
> It is also inconvenient when using custom log rotation/management.
> https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L1464
> Proposing an environment setting, something like 
> {{SOLR_LOG_PRESTART_ROTATION}} (with default {{true}}), that makes the 
> execution of the four lines configurable.






[jira] [Updated] (SOLR-9917) NPE in JSON facet merger

2017-01-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9917:
---
Attachment: SOLR-9917.patch

OK, here's a patch + test.


> NPE in JSON facet merger
> 
>
> Key: SOLR-9917
> URL: https://issues.apache.org/jira/browse/SOLR-9917
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Markus Jelsma
>Assignee: Yonik Seeley
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9917.patch, SOLR-9917.patch
>
>
> I've spotted this before, and just now one of my unit tests does it as well. 
> I believe this happens when there is no data for the requested field.
> {code}
> java.lang.NullPointerException
> at java.nio.ByteBuffer.wrap(ByteBuffer.java:396)
> at 
> org.apache.solr.search.facet.PercentileAgg$Merger.merge(PercentileAgg.java:195)
> at 
> org.apache.solr.search.facet.FacetBucket.mergeBucket(FacetBucket.java:90)
> at 
> org.apache.solr.search.facet.FacetRequestSortedMerger.mergeBucketList(FacetRequestSortedMerger.java:61)
> at 
> org.apache.solr.search.facet.FacetRangeMerger.mergeBucketList(FacetRangeMerger.java:27)
> at 
> org.apache.solr.search.facet.FacetRangeMerger.merge(FacetRangeMerger.java:91)
> at 
> org.apache.solr.search.facet.FacetRangeMerger.merge(FacetRangeMerger.java:43)
> at 
> org.apache.solr.search.facet.FacetBucket.mergeBucket(FacetBucket.java:90)
> at 
> org.apache.solr.search.facet.FacetQueryMerger.merge(FacetModule.java:444)
> at 
> org.apache.solr.search.facet.FacetModule.handleResponses(FacetModule.java:272)
> {code}






RE: [5.5.x] Highlighter and rewriting MultiPhraseQuery

2017-01-04 Thread Thomas Kappler
Hi David,

Thanks for your quick help! We ended up making our custom query extend Query 
and wrap a MultiPhraseQuery instance instead of extending MPQ. This way, the 
right branch is taken in WeightedSpanTermExtractor.
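The wrap-instead-of-extend fix can be sketched with a stub standing in for Lucene's MultiPhraseQuery: because the wrapper is not an MPQ subclass, the highlighter's instanceof check for MPQ no longer matches and the rewrite branch is taken.

```java
import java.util.Collections;
import java.util.List;

// Stand-in for Lucene's MultiPhraseQuery; only the two accessors the custom
// query relied on are modeled here.
class MultiPhraseQuery {
    List<String[]> getTermArrays() { return Collections.emptyList(); }
    int[] getPositions() { return new int[0]; }
}

// Instead of `class WrappingQuery extends MultiPhraseQuery`, hold an instance.
// Highlighter code doing `q instanceof MultiPhraseQuery` no longer matches, so
// WeightedSpanTermExtractor falls through to the branch that calls rewrite().
class WrappingQuery /* extends Query in real code */ {
    private final MultiPhraseQuery wrapped = new MultiPhraseQuery();

    int[] positions() { return wrapped.getPositions(); }  // reuse MPQ internals
}

public class WrapSketch {
    public static void main(String[] args) {
        Object q = new WrappingQuery();
        System.out.println(q instanceof MultiPhraseQuery); // prints "false"
    }
}
```

Composition keeps access to getTermArrays()/getPositions() while changing how the highlighter's type checks classify the query.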

Just wanted to bring this to a conclusion and make the solution searchable for 
others.

Thanks,
Thomas


From: David Smiley [mailto:david.w.smi...@gmail.com]
Sent: Wednesday, December 21, 2016 10:40 AM
To: dev@lucene.apache.org
Cc: Nate Ko 
Subject: Re: [5.5.x] Highlighter and rewriting MultiPhraseQuery

Hi Thomas,

This is a constant source of maintenance in Lucene -- updating all of our 
highlighters to be aware of new queries.  Some of them require more maintenance 
than others; by far WSTE is the hotspot.  WSTE avoids calling 
query.rewrite(IndexReader) because for some queries it can be quite expensive.  
It can't really know for sure, in the face of custom queries, whether this is 
okay.  I recommend extending WSTE and adding some instanceof checks for your 
custom queries so that you do whatever the right thing is for your query.  If 
you use the UnifiedHighlighter, new in Lucene as of 6.3, there are several 
callback hooks provided for this sort of thing without exposing the guts of it, 
which uses WSTE.
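A rough sketch of the extend-and-check advice, using stand-in classes (the real WeightedSpanTermExtractor API differs and its extract() takes more arguments):

```java
// Minimal stand-in sketch of extending an extractor with an instanceof check
// for a custom query type: rewrite the custom query first, then hand the
// result back to the default extraction path.
import java.util.ArrayList;
import java.util.List;

abstract class Query { abstract Query rewrite(); }

class PhraseLike extends Query {
    final String term;
    PhraseLike(String term) { this.term = term; }
    Query rewrite() { return this; }
}

class CustomQuery extends Query {
    // Our query only becomes extractable after rewriting.
    Query rewrite() { return new PhraseLike("rewritten"); }
}

class SpanTermExtractor {
    final List<String> terms = new ArrayList<>();
    void extract(Query q) {
        if (q instanceof PhraseLike) {
            terms.add(((PhraseLike) q).term);
        }
        // unknown query types are silently skipped
    }
}

// The subclass adds an instanceof check for the custom query and rewrites
// it before delegating to the default extraction logic.
class CustomAwareExtractor extends SpanTermExtractor {
    @Override
    void extract(Query q) {
        if (q instanceof CustomQuery) {
            super.extract(q.rewrite());
        } else {
            super.extract(q);
        }
    }
}

public class ExtractorDemo {
    public static void main(String[] args) {
        SpanTermExtractor plain = new SpanTermExtractor();
        plain.extract(new CustomQuery());
        System.out.println(plain.terms);   // [] -- custom query missed

        SpanTermExtractor aware = new CustomAwareExtractor();
        aware.extract(new CustomQuery());
        System.out.println(aware.terms);   // [rewritten]
    }
}
```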

BTW it's obvious to me that Query needs some sort of visitor API.  It's very 
much related to maintaining the highlighters with respect to new/different 
queries.  
https://issues.apache.org/jira/browse/LUCENE-3041

Good luck,

~ David

On Dec 21, 2016, at 1:27 PM, Thomas Kappler 
> wrote:

Hi,

We have implemented a custom query that extends MultiPhraseQuery (MPQ) because 
it uses MPQ’s getTermArrays() and getPositions(). We’d like to use this query 
for highlighting, but we’re facing the following issue.

In highlighter/WeightedSpanTermExtractor, the extract() method does a series of 
instanceof checks. There is a special case for MPQ. This branch does not call 
rewrite(IndexReader) on the query, but our custom query needs rewriting to work 
properly.

As a test, I commented the MPQ branch in WeightedSpanTermExtractor so the code 
takes the last else branch, where rewrite(IndexReader) is called, and our tests 
pass.

My questions are

  *   Are we doing it wrong when our query *needs* rewriting? Our query logic 
needs the IndexReader that's passed in to rewrite(IndexReader). Where else 
would we put such code?
  *   If we aren’t doing it wrong, how can we use the highlighter? Extend Query 
instead of MPQ and copy the tracking of term arrays and positions from MPQ?

Thanks,
Thomas



[jira] [Assigned] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2017-01-04 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-9836:
-

Assignee: Mark Miller

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch, 
> SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.






[jira] [Commented] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2017-01-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799285#comment-15799285
 ] 

Mark Miller commented on SOLR-9836:
---

writeNewIndexProps no longer has the correct impl - needs to be properly merged 
up.







[jira] [Comment Edited] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2017-01-04 Thread Christoph Froeschel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799242#comment-15799242
 ] 

Christoph Froeschel edited comment on SOLR-6246 at 1/4/17 8:34 PM:
---

Hello.

I've looked at Andreas' comment. The problem is that I need to push some 
software to production, and for that I would rather have a stable release than 
a nightly build. On my test machine I have 6.2.1 installed. That's why I 
was looking for a workaround.

Best Regards
Christoph


was (Author: christoph froeschel):
Hello.

I've looked at Andreas' comment. The problem is that i need to push some 
software to production and for that I would not like to have a nightly build 
but a stable release. On my test machine I have 6.3 installed. That's why I was 
looking for a workaround.

Best Regards
Christoph

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246-test.patch, SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?






[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2017-01-04 Thread Christoph Froeschel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799242#comment-15799242
 ] 

Christoph Froeschel commented on SOLR-6246:
---

Hello.

I've looked at Andreas' comment. The problem is that I need to push some 
software to production, and for that I would rather have a stable release than 
a nightly build. On my test machine I have 6.3 installed. That's why I was 
looking for a workaround.

Best Regards
Christoph







[jira] [Updated] (LUCENE-7614) Allow single term "phrase" in complexphrase queryparser

2017-01-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-7614:
-
Attachment: LUCENE-7614.patch

Colleagues, can you have a look at a few rows in [^LUCENE-7614.patch]? I really 
want to shovel it into 6.4.

> Allow single term "phrase" in complexphrase queryparser 
> 
>
> Key: LUCENE-7614
> URL: https://issues.apache.org/jira/browse/LUCENE-7614
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Mikhail Khludnev
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7614.patch
>
>
> {quote}
> From  Otmar Caduff 
> Subject   ComplexPhraseQueryParser with wildcards
> Date  Tue, 20 Dec 2016 13:55:42 GMT
> Hi,
> I have an index with a single document with a field "field" and textual
> content "johnny peters" and I am using
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser to
> parse the query:
>field: (john* peter)
> When searching with this query, I am getting the document as expected.
> However with this query:
>field: ("john*" "peter")
> I am getting the following exception:
> Exception in thread "main" java.lang.IllegalArgumentException: Unknown
> query type "org.apache.lucene.search.PrefixQuery" found in phrase query
> string "john*"
> at
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:268)
> {quote}






Re: 6.4 release

2017-01-04 Thread Varun Thacker
+1 to cut a release branch on monday. Lots of goodies in this release!

On Tue, Jan 3, 2017 at 8:23 AM, jim ferenczi  wrote:

> Hi,
> I would like to volunteer to release 6.4. I can cut the release branch
> next Monday if everybody agrees.
>
> Jim
>


[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_112) - Build # 2590 - Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2590/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testAddAndDeleteReplica

Error Message:
Error from server at https://127.0.0.1:46218/solr: Could not fully remove 
collection: solrj_replicatests

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:46218/solr: Could not fully remove collection: 
solrj_replicatests
at 
__randomizedtesting.SeedInfo.seed([3176EEA35D335595:DFC697F65AFF422F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:610)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testAddAndDeleteReplica(CollectionsAPISolrJTest.java:210)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates

2017-01-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799165#comment-15799165
 ] 

Yonik Seeley commented on SOLR-8030:


bq. I have no idea how controversial ripping this mode of the updateLog out of 
Solr would be.

I've never liked the idea of putting custom processors after the 
DistributedURP.  To me, the latter was mostly an implementation detail.

bq.  I propose that the UpdateLog only get written to by DirectUpdateHandler, 
and thus we can better reason about it. 

I'd put it slightly differently... the user should not care about that 
implementation detail (whether the tlog is written in the processor or the 
handler, or some combination), and thus we should be free to do either.


> Transaction log does not store the update chain (or req params?) used for 
> updates
> -
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain, or any other details from 
> the original update request such as the request params, used during updates.
> Therefore tLog uses the default update chain, and a synthetic request, during 
> log replay.
> If we implement custom update logic with multiple distinct update chains that 
> use custom processors after DistributedUpdateProcessor, or if the default 
> chain uses processors whose behavior depends on other request params, then 
> log replay may be incorrect.
> Potentially problematic scenarios (need test cases):
> * DBQ where the main query string uses local param variables that refer to 
> other request params
> * custom Update chain set as {{default="true"}} using something like 
> StatelessScriptUpdateProcessorFactory after DUP where the script depends on 
> request params.
> * multiple named update chains with diff processors configured after DUP and 
> specific requests sent to diff chains -- ex: ParseDateProcessor w/ custom 
> formats configured after DUP in some special chains, but not in the default 
> chain
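A toy illustration of the replay mismatch described above (plain Java, not Solr's actual UpdateRequestProcessor API): an update processed through a custom chain at index time is replayed through the default chain, so the custom processing is silently lost:

```java
// A "chain" here is just a list of string-transforming processors. The tlog
// records only the document, not which chain produced it, so log replay
// falls back to the default chain and diverges from the original result.
import java.util.List;
import java.util.function.Function;

public class ReplayDemo {
    static String runChain(List<Function<String, String>> chain, String doc) {
        for (Function<String, String> p : chain) doc = p.apply(doc);
        return doc;
    }

    public static void main(String[] args) {
        // Stand-in for something like a ParseDateProcessor configured only
        // in a custom chain.
        Function<String, String> parseDate = d -> d + "+parsedDate";

        List<Function<String, String>> defaultChain = List.of();
        List<Function<String, String>> customChain = List.of(parseDate);

        String original = runChain(customChain, "doc1");
        // Replay knows nothing about the custom chain:
        String replayed = runChain(defaultChain, "doc1");

        System.out.println(original);                   // doc1+parsedDate
        System.out.println(replayed);                   // doc1
        System.out.println(original.equals(replayed));  // false
    }
}
```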






[jira] [Resolved] (SOLR-7466) Allow optional leading wildcards in complexphrase

2017-01-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-7466.

   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

> Allow optional leading wildcards in complexphrase
> -
>
> Key: SOLR-7466
> URL: https://issues.apache.org/jira/browse/SOLR-7466
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Andy hardin
>Assignee: Mikhail Khludnev
>  Labels: complexPhrase, query-parser, wildcards
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-7466.patch, SOLR-7466.patch
>
>
> Currently ComplexPhraseQParser (SOLR-1604) allows trailing wildcards on terms 
> in a phrase, but does not allow leading wildcards.  I would like the option 
> to be able to search for terms with both trailing and leading wildcards.  
> For example with:
> {!complexphrase allowLeadingWildcard=true} "j* *th"
> would match "John Smith", "Jim Smith", but not "John Schmitt"






[jira] [Updated] (SOLR-9927) revert HttpPartitionTest's avoidance of directUpdatesToLeadersOnly

2017-01-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9927:
--
Attachment: SOLR-9927.patch

> revert HttpPartitionTest's avoidance of directUpdatesToLeadersOnly
> --
>
> Key: SOLR-9927
> URL: https://issues.apache.org/jira/browse/SOLR-9927
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9927.patch
>
>







[jira] [Created] (SOLR-9927) revert HttpPartitionTest's avoidance of directUpdatesToLeadersOnly

2017-01-04 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-9927:
-

 Summary: revert HttpPartitionTest's avoidance of 
directUpdatesToLeadersOnly
 Key: SOLR-9927
 URL: https://issues.apache.org/jira/browse/SOLR-9927
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor









[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1056 - Still Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1056/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([31A2606C446B075C:B9F65FB6EA976AA4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates

2017-01-04 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799127#comment-15799127
 ] 

David Smiley commented on SOLR-8030:


You've read me correctly [~hossman] :-)  

I have no idea how controversial ripping this mode of the updateLog out of Solr 
would be.  To be clear, I propose that the UpdateLog only get written to by 
DirectUpdateHandler, and thus we can better reason about it.  I've been 
following [~caomanhdat]'s progress in SOLR-9835; the complexities of 
PeerSync were too much to deal with, so he totally punted on that for now.  Was 
that related to the fact that the updateLog can be in a buffering mode written 
to by DistributedURP?  Not sure.







[GitHub] lucene-solr issue #134: [SOLR-9926] Add an environment variable to zkcli to ...

2017-01-04 Thread hgadre
Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/134
  
@markrmiller can you take a look?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9926) Allow passing arbitrary java system properties to zkcli

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799104#comment-15799104
 ] 

ASF GitHub Bot commented on SOLR-9926:
--

Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/134
  
@markrmiller can you take a look?


> Allow passing arbitrary java system properties to zkcli
> ---
>
> Key: SOLR-9926
> URL: https://issues.apache.org/jira/browse/SOLR-9926
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> Currently zkcli does not allow passing arbitrary java system properties (e.g. 
> java.io.tmpdir). It only supports configuring ZK ACLs via 
> SOLR_ZK_CREDS_AND_ACLS environment variable. While we can overload 
> SOLR_ZK_CREDS_AND_ACLS with additional parameters, it seems unclean. This 
> jira is to add another environment variable to pass these parameters. 






[jira] [Commented] (SOLR-9926) Allow passing arbitrary java system properties to zkcli

2017-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799099#comment-15799099
 ] 

ASF GitHub Bot commented on SOLR-9926:
--

GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/134

[SOLR-9926] Add an environment variable to zkcli to pass java system 
properties



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9926_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #134


commit a308ecf6a675dbd6e26f3b79ef26814bd1d02efe
Author: Hrishikesh Gadre 
Date:   2017-01-04T18:53:24Z

[SOLR-9926] Add an environment variable to zkcli to pass java system 
properties










[GitHub] lucene-solr pull request #134: [SOLR-9926] Add an environment variable to zk...

2017-01-04 Thread hgadre
GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/134

[SOLR-9926] Add an environment variable to zkcli to pass java system 
properties



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9926_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #134


commit a308ecf6a675dbd6e26f3b79ef26814bd1d02efe
Author: Hrishikesh Gadre 
Date:   2017-01-04T18:53:24Z

[SOLR-9926] Add an environment variable to zkcli to pass java system 
properties




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7610) Migrate facets module from ValueSource to Double/LongValuesSource

2017-01-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799077#comment-15799077
 ] 

Adrien Grand commented on LUCENE-7610:
--

+1 but I think we should remove the deprecated {{getQuery}} methods as they are 
not expected to be called by users of the facets module.

> Migrate facets module from ValueSource to Double/LongValuesSource
> -
>
> Key: LUCENE-7610
> URL: https://issues.apache.org/jira/browse/LUCENE-7610
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7610.patch
>
>
> Unfortunately this doesn't allow us to break the facets dependency on the 
> queries module, because facets also uses TermsQuery - perhaps this should 
> move to core as well?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7611) Make suggester module use LongValuesSource

2017-01-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799064#comment-15799064
 ] 

Adrien Grand commented on LUCENE-7611:
--

+1 Maybe the constant helper should move to core? I imagine there will be more 
use-cases for it.

> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7609) Refactor expressions module to use DoubleValuesSource

2017-01-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799061#comment-15799061
 ] 

Adrien Grand commented on LUCENE-7609:
--

Also this change seems to be changing expression values to not have a value if 
any of the sub sources does not have a value while in the past it looks like 
all documents would have a value and we would treat sub sources that do not 
have a value as zero. I think we should either preserve this behaviour or 
document the new one?
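The two semantics being contrasted can be sketched outside Lucene (a generic illustration using OptionalDouble; these names are not the Lucene expressions API):

```java
import java.util.OptionalDouble;

// Illustrative sketch (not Lucene's API) of the two behaviors discussed:
// the old behavior coerced a missing sub-source value to 0, while the new
// behavior gives the expression no value if any sub-source is missing.
public class MissingValueSemantics {
    // expr = a + b, old semantics: a missing operand counts as 0
    static double oldSemantics(OptionalDouble a, OptionalDouble b) {
        return a.orElse(0.0) + b.orElse(0.0);
    }

    // new semantics: the expression has no value if any operand is missing
    static OptionalDouble newSemantics(OptionalDouble a, OptionalDouble b) {
        if (a.isEmpty() || b.isEmpty()) return OptionalDouble.empty();
        return OptionalDouble.of(a.getAsDouble() + b.getAsDouble());
    }

    public static void main(String[] args) {
        OptionalDouble present = OptionalDouble.of(2.0);
        OptionalDouble missing = OptionalDouble.empty();
        System.out.println(oldSemantics(present, missing));
        System.out.println(newSemantics(present, missing));
    }
}
```

A document missing one operand scores 2.0 under the old behavior but has no value at all under the new one, which is the user-visible difference the comment asks to preserve or document.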

> Refactor expressions module to use DoubleValuesSource
> -
>
> Key: LUCENE-7609
> URL: https://issues.apache.org/jira/browse/LUCENE-7609
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-7609.patch
>
>
> With DoubleValuesSource in core, we can refactor the expressions module to 
> use these instead of ValueSource, and remove the dependency of expressions on 
> the queries module in master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8542) Integrate Learning to Rank into Solr

2017-01-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799033#comment-15799033
 ] 

Christine Poerschke edited comment on SOLR-8542 at 1/4/17 7:01 PM:
---

The Solr Reference Guide content for SOLR-8542 (Integrate Learning to Rank into 
Solr) is currently on the tentatively named 
https://cwiki.apache.org/confluence/display/solr/Result+Reranking page. 
Suggestions for alternative page names would be very welcome.

The "Result Reranking" page would be placed after the "Result Grouping" and 
"Result Clustering" pages, suggestions for alternative placements would be 
welcome also.

The following files currently mention the "Result Reranking" page:
* https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr/README.md
* 
https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr/example/README.md
* 
https://github.com/apache/lucene-solr/blob/master/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml


was (Author: cpoerschke):
The Solr Reference Guide content for SOLR-8542 (Integrate Learning to Rank into 
Solr) is currently on the tentatively named 
https://cwiki.apache.org/confluence/display/solr/Result+Reranking page. 
Suggestions for alternative page names would be very welcome.

The "Result Reranking" page would be placed after the "Result Grouping" and 
"Result Clustering" pages, suggestions for alternative placements would be 
welcome also.

The following files currently mention the "Result Reranking" page:
* https://github.com/apache/lucene-solr/solr/contrib/ltr/README.md
* https://github.com/apache/lucene-solr/solr/contrib/ltr/example/README.md
* 
https://github.com/apache/lucene-solr/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7609) Refactor expressions module to use DoubleValuesSource

2017-01-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799047#comment-15799047
 ] 

Adrien Grand commented on LUCENE-7609:
--

bq.  The dependency on queries is kept for the moment, and the ValueSource 
versions of methods deprecated; these will be removed in a subsequent patch.

Why didn't you remove them in this patch? This change is breaking anyway, so 
maybe we should make things clean right away? For instance, I'm looking at the 
deprecated {{SimpleBindings.getValueSource}} which is not expected to be called 
by users of the expression module, so maybe we should remove it now?

> Refactor expressions module to use DoubleValuesSource
> -
>
> Key: LUCENE-7609
> URL: https://issues.apache.org/jira/browse/LUCENE-7609
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-7609.patch
>
>
> With DoubleValuesSource in core, we can refactor the expressions module to 
> use these instead of ValueSource, and remove the dependency of expressions on 
> the queries module in master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.4 release

2017-01-04 Thread Christine Poerschke (BLOOMBERG/ LONDON)
I'd like to finalise the name of the Learning-To-Rank (SOLR-8542) Solr Ref 
Guide page before the first RC is cut.

Details:

The Solr Reference Guide content for SOLR-8542 (Integrate Learning to Rank into 
Solr) is currently on the tentatively named 
https://cwiki.apache.org/confluence/display/solr/Result+Reranking page. 
Suggestions for alternative page names would be very welcome.

The "Result Reranking" page would be placed after the "Result Grouping" and 
"Result Clustering" pages, suggestions for alternative placements would be 
welcome also.

The following files currently mention the "Result Reranking" page:
* https://github.com/apache/lucene-solr/solr/contrib/ltr/README.md
* https://github.com/apache/lucene-solr/solr/contrib/ltr/example/README.md
* 
https://github.com/apache/lucene-solr/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml

From: dev@lucene.apache.org At: 01/04/17 08:52:44
To: dev@lucene.apache.org
Subject: Re: 6.4 release

+1 to cut the 6.4 branch and build an RC next Monday

On Wed, Jan 4, 2017 at 07:39, Andrzej Białecki wrote:


On 4 Jan 2017, at 05:00, Walter Underwood  wrote:
If the metrics changes get in, I will be very, very happy. We’ve been running 
with local hacks for four years.


Most of the key instrumentation is already in place. Shalin and I are working 
now on cloud-related metrics (transaction log metrics, aggregated metrics from 
replicas and from shard leaders). There’s one improvement that would be good to 
put into 6.4: SOLR-9911 Add a way to filter metrics by prefix in the 
MetricsHandler API. This would help a lot in optimizing the retrieval of 
selected metrics, from among the hundreds that we track now.
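The kind of prefix filtering SOLR-9911 proposes can be sketched generically (this is a plain-map illustration, not the actual MetricsHandler implementation):

```java
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Generic sketch of prefix-filtering a metrics registry, in the spirit of
// the SOLR-9911 proposal (not the actual MetricsHandler code): instead of
// serializing hundreds of metrics, return only those under a given prefix.
public class MetricsPrefixFilter {
    static SortedMap<String, Long> filterByPrefix(SortedMap<String, Long> metrics, String prefix) {
        SortedMap<String, Long> out = new TreeMap<>();
        for (Map.Entry<String, Long> e : metrics.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        SortedMap<String, Long> metrics = new TreeMap<>();
        metrics.put("UPDATE.updateHandler.adds", 12L);
        metrics.put("UPDATE.updateHandler.deletes", 3L);
        metrics.put("QUERY.httpShardHandler.requests", 40L);
        System.out.println(filterByPrefix(metrics, "UPDATE."));
    }
}
```

Only the two `UPDATE.`-prefixed entries survive the filter, which is what makes retrieval of selected metrics cheap.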

Andrzej


wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

  

On Jan 3, 2017, at 9:25 AM, Michael McCandless  
wrote:
+1, lots of good stuff for 6.4 already!  Thanks for volunteering Jim :)

Mike McCandless

http://blog.mikemccandless.com


On Tue, Jan 3, 2017 at 11:23 AM, jim ferenczi  wrote:

Hi,
I would like to volunteer to release 6.4. I can cut the release branch next
Monday if everybody agrees.

Jim


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2017-01-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799033#comment-15799033
 ] 

Christine Poerschke commented on SOLR-8542:
---

The Solr Reference Guide content for SOLR-8542 (Integrate Learning to Rank into 
Solr) is currently on the tentatively named 
https://cwiki.apache.org/confluence/display/solr/Result+Reranking page. 
Suggestions for alternative page names would be very welcome.

The "Result Reranking" page would be placed after the "Result Grouping" and 
"Result Clustering" pages, suggestions for alternative placements would be 
welcome also.

The following files currently mention the "Result Reranking" page:
* https://github.com/apache/lucene-solr/solr/contrib/ltr/README.md
* https://github.com/apache/lucene-solr/solr/contrib/ltr/example/README.md
* 
https://github.com/apache/lucene-solr/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15799024#comment-15799024
 ] 

ASF subversion and git services commented on SOLR-8542:
---

Commit 94dad5b68e5a46ea820514a43e3a759ef3c57716 in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=94dad5b ]

SOLR-8542: README and solr/contrib/ltr/example changes

details:
* reduced README in favour of equivalent Solr Ref Guide content and (new) 
example/README
* solr/contrib/ltr/example improvements and fixes

also:
* stop supporting '*' in Managed(Feature|Model)Store.doDeleteChild


> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2017-01-04 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9836:

Attachment: SOLR-9836.patch

Patch #5 - rebased on top of the Solr metrics changes.

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
> Attachments: SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch, 
> SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_112) - Build # 18697 - Unstable!

2017-01-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18697/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([8376DD186AA9128B:B22E2C2C4557F73]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

-ea assertions mismatch error when running JUnit test

2017-01-04 Thread Jennifer Coston
Hello,

I am trying to switch from using the SolrJ Embedded Server to the 
SolrJettyTestBase in my Junit tests. However, I am getting the following error:

java.lang.Exception: Assertions mismatch: -ea was not specified but 
-Dtests.asserts=true

at __randomizedtesting.SeedInfo.seed([5B25E606A72BD541]:0)

at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:47)

at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)

at java.lang.Thread.run(Thread.java:745)

I reached out to the Solr User list, and they said that I need to either add 
"-ea" to the java commandline or remove the system property. The problem is 
that I didn't specify the property and I'm not sure how to tell the project not 
to test the asserts besides adding System.setProperty("tests.asserts", 
"false"); which didn't work. I have been told to re-direct my question to you. 
I have created a simple project on github 
(https://github.com/jencodingatwork/SolrTestClass) so that you can see the 
issue. Can anyone help me?

Thank you!

Jennifer
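For context on this failure mode: the test framework compares the JVM's actual assertion state against the -Dtests.asserts system property, and the mismatch error fires when they disagree. The JVM-side state can be probed with a standalone snippet like this (unrelated to the Lucene test framework's own code):

```java
// Standalone sketch: detect whether JVM assertions (-ea) are enabled.
// The test framework's check fails when this runtime state disagrees
// with -Dtests.asserts; passing -ea to the JVM, or setting
// -Dtests.asserts=false on the same command line, makes them agree.
public class AssertStatus {
    static boolean assertionsEnabled() {
        boolean enabled = false;
        // The assignment only executes when assertions are on.
        assert enabled = true;
        return enabled;
    }

    public static void main(String[] args) {
        System.out.println("assertions enabled: " + assertionsEnabled());
        System.out.println("tests.asserts = " + System.getProperty("tests.asserts"));
    }
}
```

The practical fix is usually to add `-ea` to the JVM arguments of the JUnit run configuration, since the base test classes default `tests.asserts` to true.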



[jira] [Commented] (LUCENE-7619) Add WordDelimiterGraphFilter

2017-01-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798952#comment-15798952
 ] 

Adrien Grand commented on LUCENE-7619:
--

Wow, I did not think WDF would ever be fixed to produce correct positions, this 
is very exciting!

> Add WordDelimiterGraphFilter
> 
>
> Key: LUCENE-7619
> URL: https://issues.apache.org/jira/browse/LUCENE-7619
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7619.patch, after.png, before.png
>
>
> Currently, {{WordDelimiterFilter}} doesn't try to set the {{posLen}} 
> attribute and so it creates graphs like this:
> !before.png!
> but with this patch (still a work in progress) it creates this graph instead:
> !after.png!
> This means (today) positional queries when using WDF at search time are 
> buggy, but since we fixed LUCENE-7603, with this change here you should be 
> able to use positional queries with WDGF.
> I'm also trying to produce holes properly (removes logic from the current WDF 
> that swallows a hole when whole token is just delimiters).
> Surprisingly, it's actually quite easy to tweak WDF to create a graph (unlike 
> e.g. {{SynonymGraphFilter}}) because it's already creating the necessary new 
> positions, and its output graph never has side paths, except for single 
> tokens that skip nodes because they have {{posLen > 1}}.  I.e. the only fix 
> to make, I think, is to set {{posLen}} properly.  And it really helps that it 
> does its own "new token buffering + sorting" already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9926) Allow passing arbitrary java system properties to zkcli

2017-01-04 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-9926:
--

 Summary: Allow passing arbitrary java system properties to zkcli
 Key: SOLR-9926
 URL: https://issues.apache.org/jira/browse/SOLR-9926
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.3
Reporter: Hrishikesh Gadre
Priority: Minor


Currently zkcli does not allow passing arbitrary java system properties (e.g. 
java.io.tmpdir). It only supports configuring ZK ACLs via 
SOLR_ZK_CREDS_AND_ACLS environment variable. While we can overload 
SOLR_ZK_CREDS_AND_ACLS with additional parameters, it seems unclean. This jira 
is to add another environment variable to pass these parameters. 
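The proposed mechanism could look roughly like this (the variable name ZKCLI_JVM_FLAGS is hypothetical; per the description, only SOLR_ZK_CREDS_AND_ACLS exists today):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Rough sketch of the proposal: the zkcli launcher reads an extra
// environment variable (ZKCLI_JVM_FLAGS is a hypothetical name) and
// splices its contents into the JVM command line, so callers can pass
// arbitrary -D system properties such as -Djava.io.tmpdir.
public class ZkCliLauncherSketch {
    static List<String> buildCommand(String jvmFlags) {
        List<String> cmd = new ArrayList<>(List.of("java"));
        if (jvmFlags != null && !jvmFlags.isBlank()) {
            cmd.addAll(Arrays.asList(jvmFlags.trim().split("\\s+")));
        }
        cmd.add("org.apache.solr.cloud.ZkCLI");
        return cmd;
    }

    public static void main(String[] args) {
        String flags = System.getenv("ZKCLI_JVM_FLAGS"); // e.g. "-Djava.io.tmpdir=/var/tmp"
        System.out.println(String.join(" ", buildCommand(flags)));
    }
}
```

With `ZKCLI_JVM_FLAGS="-Djava.io.tmpdir=/var/tmp"` exported, the extra property lands on the JVM command line without overloading the ACL variable.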



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798927#comment-15798927
 ] 

ASF subversion and git services commented on SOLR-8542:
---

Commit eb2a8ba2eec0841f03bbcf7807e602f7164a606e in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb2a8ba ]

SOLR-8542: README and solr/contrib/ltr/example changes

details:
* reduced README in favour of equivalent Solr Ref Guide content and (new) 
example/README
* solr/contrib/ltr/example improvements and fixes

also:
* stop supporting '*' in Managed(Feature|Model)Store.doDeleteChild


> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch, 
> SOLR-8542.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-04 Thread Tim Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798916#comment-15798916
 ] 

Tim Owen commented on SOLR-9918:


Fair points Koji - I have updated the patch with a bit more documentation. I've 
also added the example configuration in the Javadoc comment.

Probably the [Confluence 
page|https://cwiki.apache.org/confluence/display/solr/Update+Request+Processors#UpdateRequestProcessors-UpdateRequestProcessorFactories]
 is the best place to put that kind of guideline notes on which processors to 
choose for different situations.

In the particular case of the SignatureUpdateProcessor, that class will cause 
the new document to overwrite/replace any existing document, not skip it, which 
is why I didn't use it for our use-case.

> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so..
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool>
>     </processor>
>     <processor class="solr.LogUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}
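The insert/update semantics described above can be illustrated with a plain map standing in for the index (this mirrors the behavior contract only; the actual processor operates against the Solr index):

```java
import java.util.HashMap;
import java.util.Map;

// Behavior sketch of the two cases described above, with a map in place
// of the Solr index:
// - skip an insert when the id already exists ("insert if not exists")
// - skip an atomic update when the id is missing ("update where id = ..")
// Both cases return quietly instead of throwing or churning the index.
public class SkipExistingSketch {
    final Map<String, String> index = new HashMap<>();

    boolean insertIfAbsent(String id, String doc) {
        if (index.containsKey(id)) return false;   // quietly skip the duplicate
        index.put(id, doc);
        return true;
    }

    boolean updateIfPresent(String id, String doc) {
        if (!index.containsKey(id)) return false;  // quietly skip, no partial doc
        index.put(id, doc);
        return true;
    }

    public static void main(String[] args) {
        SkipExistingSketch s = new SkipExistingSketch();
        System.out.println(s.insertIfAbsent("1", "a"));  // inserted
        System.out.println(s.insertIfAbsent("1", "b"));  // duplicate skipped
        System.out.println(s.updateIfPresent("2", "c")); // missing doc skipped
    }
}
```

This also shows why SignatureUpdateProcessor is not a substitute: it replaces the existing document rather than skipping the new one.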



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9883) example solr config files can lead to invalid tlog replays when using add-unknown-fields-to-schema update chain

2017-01-04 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798902#comment-15798902
 ] 

Steve Rowe commented on SOLR-9883:
--

Forgot to mention: with the attached patch, I was no longer able to reproduce 
the data corruption with the above method.

> example solr config files can lead to invalid tlog replays when using 
> add-unknown-fields-to-schema update chain
> --
>
> Key: SOLR-9883
> URL: https://issues.apache.org/jira/browse/SOLR-9883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>Assignee: Steve Rowe
> Attachments: SOLR-9883.patch
>
>
> The current basic_configs and data_driven_schema_configs try to create 
> unknown fields. The problem is that the date processing 
> "ParseDateFieldUpdateProcessorFactory" is not invoked if the doc is replayed 
> from the tlog. Whether there are other places this is a problem I don't know; 
> this is a concrete example that fails in the field.
> So say I have a pattern for dates that omits the trailing 'Z', as:
> yyyy-MM-dd'T'HH:mm:ss.SSS
> This works fine when the doc is initially indexed. Now say the doc must be 
> replayed from the tlog. The doc errors out with "unknown date format" since 
> (apparently) this doesn't go through the same update chain, perhaps due to 
> the sample configs defining ParseDateFieldUpdateProcessorFactory after 
> DistributedUpdateProcessorFactory?
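The failure mode described above can be sketched as a rough Python analogue (this is not Solr's parsing code; the format strings and the sample value are invented to mirror the scenario: a custom pattern without the trailing 'Z' parses fine at index time, but the unnormalized string fails a strict parse on tlog replay):

```python
from datetime import datetime

CUSTOM = "%Y-%m-%dT%H:%M:%S.%f"   # rough analogue of yyyy-MM-dd'T'HH:mm:ss.SSS (no trailing 'Z')
STRICT = "%Y-%m-%dT%H:%M:%S.%fZ"  # analogue of the canonical form that expects the 'Z'

raw = "2017-01-04T12:30:45.123"   # hypothetical incoming value without the 'Z'

# Index time: the parse processor tries the custom pattern first, so this succeeds.
indexed = datetime.strptime(raw, CUSTOM)

# tlog replay: if the parse processor is skipped, the raw string reaches the
# strict parser and fails -- the "unknown date format" error described above.
try:
    datetime.strptime(raw, STRICT)
    replay_ok = True
except ValueError:
    replay_ok = False

print(indexed.year, replay_ok)  # 2017 False
```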






[jira] [Updated] (SOLR-9918) An UpdateRequestProcessor to skip duplicate inserts and ignore updates to missing docs

2017-01-04 Thread Tim Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-9918:
---
Attachment: SOLR-9918.patch

> An UpdateRequestProcessor to skip duplicate inserts and ignore updates to 
> missing docs
> --
>
> Key: SOLR-9918
> URL: https://issues.apache.org/jira/browse/SOLR-9918
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Tim Owen
> Attachments: SOLR-9918.patch, SOLR-9918.patch
>
>
> This is an UpdateRequestProcessor and Factory that we have been using in 
> production, to handle 2 common cases that were awkward to achieve using the 
> existing update pipeline and current processor classes:
> * When inserting document(s), if some already exist then quietly skip the new 
> document inserts - do not churn the index by replacing the existing documents 
> and do not throw a noisy exception that breaks the batch of inserts. By 
> analogy with SQL, {{insert if not exists}}. In our use-case, multiple 
> application instances can (rarely) process the same input so it's easier for 
> us to de-dupe these at Solr insert time than to funnel them into a global 
> ordered queue first.
> * When applying AtomicUpdate documents, if a document being updated does not 
> exist, quietly do nothing - do not create a new partially-populated document 
> and do not throw a noisy exception about missing required fields. By analogy 
> with SQL, {{update where id = ..}}. Our use-case relies on this because we 
> apply updates optimistically and have best-effort knowledge about what 
> documents will exist, so it's easiest to skip the updates (in the same way a 
> Database would).
> I would have kept this in our own package hierarchy but it relies on some 
> package-scoped methods, and seems like it could be useful to others if they 
> choose to configure it. Some bits of the code were borrowed from 
> {{DocBasedVersionConstraintsProcessorFactory}}.
> Attached patch has unit tests to confirm the behaviour.
> This class can be used by configuring solrconfig.xml like so..
> {noformat}
>   <updateRequestProcessorChain name="skipexisting">
>     <processor class="solr.LogUpdateProcessorFactory" />
>     <processor 
> class="org.apache.solr.update.processor.SkipExistingDocumentsProcessorFactory">
>       <bool name="skipInsertIfExists">true</bool>
>       <bool name="skipUpdateIfMissing">false</bool> <!-- Optional -->
>     </processor>
>     <processor class="solr.DistributedUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
> {noformat}
> and initParams defaults of
> {noformat}
>   <str name="update.chain">skipexisting</str>
> {noformat}
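The semantics of the two flags described above can be sketched as a toy Python model (the actual processor is Java; this function, its arguments, and the `index` dictionary are invented purely to illustrate the decision table):

```python
def process_update(index, doc_id, is_atomic_update,
                   skip_insert_if_exists=True, skip_update_if_missing=False):
    """Return True if the command should continue down the update chain,
    False if it should be quietly skipped."""
    exists = doc_id in index
    if is_atomic_update:
        # "update where id = ..": quietly drop updates to missing documents
        return not (not exists and skip_update_if_missing)
    # "insert if not exists": quietly drop inserts for existing documents
    return not (exists and skip_insert_if_exists)

index = {"doc1": {"title": "hello"}}
print(process_update(index, "doc1", is_atomic_update=False))  # False: duplicate insert skipped
print(process_update(index, "doc2", is_atomic_update=False))  # True: brand-new insert proceeds
print(process_update(index, "doc2", is_atomic_update=True,
                     skip_update_if_missing=True))            # False: update to missing doc skipped
```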






[jira] [Updated] (SOLR-9925) Child documents missing from replicas during parallel delete+add

2017-01-04 Thread Dan Sirotzke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dan Sirotzke updated SOLR-9925:
---
Attachment: generate.py

generate.py - Test script for pushing data in parallel

> Child documents missing from replicas during parallel delete+add
> 
>
> Key: SOLR-9925
> URL: https://issues.apache.org/jira/browse/SOLR-9925
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2, 6.3
> Environment: Java 1.8 (OpenJDK) on both CentOS 6.7 and Ubuntu 16.04.1
>Reporter: Dan Sirotzke
> Attachments: generate.py, run.sh
>
>
> When pushing documents to Solr in parallel, doing a delete-by-query and then 
> add for the same set of IDs within each thread results in some of the 
> replicas missing some of the child documents.  All the parent documents are 
> successfully replicated.
> This appears to trigger some sort of race condition, since:
> * Documents are never missing from the leader.
> * Documents _might_ be missing from the replicas.
> * When they are missing, which documents are affected, and how many, differ 
> for each replica and each run.
> * It happens more easily with large documents; my test script needs a huge 
> number of documents to trigger it a small number of times, whereas it happens 
> ~5% of the time on our dataset.
> * We're currently on Solr 5.5.2, but I've also managed to trigger it on 6.3.0
> * When not running anything in parallel, this doesn't occur.
> Quick aside, since this is surely the first thing that will jump out:  We 
> can't just do an update due to the uniqueKey/_root_ issue behind SOLR-5211.






[jira] [Updated] (SOLR-9925) Child documents missing from replicas during parallel delete+add

2017-01-04 Thread Dan Sirotzke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dan Sirotzke updated SOLR-9925:
---
Description: 
When pushing documents to Solr in parallel, doing a delete-by-query and then 
add for the same set of IDs within each thread results in some of the replicas 
missing some of the child documents.  All the parent documents are successfully 
replicated.

This appears to trigger some sort of race condition, since:

* Documents are never missing from the leader.
* Documents _might_ be missing from the replicas.
* When they are missing, which documents are affected, and how many, differ for 
each replica and each run.
* It happens more easily with large documents; my test script needs a huge 
number of documents to trigger it a small number of times, whereas it happens 
~5% of the time on our dataset.
* We're currently on Solr 5.5.2, but I've also managed to trigger it on 6.3.0
* When not running anything in parallel, this doesn't occur.

Quick aside, since this is surely the first thing that will jump out:  We can't 
just do an update due to the uniqueKey/\_root\_ issue behind SOLR-5211.

  was:
When pushing documents to Solr in parallel, doing a delete-by-query and then 
add for the same set of IDs within each thread results in some of the replicas 
missing some of the child documents.  All the parent documents are successfully 
replicated.

This appears to trigger some sort of race condition, since:

* Documents are never missing from the leader.
* Documents _might_ be missing from the replicas.
* When they are missing, which documents are affected, and how many, differ for 
each replica and each run.
* It happens more easily with large documents; my test script needs a huge 
number of documents to trigger it a small number of times, whereas it happens 
~5% of the time on our dataset.
* We're currently on Solr 5.5.2, but I've also managed to trigger it on 6.3.0
* When not running anything in parallel, this doesn't occur.

Quick aside, since this is surely the first thing that will jump out:  We can't 
just do an update due to the uniqueKey/_root_ issue behind SOLR-5211.


> Child documents missing from replicas during parallel delete+add
> 
>
> Key: SOLR-9925
> URL: https://issues.apache.org/jira/browse/SOLR-9925
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2, 6.3
> Environment: Java 1.8 (OpenJDK) on both CentOS 6.7 and Ubuntu 16.04.1
>Reporter: Dan Sirotzke
> Attachments: generate.py, run.sh
>
>
> When pushing documents to Solr in parallel, doing a delete-by-query and then 
> add for the same set of IDs within each thread results in some of the 
> replicas missing some of the child documents.  All the parent documents are 
> successfully replicated.
> This appears to trigger some sort of race condition, since:
> * Documents are never missing from the leader.
> * Documents _might_ be missing from the replicas.
> * When they are missing, which documents are affected, and how many, differ 
> for each replica and each run.
> * It happens more easily with large documents; my test script needs a huge 
> number of documents to trigger it a small number of times, whereas it happens 
> ~5% of the time on our dataset.
> * We're currently on Solr 5.5.2, but I've also managed to trigger it on 6.3.0
> * When not running anything in parallel, this doesn't occur.
> Quick aside, since this is surely the first thing that will jump out:  We 
> can't just do an update due to the uniqueKey/\_root\_ issue behind 
> SOLR-5211.






[jira] [Comment Edited] (SOLR-9925) Child documents missing from replicas during parallel delete+add

2017-01-04 Thread Dan Sirotzke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798866#comment-15798866
 ] 

Dan Sirotzke edited comment on SOLR-9925 at 1/4/17 5:47 PM:


run.sh - Test script for setting up solr


was (Author: dsirotzke):
Test script for setting up solr

> Child documents missing from replicas during parallel delete+add
> 
>
> Key: SOLR-9925
> URL: https://issues.apache.org/jira/browse/SOLR-9925
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2, 6.3
> Environment: Java 1.8 (OpenJDK) on both CentOS 6.7 and Ubuntu 16.04.1
>Reporter: Dan Sirotzke
> Attachments: run.sh
>
>
> When pushing documents to Solr in parallel, doing a delete-by-query and then 
> add for the same set of IDs within each thread results in some of the 
> replicas missing some of the child documents.  All the parent documents are 
> successfully replicated.
> This appears to trigger some sort of race condition, since:
> * Documents are never missing from the leader.
> * Documents _might_ be missing from the replicas.
> * When they are missing, which documents are affected, and how many, differ 
> for each replica and each run.
> * It happens more easily with large documents; my test script needs a huge 
> number of documents to trigger it a small number of times, whereas it happens 
> ~5% of the time on our dataset.
> * We're currently on Solr 5.5.2, but I've also managed to trigger it on 6.3.0
> * When not running anything in parallel, this doesn't occur.
> Quick aside, since this is surely the first thing that will jump out:  We 
> can't just do an update due to the uniqueKey/_root_ issue behind SOLR-5211.






[jira] [Updated] (SOLR-9925) Child documents missing from replicas during parallel delete+add

2017-01-04 Thread Dan Sirotzke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dan Sirotzke updated SOLR-9925:
---
Attachment: run.sh

Test script for setting up solr

> Child documents missing from replicas during parallel delete+add
> 
>
> Key: SOLR-9925
> URL: https://issues.apache.org/jira/browse/SOLR-9925
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2, 6.3
> Environment: Java 1.8 (OpenJDK) on both CentOS 6.7 and Ubuntu 16.04.1
>Reporter: Dan Sirotzke
> Attachments: run.sh
>
>
> When pushing documents to Solr in parallel, doing a delete-by-query and then 
> add for the same set of IDs within each thread results in some of the 
> replicas missing some of the child documents.  All the parent documents are 
> successfully replicated.
> This appears to trigger some sort of race condition, since:
> * Documents are never missing from the leader.
> * Documents _might_ be missing from the replicas.
> * When they are missing, which documents are affected, and how many, differ 
> for each replica and each run.
> * It happens more easily with large documents; my test script needs a huge 
> number of documents to trigger it a small number of times, whereas it happens 
> ~5% of the time on our dataset.
> * We're currently on Solr 5.5.2, but I've also managed to trigger it on 6.3.0
> * When not running anything in parallel, this doesn't occur.
> Quick aside, since this is surely the first thing that will jump out:  We 
> can't just do an update due to the uniqueKey/_root_ issue behind SOLR-5211.






[jira] [Created] (SOLR-9925) Child documents missing from replicas during parallel delete+add

2017-01-04 Thread Dan Sirotzke (JIRA)
Dan Sirotzke created SOLR-9925:
--

 Summary: Child documents missing from replicas during parallel 
delete+add
 Key: SOLR-9925
 URL: https://issues.apache.org/jira/browse/SOLR-9925
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.3, 5.5.2
 Environment: Java 1.8 (OpenJDK) on both CentOS 6.7 and Ubuntu 16.04.1
Reporter: Dan Sirotzke


When pushing documents to Solr in parallel, doing a delete-by-query and then 
add for the same set of IDs within each thread results in some of the replicas 
missing some of the child documents.  All the parent documents are successfully 
replicated.

This appears to trigger some sort of race condition, since:

* Documents are never missing from the leader.
* Documents _might_ be missing from the replicas.
* When they are missing, which documents are affected, and how many, differ for 
each replica and each run.
* It happens more easily with large documents; my test script needs a huge 
number of documents to trigger it a small number of times, whereas it happens 
~5% of the time on our dataset.
* We're currently on Solr 5.5.2, but I've also managed to trigger it on 6.3.0
* When not running anything in parallel, this doesn't occur.

Quick aside, since this is surely the first thing that will jump out:  We can't 
just do an update due to the uniqueKey/_root_ issue behind SOLR-5211.
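The non-atomic window in the delete-by-query-then-add pattern above can be illustrated with a toy timeline (this only models why the sequence is racy when observed or replayed part-way through; it is not Solr's replication code, and the ids/fields are invented):

```python
# Each thread's "refresh" is three separate operations; anything that sees
# the index between steps 1 and 2 observes a parent without its children.
index = {}
timeline = [
    ("delete_by_query", None),                  # delete old parent + children
    ("add", {"id": "p1", "kind": "parent"}),    # re-add the parent
    ("add", {"id": "p1/c1", "kind": "child"}),  # re-add a child
]
observed = []
for op, doc in timeline:
    if op == "delete_by_query":
        index.clear()
    else:
        index[doc["id"]] = doc
    observed.append((
        sum(1 for d in index.values() if d["kind"] == "parent"),
        sum(1 for d in index.values() if d["kind"] == "child"),
    ))

print(observed)  # [(0, 0), (1, 0), (1, 1)]: the middle state has a parent but no child
```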






[jira] [Commented] (SOLR-9820) PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields private

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798848#comment-15798848
 ] 

ASF subversion and git services commented on SOLR-9820:
---

Commit 37859a35df3b9324017afdb2ed0870a1c6c6f30e in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=37859a3 ]

SOLR-9820: change PerSegmentSingleValuedFaceting.(contains|ignoreCase) from 
default to private visibility. (Jonny Marks via Christine Poerschke)


> PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields 
> private
> 
>
> Key: SOLR-9820
> URL: https://issues.apache.org/jira/browse/SOLR-9820
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jonny Marks
>Priority: Minor
> Attachments: SOLR-9820.patch
>
>
> This patch marks the "contains" and "ignoreCase" fields in 
> PerSegmentSingleValuedFaceting private (they are currently public). 
> A separate patch will follow where I propose to replace them with a 
> customizable variant.






[jira] [Commented] (LUCENE-7618) Hypothetical perf improvements in DocValuesRangeQuery: reducing comparisons for some queries/segments

2017-01-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798829#comment-15798829
 ] 

Hoss Man commented on LUCENE-7618:
--

bq. I could be wrong, but it looks to me that the cost that we are trying to cut 
here is low compared to the cost of running a query, like checking live docs, 
running first the approximation and then the two-phase confirmation, calling 
the collector, etc. Adding more implementations might also make Hotspot's job 
more complicated.

I would guess you are correct.

bq. I am not entirely sure what was your motivation for reviewing doc values 
ranges, ...

I didn't have a particularly strong motivation, it was just some idle 
experimentation during my vacation based on the conversation i mentioned ...
* i was already looking at the DocValuesRangeQueries to see if my hunch was 
correct
* saving one comparison per doc for common queries _seemed_ like a nice, easy win
* Solr still uses DocValuesRangeQueries, and even if/when solr starts using 
points, i've seen enough people who value index size over speed *AND* need to 
do range queries on sort fields that i can't imagine completely eliminating 
usage of it, because some users will still want non-stored, non-indexed, 
non-points docvalues fields (especially once updatable docvalues support 
finally lands in solr; then even if you don't mind bigger indexes, you'll 
want/need DocValuesRangeQueries to be able to query against your updated 
docvalue fields)
* I didn't/don't have enough familiarity with the points (query) code to guess 
if/where/what might be an equivalent optimization - so i wanted to start with 
the code i understood first.

bq. I have an open semi-related issue ... LUCENE-7055.

thanks for that pointer ... my head already hurts from reading the first few 
comments :)
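The "one comparison per doc" idea under discussion can be sketched in Python (a hypothetical helper, not Lucene's DocValuesRangeQuery code: for an open-ended range, build a specialized predicate up front so each per-document check does one comparison instead of two):

```python
def make_range_predicate(lo, hi):
    """Specialize the per-value check based on which bounds are open."""
    if lo is None and hi is None:
        return lambda v: True          # fully open: zero comparisons per value
    if lo is None:
        return lambda v: v <= hi       # open lower bound: one comparison
    if hi is None:
        return lambda v: lo <= v       # open upper bound: one comparison
    return lambda v: lo <= v <= hi     # closed range: two comparisons

matches_le_100 = make_range_predicate(None, 100)
print([v for v in (50, 100, 150) if matches_le_100(v)])  # [50, 100]
```

The same trick applies to the per-segment SortedSet case mentioned in the description: if a segment's minOrd/maxOrd already lie inside the requested range, the segment-local predicate can drop a bound entirely.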

> Hypothetical perf improvements in DocValuesRangeQuery: reducing comparisons 
> for some queries/segments
> -
>
> Key: LUCENE-7618
> URL: https://issues.apache.org/jira/browse/LUCENE-7618
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Hoss Man
> Attachments: LUCENE-7618.patch
>
>
> In reviewing the DocValuesRangeQuery code, it occurred to me that there 
> _might_ be some potential performance optimizations possible in a few cases 
> relating to queries that involve explicitly specified open ranges (ie: min or 
> max are null) or in the case of SortedSet: range queries that are 
> *effectively* open ended on particular segments, because the min/max are 
> below/above the minOrd/maxOrd for the segment.
> Since these seemed like semi-common situations (open ended range queries are 
> fairly common in my experience, i'm not sure about the secondary SortedSet 
> "ord" case, but it seemed potentially promising particularly for fields like 
> incrementing ids, or timestamps, where values are added sequentially and 
> likely to be clustered together) I did a bit of experimenting and wanted to 
> post my findings in jira -- patch & details to follow in comments.






[jira] [Updated] (SOLR-3990) index size unavailable in gui/mbeans unless replication handler configured

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-3990:
--
Fix Version/s: (was: 6.0)
   (was: 4.10)
   6.4

> index size unavailable in gui/mbeans unless replication handler configured
> --
>
> Key: SOLR-3990
> URL: https://issues.apache.org/jira/browse/SOLR-3990
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-3990.patch, SOLR-3990.patch, SOLR-3990.patch
>
>
> Unless you configure the replication handler, the on-disk size of each core's 
> index seems to be unavailable in the gui or from the mbeans handler.  If you 
> are not doing replication, you should still be able to get the size of each 
> index without configuring things that won't be used.
> Also, I would like to get the size of the index in a consistent unit of 
> measurement, probably MB.  I understand the desire to give people a human 
> readable unit next to a number that's not enormous, but it's difficult to do 
> programmatic comparisons between values such as 787.33 MB and 23.56 GB.  That 
> may mean that the number needs to be available twice, one format to be shown 
> in the admin GUI and both formats available from the mbeans handler, for 
> scripting.
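The dual-format idea suggested above can be sketched as follows (the function and key names here are invented for illustration; they are not Solr's mbeans output):

```python
# Expose the on-disk index size twice: a fixed unit (MB) for scripting,
# plus a human-readable string for the admin GUI.
def index_size_views(size_bytes):
    size_in_mb = round(size_bytes / (1024 * 1024), 2)
    value, unit = float(size_bytes), "bytes"
    for candidate in ("KB", "MB", "GB", "TB"):
        if value < 1024:
            break
        value, unit = value / 1024, candidate
    return {"sizeInMB": size_in_mb, "size": f"{value:.2f} {unit}"}

print(index_size_views(825573376))  # {'sizeInMB': 787.33, 'size': '787.33 MB'}
```

With both fields present, a script can compare `sizeInMB` numerically instead of parsing "787.33 MB" against "23.56 GB".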






[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2017-01-04 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798805#comment-15798805
 ] 

Erick Erickson commented on SOLR-6246:
--

Thanks Andreas!

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246-test.patch, SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?






[jira] [Updated] (SOLR-3990) index size unavailable in gui/mbeans unless replication handler configured

2017-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-3990:
--
Attachment: SOLR-3990.patch

Only changes in this are the hashes for the diff (applied to current master). 
No code changes.

> index size unavailable in gui/mbeans unless replication handler configured
> --
>
> Key: SOLR-3990
> URL: https://issues.apache.org/jira/browse/SOLR-3990
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 4.10, 6.0
>
> Attachments: SOLR-3990.patch, SOLR-3990.patch, SOLR-3990.patch
>
>
> Unless you configure the replication handler, the on-disk size of each core's 
> index seems to be unavailable in the gui or from the mbeans handler.  If you 
> are not doing replication, you should still be able to get the size of each 
> index without configuring things that won't be used.
> Also, I would like to get the size of the index in a consistent unit of 
> measurement, probably MB.  I understand the desire to give people a human 
> readable unit next to a number that's not enormous, but it's difficult to do 
> programmatic comparisons between values such as 787.33 MB and 23.56 GB.  That 
> may mean that the number needs to be available twice, one format to be shown 
> in the admin GUI and both formats available from the mbeans handler, for 
> scripting.






[jira] [Commented] (SOLR-3990) index size unavailable in gui/mbeans unless replication handler configured

2017-01-04 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798792#comment-15798792
 ] 

Dennis Gove commented on SOLR-3990:
---

I agree, I think it still makes sense to put this in a more central location. 
I'd expect to be able to ask the core what the index size is.

> index size unavailable in gui/mbeans unless replication handler configured
> --
>
> Key: SOLR-3990
> URL: https://issues.apache.org/jira/browse/SOLR-3990
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Shawn Heisey
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 4.10, 6.0
>
> Attachments: SOLR-3990.patch, SOLR-3990.patch
>
>
> Unless you configure the replication handler, the on-disk size of each core's 
> index seems to be unavailable in the gui or from the mbeans handler.  If you 
> are not doing replication, you should still be able to get the size of each 
> index without configuring things that won't be used.
> Also, I would like to get the size of the index in a consistent unit of 
> measurement, probably MB.  I understand the desire to give people a human 
> readable unit next to a number that's not enormous, but it's difficult to do 
> programmatic comparisons between values such as 787.33 MB and 23.56 GB.  That 
> may mean that the number needs to be available twice, one format to be shown 
> in the admin GUI and both formats available from the mbeans handler, for 
> scripting.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates

2017-01-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798787#comment-15798787
 ] 

Hoss Man commented on SOLR-8030:


bq. The point is that for main operations, the Document Update Processors do 
not have access to the Solr request.

I'm not sure what you mean by that -- during normal update processing the 
SolrQueryRequest is available to UpdateProcessorFactories as part of the 
{{getInstance(...)}} call as well as to the UpdateProcessors themselves via 
{{UpdateCommand.getReq()}} and some factories/processors (like 
StatelessScriptUpdateProcessorFactory in my example) access the request to get 
the request params and make conditional choices based on those values -- but 
unless i'm completely missing something, things like the request params are not 
written to the tlog when the UpdateCommands are written to the tlog.

hence: the scope of the problem goes beyond just keeping a record of the update 
chain needed for each command in the tlog, it's a matter of tracking _ALL_ the 
relevant variables (chain, req params, etc...)

bq. But I do not agree with [~dsmiley] that this should not be fixed because of 
very specific use cases.

I suspect you misunderstood david -- either that or i did -- i don't think 
he's of the opinion that this shouldn't be fixed, my understanding of his 
comments is that: 
* he agrees the entire situation is bad, and should be fixed in some way
* he would rather look at the big picture and "rip the bandaid off" with 
whatever sort of invasive fix is needed rather than add more band-aids on top 
of the code we already have
* until such time as we have a "good" fix, the workaround is to encourage 
people to only use chains/updates that _will_ work in spite of this bug.

i would agree with those three sentiments, even if they aren't what he actually 
meant :)

> Transaction log does not store the update chain (or req params?) used for 
> updates
> -
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain, or any other details from 
> the original update request such as the request params, used during updates.
> Therefore tLog uses the default update chain, and a synthetic request, during 
> log replay.
> If we implement custom update logic with multiple distinct update chains that 
> use custom processors after DistributedUpdateProcessor, or if the default 
> chain uses processors whose behavior depends on other request params, then 
> log replay may be incorrect.
> Potentially problematic scenarios (need test cases):
> * DBQ where the main query string uses local param variables that refer to 
> other request params
> * custom Update chain set as {{default="true"}} using something like 
> StatelessScriptUpdateProcessorFactory after DUP where the script depends on 
> request params.
> * multiple named update chains with diff processors configured after DUP and 
> specific requests sent to diff chains -- ex: ParseDateProcessor w/ custom 
> formats configured after DUP in some special chains, but not in the default 
> chain






[jira] [Updated] (SOLR-9883) example solr config files can lead to invalid tlog replays when using add-unknown-fields-to-schema update chain

2017-01-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9883:
-
Attachment: SOLR-9883.patch

Attaching a patch that switches example configs' add-unknown-fields-to-schema 
update chains so that the DUP is after the AddSchemaFields URPF.  In my manual 
testing (see below), this prevents the data corruption: the buffered tlog entry 
includes the date normalization. I also made {{AddSchemaFields}} URPF implement 
{{UpdateRequestProcessorFactory.RunAlways}}, so that schema modifications will 
continue to be applied on all replicas (the original rationale for moving the 
DUP position on SOLR-6137). 
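A minimal sketch of the reordered chain the patch describes (processor classes are real Solr factories, but the chain name, format list, and layout here are assumed from stock configs, not copied from the patch):

```xml
<!-- Sketch of the patched ordering: field-mutating processors (date parsing,
     add-unknown-fields) run BEFORE DistributedUpdateProcessor, so the values
     written to the tlog are already normalized and replay stays consistent. -->
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory">
    <arr name="format">
      <str>yyyy-MM-dd'T'HH:mm:ss.SSSZ</str>
      <str>yyyy-MM-dd'T'HH:mm:ss.SSS</str>
    </arr>
  </processor>
  <processor class="solr.AddSchemaFieldsUpdateProcessorFactory"/>
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```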

Following an offline reproduction suggestion from [~hossman], I was able to 
manually reproduce the data corruption as follows:

# Added an artificial 1-minute delay in {{PeerSync}}
# {{bin/solr start -e cloud  # nodes=2, coll=gettingstarted, shards=1, rf=2, 
configset=data_driven_schema_configs}}
# {{curl -X POST -H 'Content-type: application/xml' 
http://localhost:8983/solr/gettingstarted/update -d '2015-06-09'}}
# {{kill -9 $(cat bin/solr-7574.pid)}}
# {{curl -X POST -H 'Content-type: application/xml' 
http://localhost:8983/solr/gettingstarted/update -d '2015-06-10'}}
# {{bin/solr start -cloud -p 7574 -s "example/cloud/node2/solr" -z 
localhost:9983}}
# {{curl -X POST -H 'Content-type: application/xml' 
http://localhost:8983/solr/gettingstarted/update -d '2015-06-11'}}

I had to add step #3 to create a transaction log entry on the 7574 replica 
prior to shutdown; otherwise on restart it would refuse to perform peer sync, 
because it didn't know where to start (due to no recent versions in the tlog) 
and instead initiated full recovery.

I'm working on an automated data corruption test.

I want to get this change into the 6.4 release.

> example solr config files can lead to invalid tlog replays when using 
> add-unknown-fields-to-schema update chain
> --
>
> Key: SOLR-9883
> URL: https://issues.apache.org/jira/browse/SOLR-9883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>Assignee: Steve Rowe
> Attachments: SOLR-9883.patch
>
>
> The current basic_configs and data_driven_schema_configs try to create 
> unknown fields. The problem is that the date processing in 
> "ParseDateFieldUpdateProcessorFactory" is not invoked if the doc is replayed 
> from the tlog. Whether there are other places this is a problem I don't know; 
> this is a concrete example that fails in the field.
> So say I have a pattern for dates that omits the trailing 'Z', such as:
> yyyy-MM-dd'T'HH:mm:ss.SSS
> This works fine when the doc is initially indexed. Now say the doc must be 
> replayed from the tlog. The doc errors out with "unknown date format" since 
> (apparently) this doesn't go through the same update chain, perhaps because 
> the sample configs define ParseDateFieldUpdateProcessorFactory after 
> DistributedUpdateProcessorFactory.
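The failure mode can be sketched with only the JDK's java.time API standing in for Solr's date handling (ParseDateFieldUpdateProcessorFactory is not used here; the pattern and sample value are illustrative). The custom pattern without a zone designator parses the raw string fine, but strict ISO-8601 instant parsing, which is what the field type effectively requires on replay, rejects it:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class DateReplaySketch {
    public static void main(String[] args) {
        // Custom pattern without the trailing 'Z', as in the report.
        DateTimeFormatter custom = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS");
        String raw = "2015-06-09T00:00:00.000";

        // Index-time pass: the parse-date processor applies the custom
        // pattern, so the raw value is accepted.
        LocalDateTime parsed = custom.parse(raw, LocalDateTime::from);
        System.out.println("custom pattern parsed: " + parsed);

        // Replay pass: the processor is skipped, and strict ISO-8601 instant
        // parsing (which expects the trailing 'Z') rejects the raw string.
        try {
            Instant.parse(raw);
            System.out.println("ISO parse succeeded");
        } catch (DateTimeParseException e) {
            System.out.println("ISO parse failed: unknown date format");
        }
    }
}
```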






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 247 - Failure

2017-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/247/

16 tests failed.
FAILED:  org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([30A477A2590FE042:D3A02C27F2142405]:0)
at org.apache.lucene.util.fst.BytesStore.skipBytes(BytesStore.java:307)
at org.apache.lucene.util.fst.FST.addNode(FST.java:792)
at org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:126)
at org.apache.lucene.util.fst.Builder.compileNode(Builder.java:214)
at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:310)
at org.apache.lucene.util.fst.Builder.add(Builder.java:414)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.finishTerm(MemoryPostingsFormat.java:254)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.access$500(MemoryPostingsFormat.java:109)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$MemoryFieldsConsumer.write(MemoryPostingsFormat.java:396)
at 
org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:164)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:216)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:101)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4343)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3920)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2071)
at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4984)
at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5022)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5013)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1578)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1320)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
at 
org.apache.lucene.index.TestDuelingCodecs.createRandomIndex(TestDuelingCodecs.java:139)
at 
org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals(TestDuelingCodecsAtNight.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)


FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_idx

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([4CA1403BEAFFF7C0:D9DD35F336235A21]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:188)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.waitForRecoveriesToFinish(TestStressCloudBlindAtomicUpdates.java:459)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:304)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_idx(TestStressCloudBlindAtomicUpdates.java:224)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-9820) PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields private

2017-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798748#comment-15798748
 ] 

ASF subversion and git services commented on SOLR-9820:
---

Commit 2f62facc0b93095d0be616a5cbda7d0f9ae20747 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2f62fac ]

SOLR-9820: change PerSegmentSingleValuedFaceting.(contains|ignoreCase) from 
default to private visibility. (Jonny Marks via Christine Poerschke)


> PerSegmentSingleValuedFaceting - mark "contains" and "ignoreCase" fields 
> private
> 
>
> Key: SOLR-9820
> URL: https://issues.apache.org/jira/browse/SOLR-9820
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jonny Marks
>Priority: Minor
> Attachments: SOLR-9820.patch
>
>
> This patch marks the "contains" and "ignoreCase" fields in 
> PerSegmentSingleValuedFaceting private (they are currently public). 
> A separate patch will follow where I propose to replace them with a 
> customizable variant.






[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2017-01-04 Thread James Doepp (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798673#comment-15798673
 ] 

James Doepp commented on SOLR-6246:
---

My "workaround" was to create a separate core with only the suggesters.

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246-test.patch, SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?
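The lock conflict can be illustrated with a JDK-only analogy (java.nio FileLock stands in for Lucene's single-writer write.lock; no Lucene classes are used, and the file name is invented). One long-lived holder blocks a second open attempt on the same lock, just as the reloaded suggester's new writer cannot lock the directory the old writer still holds:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WriterLockSketch {
    public static void main(String[] args) throws IOException {
        // Stand-in for the index directory's exclusive writer lock.
        Path lockFile = Files.createTempFile("suggester-index", ".lock");
        try (FileChannel first = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
            FileLock held = first.lock();  // the suggester's long-lived writer holds this

            // Core reload builds a second AnalyzingInfixSuggester, whose new
            // writer tries to lock the same directory -- and is refused.
            try (FileChannel second = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
                second.lock();
                System.out.println("second writer acquired the lock (unexpected)");
            } catch (OverlappingFileLockException e) {
                System.out.println("second writer blocked: lock already held");
            }
            held.release();
        }
        Files.deleteIfExists(lockFile);
    }
}
```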






[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2017-01-04 Thread Andreas Ravn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798664#comment-15798664
 ] 

Andreas Ravn commented on SOLR-6246:


Hi Erick,

I did. Please see my reply to Christoph Froeschels comment from Dec 19, 2016.

Cheers, Andreas







[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2017-01-04 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15798653#comment-15798653
 ] 

Erick Erickson commented on SOLR-6246:
--

It'd be great if you could give a recent Solr a spin and see if the problem 
persists. 

There are a bunch of JIRAs floating around with hints that this is fixed, 
but nobody has volunteered to verify that this JIRA is fixed by them.
See: 
LUCENE-7564
SOLR-6100







