[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+107) - Build # 10 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/10/
Java: 32bit/jdk-9-ea+107 -server -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImport

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:41354/solr/collection1

Stack Trace:
java.lang.AssertionError: IOException occured when talking to server at: 
http://127.0.0.1:41354/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([55C34E2AAE168FF3:D06F33B1161911D3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImport(TestSolrEntityProcessorEndToEnd.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:804)


FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImportFieldsParam

Error Message:
IOException occured when talking to server at: 

[jira] [Updated] (SOLR-5750) Backup/Restore API for SolrCloud

2016-03-03 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-5750:

Attachment: SOLR-5750.patch

- Added SolrJ support for the Backup and Restore Collection Admin actions
- 2 API calls - Backup and Restore. Both support async execution, and using 
async with polling is the recommended way to check whether the task completed. 
There are no separate BackupStatus and RestoreStatus commands like there were 
in previous patches.

*Backup*:
Required params - name and collection.
"location" can also be set via the cluster prop API; if the query parameter is 
not supplied, we fall back to the value set in the cluster prop API.

What it backs up into the location directory:
 - Index data from the shard leaders
 - collection_state_backup.json (the backed-up collection state)
 - backup.properties (meta-data information)
 - configSet
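
For illustration, a minimal SolrJ sketch of issuing the Backup call through the 
Collections API. The action and parameter names (BACKUP, name, collection, 
location, async) follow the description above and are assumptions about this 
patch, not the final API:

{code}
import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class BackupCallSketch {
  // Hedged sketch: trigger a collection backup through the Collections API.
  public static void backup(SolrClient client) throws SolrServerException, IOException {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "BACKUP");
    params.set("name", "mybackup");          // required: name of the backup
    params.set("collection", "collection1"); // required: collection to back up
    params.set("location", "/backups");      // optional: falls back to the cluster prop value
    params.set("async", "backup-req-1");     // recommended: poll this request id
    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");
    client.request(request);
  }
}
{code}

The async request id can then be polled with the REQUESTSTATUS action to see 
whether the backup completed.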

*Restore*:
Required params - name and collection.
"location" can also be set via the cluster prop API; if the query parameter is 
not supplied, we fall back to the value set in the cluster prop API.

How it works:
 - The restore collection must not already exist; Restore will create it for 
you. You can point a collection alias at it once it has been restored. We 
purposely don't allow restoring into an existing collection, since rolling back 
in a distributed setup would be tricky. Maybe in the future, if we are 
confident, we can allow this.
 - Creates a core-less collection with the config set from the backup (it 
appends a restore.configSetName to it to avoid collisions)
 - Marks the shards in the "construction" state so that if someone is sending 
it documents they get buffered in the tlog. TODO don't do
 - Creates one replica per shard and restores the data into it
 - Adds the replicas necessary to meet the same replication factor
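
A matching sketch of the Restore call, under the same assumptions and with the 
same imports as the Backup sketch above:

{code}
  // Hedged sketch: restore a backup into a brand-new collection.
  public static void restore(SolrClient client) throws SolrServerException, IOException {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "RESTORE");
    params.set("name", "mybackup");                 // required: backup to restore from
    params.set("collection", "restoredCollection"); // required: must not already exist
    params.set("location", "/backups");             // optional: falls back to the cluster prop value
    params.set("async", "restore-req-1");           // recommended: poll this request id
    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");
    client.request(request);
  }
{code}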

bq. Another question is I wonder if any of these loops should be done in 
parallel or if they are issuing asynchronous requests so it isn't necessary. It 
would help to document the pertinent loops with this information, and possibly 
do some in parallel if they should be done so.

Yes, that makes sense. We need to add this.

bq. I looked at the patch. On the restore side I noticed a loop of slices and 
then a loop of replicas starting with this comment: "//Copy data from backed up 
index to each replica". Shouldn't there be just one replica per shard to 
restore, and then later the replicationFactor will expand to the desired level?

Yeah true. This patch has those changes.

It's still a work in progress. The restore needs hardening.

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7056) Spatial3d/Geo3d should have zero runtime dependencies

2016-03-03 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179430#comment-15179430
 ] 

David Smiley commented on LUCENE-7056:
--

bq. I'm also not a fan of the name *3d. It causes confusion that this package 
handles altitude.

I agree as well, but I'm at a loss to come up with a better name.  Any 
suggestions?  sphere-geom?  Although it handles ellipsoids too.  At least 
"Geo3d" was the contribution name and is the current package name, and I've 
presented it under that name at a conference and a meetup.

bq. Since its dependency free I still think it makes most sense to refactor it 
to the spatial module under a geometry package and just have the spatial and 
spatial-extras modules. Having a third just for a geometry model doesn't make 
sense to me.

I realize this may be unpopular, but in my view this module is _itself_ a 
dependency, albeit a special one that we control and can modify as we please 
for our convenience.

> Spatial3d/Geo3d should have zero runtime dependencies
> -
>
> Key: LUCENE-7056
> URL: https://issues.apache.org/jira/browse/LUCENE-7056
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
>
> This is a proposal for the "spatial3d" module to be purely about the 
> shape/geometry implementations it has.  In Lucene 5 that's actually all it 
> has.  In Lucene 6 at the moment its ~76 files have 2 classes that I think 
> should go elsewhere: Geo3DPoint and PointInGeo3DShapeQuery.  Specifically 
> lucene-spatial-extras (which doesn't quite exist yet so lucene-spatial) would 
> be a suitable place due to the dependency.   _Eventually_ I see this module 
> migrating elsewhere be it on its own or a part of something else more 
> spatial-ish.  Even if that never comes to pass, non-Lucene users who want to 
> use this module for it's geometry annoyingly have to exclude the Lucene 
> dependencies that are there because this module also contains these two 
> classes.
> In a comment I'll suggest some specifics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+107) - Build # 9 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/9/
Java: 64bit/jdk-9-ea+107 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.lucene.index.TestDuelingCodecs.testCrazyReaderEquals

Error Message:
source=3 is out of bounds (maxState is 2)

Stack Trace:
java.lang.IllegalArgumentException: source=3 is out of bounds (maxState is 2)
at 
__randomizedtesting.SeedInfo.seed([6260A72A7B9657E5:D65A7D1677DF8AE6]:0)
at 
org.apache.lucene.util.automaton.Automaton.addTransition(Automaton.java:165)
at 
org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:245)
at 
org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:537)
at org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
at org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:612)
at org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:614)
at 
org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:521)
at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:495)
at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
at 
org.apache.lucene.util.LuceneTestCase.assertTermsEquals(LuceneTestCase.java:2025)
at 
org.apache.lucene.util.LuceneTestCase.assertFieldsEquals(LuceneTestCase.java:1987)
at 
org.apache.lucene.util.LuceneTestCase.assertReaderEquals(LuceneTestCase.java:1947)
at 
org.apache.lucene.index.TestDuelingCodecs.testCrazyReaderEquals(TestDuelingCodecs.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+107) - Build # 16092 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16092/
Java: 32bit/jdk-9-ea+107 -client -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=458, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:804)2) Thread[id=457, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:804)3) Thread[id=459, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:804)4) Thread[id=456, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=460, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:804)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=458, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
at 

[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-03-03 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179311#comment-15179311
 ] 

Robert Muir commented on LUCENE-6993:
-

If that test really took 50 minutes, there may be some issue there...

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-03-03 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated LUCENE-6993:
--
Attachment: LUCENE-6993.patch

bq. Well this test is already marked @Slow and just took 41.2s on my machine. 
Were you seeing stuff like that? As far as I know from the original issue, 
there were tests for this bug that would basically never finish at all.

I left it to run and came back later and saw that it took 50 minutes, but it 
passed. 40 seconds on your machine sounds great; I won't worry about it, thanks.

bq. Mike, can you please exclude generated files from your patch? The patches 
here are way big, and reviewers/committers will want to regenerate anyway.

Sure, this makes sense.

Steps to generate everything:
{code}
#!/usr/bin/env bash

pushd lucene/analysis/common
ANT_OPTS="-Xmx5g" ant gen-tlds jflex
# For some reason this needs to be run separately from the jflex command.
# I could never figure out why.
ant jflex-legacy
pushd src/test/org/apache/lucene/analysis/standard
rm WordBreakTestUnicode_6_3_0.java
perl generateJavaUnicodeWordBreakTest.pl -v 8.0.0
popd
popd
{code}

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-03-03 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179287#comment-15179287
 ] 

Jason Gerlowski commented on SOLR-8097:
---

Hmm, still seeing some test failures with this patch.  Haven't narrowed down 
the problem yet, but I'm working on it.

Not ready for review yet (other than general feedback on how to expose the new 
builders to the existing tests).

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> This is problematic when introducing additional parameters, since each one 
> requires yet another constructor. Instead it would be helpful to provide a 
> SolrClient builder which can provide default values and support overriding 
> specific parameters.
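
For illustration, a minimal sketch of the kind of builder being proposed. The 
class and method names below (withZkHost, withZkHosts, withHttpClient, build) 
are assumptions for this sketch, not the API from the attached patches:

{code}
import java.util.Collection;
import java.util.Collections;

import org.apache.http.client.HttpClient;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class CloudSolrClientBuilderSketch {
  private Collection<String> zkHosts = Collections.emptyList();
  private String chroot = null;         // default: no chroot
  private HttpClient httpClient = null; // default: let the client create its own

  // Each optional parameter becomes a builder method instead of another constructor.
  public CloudSolrClientBuilderSketch withZkHost(String zkHost) {
    this.zkHosts = Collections.singletonList(zkHost);
    return this;
  }

  public CloudSolrClientBuilderSketch withZkHosts(Collection<String> zkHosts, String chroot) {
    this.zkHosts = zkHosts;
    this.chroot = chroot;
    return this;
  }

  public CloudSolrClientBuilderSketch withHttpClient(HttpClient httpClient) {
    this.httpClient = httpClient;
    return this;
  }

  public CloudSolrClient build() {
    // Delegates to one of the existing constructors listed above.
    return new CloudSolrClient(zkHosts, chroot, httpClient);
  }
}
{code}

Usage would then look like 
{{new CloudSolrClientBuilderSketch().withZkHost("localhost:2181").build()}}, and 
adding a new option means adding a builder method rather than yet another 
constructor.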



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+107) - Build # 8 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/8/
Java: 64bit/jdk-9-ea+107 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.lucene.util.automaton.TestAutomaton.testMakeBinaryIntervalFiniteCasesRandom

Error Message:
24

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: 24
at 
__randomizedtesting.SeedInfo.seed([1785A46AE47690B4:6F84B492479EAEE5]:0)
at 
org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:235)
at 
org.apache.lucene.util.automaton.TestAutomaton.makeBinaryInterval(TestAutomaton.java:1159)
at 
org.apache.lucene.util.automaton.TestAutomaton.testMakeBinaryIntervalFiniteCasesRandom(TestAutomaton.java:1243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:804)




Build Log:
[...truncated 339 lines...]
   [junit4] Suite: org.apache.lucene.util.automaton.TestAutomaton
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestAutomaton 
-Dtests.method=testMakeBinaryIntervalFiniteCasesRandom 
-Dtests.seed=1785A46AE47690B4 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=lg-UG -Dtests.timezone=Etc/GMT-12 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.16s J0 | 
TestAutomaton.testMakeBinaryIntervalFiniteCasesRandom <<<
   [junit4]> Throwable #1: 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_72) - Build # 16091 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16091/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.BasicAuthIntegrationTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([17523C86B675B2FD]:0)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([17523C86B675B2FD]:0)




Build Log:
[...truncated 12461 lines...]
   [junit4] Suite: org.apache.solr.security.BasicAuthIntegrationTest
   [junit4]   2> 1991630 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[17523C86B675B2FD]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1991636 INFO  (Thread-5551) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1991636 INFO  (Thread-5551) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1991736 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[17523C86B675B2FD]) [] 
o.a.s.c.ZkTestServer start zk server on port:54504
   [junit4]   2> 1991736 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[17523C86B675B2FD]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1991736 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[17523C86B675B2FD]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1991789 INFO  (zkCallback-3184-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@439afb21 
name:ZooKeeperConnection Watcher:127.0.0.1:54504 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1991789 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[17523C86B675B2FD]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1991790 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[17523C86B675B2FD]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1991790 INFO  
(TEST-BasicAuthIntegrationTest.testBasics-seed#[17523C86B675B2FD]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml
   [junit4]   2> 1991876 INFO  (jetty-launcher-3183-thread-3) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 1991876 INFO  (jetty-launcher-3183-thread-1) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 1991876 INFO  (jetty-launcher-3183-thread-5) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 1991876 INFO  (jetty-launcher-3183-thread-4) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 1991876 INFO  (jetty-launcher-3183-thread-2) [] 
o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 1991878 INFO  (jetty-launcher-3183-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6ae97d2{/solr,null,AVAILABLE}
   [junit4]   2> 1991878 INFO  (jetty-launcher-3183-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@28d7d216{/solr,null,AVAILABLE}
   [junit4]   2> 1991878 INFO  (jetty-launcher-3183-thread-5) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6ed7d0db{/solr,null,AVAILABLE}
   [junit4]   2> 1991878 INFO  (jetty-launcher-3183-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@e107f0e{/solr,null,AVAILABLE}
   [junit4]   2> 1991878 INFO  (jetty-launcher-3183-thread-4) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@1601c32c{/solr,null,AVAILABLE}
   [junit4]   2> 1991880 INFO  (jetty-launcher-3183-thread-2) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@d4ae9a5{HTTP/1.1,[http/1.1]}{127.0.0.1:43131}
   [junit4]   2> 1991880 INFO  (jetty-launcher-3183-thread-1) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@2845afd7{HTTP/1.1,[http/1.1]}{127.0.0.1:60155}
   [junit4]   2> 1991881 INFO  (jetty-launcher-3183-thread-2) [] 
o.e.j.s.Server Started @1993464ms
   [junit4]   2> 1991880 INFO  (jetty-launcher-3183-thread-4) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@26b312b6{HTTP/1.1,[http/1.1]}{127.0.0.1:60949}
   [junit4]   2> 1991881 INFO  (jetty-launcher-3183-thread-2) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=43131}
   [junit4]   2> 1991881 INFO  (jetty-launcher-3183-thread-4) [] 
o.e.j.s.Server Started @1993466ms
   [junit4]   2> 1991881 INFO  (jetty-launcher-3183-thread-5) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@21f014f8{HTTP/1.1,[http/1.1]}{127.0.0.1:47606}
   [junit4]   2> 1991881 INFO  (jetty-launcher-3183-thread-2) [] 
o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 

[jira] [Resolved] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7063.
-
   Resolution: Fixed
Fix Version/s: 6.0
   master

> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.0
>
> Attachments: LUCENE-7063.patch, LUCENE-7063.patch
>
>
> Old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but it's confusing when upgrading because NumericUtils and 
> LegacyNumericUtils have overlapping APIs.
> One issue is that they share some identical methods that are completely 
> unrelated to this encoding (e.g. floatToSortableInt). The method is just 
> duplication and, worse, most Lucene code is still calling it from 
> LegacyNumericUtils, even stuff like faceting code using it with docvalues.
> Another issue is that the new NumericUtils methods (which use the full byte 
> range) have vague names, no javadocs, expose helper methods as public 
> unnecessarily, and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap. 
> LegacyNumericUtils should only contain legacy stuff!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179278#comment-15179278
 ] 

ASF subversion and git services commented on LUCENE-7063:
-

Commit 2aa412132d0a32ab8c9fab538ba55bf09d24cd90 in lucene-solr's branch 
refs/heads/branch_6_0 from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2aa4121 ]

LUCENE-7063: add tests/docs for numericutils, rename confusing methods, remove 
overlap with LegacyNumericUtils


> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.0
>
> Attachments: LUCENE-7063.patch, LUCENE-7063.patch
>
>
> Old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but it's confusing when upgrading because NumericUtils and 
> LegacyNumericUtils have overlapping APIs.
> One issue is that they share some identical methods that are completely 
> unrelated to this encoding (e.g. floatToSortableInt). The method is just 
> duplication and, worse, most Lucene code is still calling it from 
> LegacyNumericUtils, even stuff like faceting code using it with docvalues.
> Another issue is that the new NumericUtils methods (which use the full byte 
> range) have vague names, no javadocs, expose helper methods as public 
> unnecessarily, and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap. 
> LegacyNumericUtils should only contain legacy stuff!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr git commit: Fix TestBackwardsCompatibility.testAllVersionsTested to consider 5.5 as an expected version

2016-03-03 Thread Shalin Shekhar Mangar
Strangely, the same seed does not fail on branch_6_0 even though it
has the same trunk-only block of code.

On Fri, Mar 4, 2016 at 2:14 AM, Uwe Schindler  wrote:
> Hi,
>
> This may also need to be backported to 6.0 branch. We just have no jenkins 
> jobs up to now, so this may not have been detected!?
> I will try this out later and commit if needed.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>> -Original Message-
>> From: sha...@apache.org [mailto:sha...@apache.org]
>> Sent: Thursday, March 03, 2016 7:31 PM
>> To: comm...@lucene.apache.org
>> Subject: lucene-solr git commit: Fix
>> TestBackwardsCompatibility.testAllVersionsTested to consider 5.5 as an
>> expected version
>>
>> Repository: lucene-solr
>> Updated Branches:
>>   refs/heads/branch_6x e344ab1d0 -> 89a02361f
>>
>>
>> Fix TestBackwardsCompatibility.testAllVersionsTested to consider 5.5 as an
>> expected version
>>
>>
>> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
>> Commit: http://git-wip-us.apache.org/repos/asf/lucene-
>> solr/commit/89a02361
>> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/89a02361
>> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/89a02361
>>
>> Branch: refs/heads/branch_6x
>> Commit: 89a02361fe5eb6c4ae84a1bf773b72658ecf2f44
>> Parents: e344ab1
>> Author: Shalin Shekhar Mangar 
>> Authored: Fri Mar 4 00:00:37 2016 +0530
>> Committer: Shalin Shekhar Mangar 
>> Committed: Fri Mar 4 00:00:37 2016 +0530
>>
>> --
>>  .../lucene/index/TestBackwardsCompatibility.java  | 18 --
>>  1 file changed, 18 deletions(-)
>> --
>>
>>
>> http://git-wip-us.apache.org/repos/asf/lucene-
>> solr/blob/89a02361/lucene/backward-
>> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
>> --
>> diff --git a/lucene/backward-
>> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
>> b/lucene/backward-
>> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
>> index 57b8d2d..f8956d5 100644
>> --- a/lucene/backward-
>> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
>> +++ b/lucene/backward-
>> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
>> @@ -416,24 +416,6 @@ public class TestBackwardsCompatibility extends
>> LuceneTestCase {
>>}
>>  }
>>
>> -// BEGIN TRUNK ONLY BLOCK
>> -// on trunk, the last release of the prev major release is also untested
>> -Version lastPrevMajorVersion = null;
>> -for (java.lang.reflect.Field field : Version.class.getDeclaredFields()) 
>> {
>> -  if (Modifier.isStatic(field.getModifiers()) && field.getType() ==
>> Version.class) {
>> -Version v = (Version)field.get(Version.class);
>> -Matcher constant = constantPattern.matcher(field.getName());
>> -if (constant.matches() == false) continue;
>> -if (v.major == Version.LATEST.major - 1 &&
>> -(lastPrevMajorVersion == null || 
>> v.onOrAfter(lastPrevMajorVersion)))
>> {
>> -  lastPrevMajorVersion = v;
>> -}
>> -  }
>> -}
>> -assertNotNull(lastPrevMajorVersion);
>> -expectedVersions.remove(lastPrevMajorVersion.toString() + "-cfs");
>> -// END TRUNK ONLY BLOCK
>> -
>>  Collections.sort(expectedVersions);
>>
>>  // find what versions we are testing
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>



-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8783) The creation of replicas in split shard is contrived

2016-03-03 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179274#comment-15179274
 ] 

Shalin Shekhar Mangar commented on SOLR-8783:
-

SOLR-7673 is the reason why it is done this way. However, now I think we can 
eliminate the step which creates the replicas in the cluster state. We still 
need to perform the actual add replica call after the slice state has been 
updated. I'll create a patch.

> The creation of replicas in split shard is contrived
> 
>
> Key: SOLR-8783
> URL: https://issues.apache.org/jira/browse/SOLR-8783
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Shalin Shekhar Mangar
>
> The replica creation in splitshard() is done in two steps: first the entry is 
> created in the clusterstate and subsequently addReplica() is called. The 
> latter step can be eliminated and the code can be simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8728) Splitting a shard of a collection created with a rule fails with NPE

2016-03-03 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179260#comment-15179260
 ] 

Shalin Shekhar Mangar commented on SOLR-8728:
-

Thanks Noble. 
{code}
+  List<String> subSliceNames = new ArrayList<>();
+  for (int i = 0; i < subSlices.size(); i++) subSliceNames.add(slice + "_" + i);
{code}

This seems redundant because the "subSlices" list already has all the sub-slice 
names.

The rest looks good!

> Splitting a shard of a collection created with a rule fails with NPE
> 
>
> Key: SOLR-8728
> URL: https://issues.apache.org/jira/browse/SOLR-8728
> Project: Solr
>  Issue Type: Bug
>Reporter: Shai Erera
>Assignee: Noble Paul
> Attachments: SOLR-8728.patch, SOLR-8728.patch
>
>
> Spinoff from this discussion: http://markmail.org/message/f7liw4hqaagxo7y2
> I wrote a short test which reproduces, will upload shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_72) - Build # 5665 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5665/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 12 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: 
org.eclipse.jgit.api.errors.TransportException: Connection reset
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:639)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
at ..remote call to Windows VBOX(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor544.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy51.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
Caused by: org.eclipse.jgit.api.errors.TransportException: Connection reset
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:139)
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:637)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: org.eclipse.jgit.errors.TransportException: Connection reset
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:182)
at 
org.eclipse.jgit.transport.TransportGitAnon$TcpFetchConnection.(TransportGitAnon.java:194)
at 
org.eclipse.jgit.transport.TransportGitAnon.openFetch(TransportGitAnon.java:120)
at 
org.eclipse.jgit.transport.FetchProcess.executeImp(FetchProcess.java:136)
at 
org.eclipse.jgit.transport.FetchProcess.execute(FetchProcess.java:122)
at org.eclipse.jgit.transport.Transport.fetch(Transport.java:1138)
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:130)
... 11 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)

[jira] [Commented] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179234#comment-15179234
 ] 

ASF subversion and git services commented on LUCENE-7063:
-

Commit bea235f711d03165cd92f7a59c7de54ff2785d98 in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bea235f ]

LUCENE-7063: add tests/docs for numericutils, rename confusing methods, remove 
overlap with LegacyNumericUtils


> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7063.patch, LUCENE-7063.patch
>
>
> Old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but it's confusing when upgrading because NumericUtils and 
> LegacyNumericUtils have overlapping APIs.
> One issue is that they share some identical methods that are completely 
> unrelated to this encoding (e.g. floatToSortableInt). The method is just 
> duplication and, worse, most Lucene code is still calling it from 
> LegacyNumericUtils, even stuff like faceting code using it with docvalues.
> Another issue is that the new NumericUtils methods (which use the full byte 
> range) have vague names, no javadocs, expose helper methods as public 
> unnecessarily, and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap. 
> LegacyNumericUtils should only contain legacy stuff!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179198#comment-15179198
 ] 

ASF subversion and git services commented on LUCENE-7063:
-

Commit 3ffeccab7e9949d7cc1e43027d9347a8968131b2 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3ffecca ]

LUCENE-7063: add tests/docs for numericutils, rename confusing methods, remove 
overlap with LegacyNumericUtils


> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7063.patch, LUCENE-7063.patch
>
>
> Old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but it's confusing when upgrading because NumericUtils and 
> LegacyNumericUtils have overlapping APIs.
> One issue is that they share some identical methods that are completely 
> unrelated to this encoding (e.g. floatToSortableInt). The method is just 
> duplication and, worse, most Lucene code is still calling it from 
> LegacyNumericUtils, even stuff like faceting code using it with docvalues.
> Another issue is that the new NumericUtils methods (which use the full byte 
> range) have vague names, no javadocs, expose helper methods as public 
> unnecessarily, and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap. 
> LegacyNumericUtils should only contain legacy stuff!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-03-03 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8769.

Resolution: Fixed

> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}
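
For illustration, the corrected exclusion would look roughly like the sketch 
below (combining the two snippets from the description above; not necessarily 
the committed change):

{code}
// Use the schema's uniqueKey field instead of the hard-coded "id".
String uniqueKey = req.getSchema().getUniqueKeyField().getName();
realMLTQuery.add(createIdQuery(uniqueKey, id), BooleanClause.Occur.MUST_NOT);
{code}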



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179108#comment-15179108
 ] 

ASF subversion and git services commented on SOLR-8769:
---

Commit 18874ababc73404356bd24fef2687d33f9489887 in lucene-solr's branch 
refs/heads/branch_6_0 from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=18874ab ]

SOLR-8769: Fix document exclusion in mlt query parser in Cloud mode for schemas 
that have non-'id' unique field


> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_72) - Build # 7 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/7/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=9264, 
name=testExecutor-3834-thread-2, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=9264, name=testExecutor-3834-thread-2, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:35876
at __randomizedtesting.SeedInfo.seed([3B3F58B15ED69265]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest$1.run(BasicDistributedZkTest.java:586)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:35876
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest$1.run(BasicDistributedZkTest.java:584)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11413 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.UnloadDistributedZkTest_3B3F58B15ED69265-001/init-core-data-001
   [junit4]   2> 1122998 INFO  
(SUITE-UnloadDistributedZkTest-seed#[3B3F58B15ED69265]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1123001 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[3B3F58B15ED69265]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1123001 INFO  (Thread-2806) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1123001 INFO  (Thread-2806) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1123101 INFO  

[jira] [Updated] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7063:

Attachment: LUCENE-7063.patch

Updated patch: adds TestNumericUtils.

I was surprised this class was missing entirely!

I "ported" many methods from TestLegacyNumericUtils to the new encodings: these 
test round-trip, compare, explicitly test "special" values for each type. I 
added BigInteger versions of each too.

I also added random tests for each type (including bigint) that just do simple 
round-tripping and comparisons.

There were tests for NumericUtils binary add() and subtract() methods, but 
these were in TestBKD! I moved those to this test, too.
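
For readers unfamiliar with these helpers, here is a small, self-contained 
sketch (not part of the patch) of the round-trip and ordering properties the 
tests assert, using the float/int sortable conversion that both NumericUtils 
and LegacyNumericUtils currently expose:
{code}
import org.apache.lucene.util.NumericUtils;

public class SortableRoundTripDemo {
  public static void main(String[] args) {
    // Round trip: encoding to a sortable int and back must return the same float.
    float value = -1.5f;
    int sortable = NumericUtils.floatToSortableInt(value);
    assert NumericUtils.sortableIntToFloat(sortable) == value;

    // Ordering: comparing the sortable ints agrees with comparing the floats.
    int smaller = NumericUtils.floatToSortableInt(-2.0f);
    assert Integer.compare(smaller, sortable) < 0;

    System.out.println("round trip and ordering hold for " + value);
  }
}
{code}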

> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7063.patch, LUCENE-7063.patch
>
>
> Old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but its confusing when upgrading because NumericUtils and 
> LegacyNumericUtils have overlaps in the APIs.
> One issue is they share some exact methods that are completely unrelated to 
> this encoding (e.g. floatToSortableInt). The method is just duplication and 
> worse, most Lucene code is still calling it from LegacyNumericUtils, even 
> stuff like faceting code using it with docvalues.
> Another issue is that the new NumericUtils methods (which use the full byte range) 
> have vague names, no javadocs, expose helper methods as public unnecessarily, 
> and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap. 
> LegacyNumericUtils should only contain legacy stuff!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-03-03 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179007#comment-15179007
 ] 

Anshum Gupta commented on SOLR-8769:


I'm waiting to hear back from Nick before I port this for the 6.0 release.

> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-03-03 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178886#comment-15178886
 ] 

Steve Rowe edited comment on LUCENE-6993 at 3/4/16 12:00 AM:
-

Mike, can you please exclude generated files from your patch?  The patches here 
are way big, and reviewers/committers will want to regenerate anyway.


was (Author: steve_rowe):
Mike, can you please exclude generated files from your patch?  The patch here 
are way big, and reviewers/committers will want to regenerate anyway.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-03-03 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178886#comment-15178886
 ] 

Steve Rowe commented on LUCENE-6993:


Mike, can you please exclude generated files from your patch?  The patch here 
are way big, and reviewers/committers will want to regenerate anyway.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_72) - Build # 16090 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16090/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=4712, 
name=testExecutor-1755-thread-10, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=4712, name=testExecutor-1755-thread-10, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([661502A41E263D6D:EE413D7EB0DA5095]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:59661/zm/xg
at __randomizedtesting.SeedInfo.seed([661502A41E263D6D]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest$1.run(BasicDistributedZkTest.java:586)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:59661/zm/xg
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest$1.run(BasicDistributedZkTest.java:584)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11284 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_661502A41E263D6D-001/init-core-data-001
   [junit4]   2> 708161 INFO  
(SUITE-UnloadDistributedZkTest-seed#[661502A41E263D6D]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /zm/xg
   [junit4]   2> 708162 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[661502A41E263D6D]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 708162 INFO  (Thread-1305) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 

[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2016-03-03 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178872#comment-15178872
 ] 

Ben Manes commented on SOLR-8241:
-

Percentile stats are best obtained by the metrics library. The stats provided 
by Caffeine are monotonically increasing over the lifetime of the cache. This 
lets the percentiles over a time window be easily calculated by the metrics 
reporter.

The only native time statistic is the load time (cost of computing the entry on 
a miss) because it adds to the user-facing latency. All cache operations are 
O(1) and designed for concurrency, so broadly tracking time would be 
prohibitively expensive given how slow the native time methods are. From 
benchmarks I think the cache offers enough headroom to not be a bottleneck, so 
tracking the hit rate and minimizing the miss penalty are probably the more 
interesting areas to monitor.

I'm not sure what my next steps are to assist here, so let me know if I can be 
of further help.
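
As a concrete illustration of the snapshot-diffing approach described above (a 
sketch only, not Solr code; the cache contents and names are made up):
{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.stats.CacheStats;

public class WindowedCacheStatsDemo {
  public static void main(String[] args) {
    Cache<String, String> cache = Caffeine.newBuilder()
        .maximumSize(10_000)
        .recordStats()                    // counters only ever increase
        .build();

    CacheStats before = cache.stats();    // snapshot at the start of the window
    cache.put("k", "v");
    cache.getIfPresent("k");              // hit
    cache.getIfPresent("missing");        // miss
    CacheStats after = cache.stats();     // snapshot at the end of the window

    // A metrics reporter can subtract snapshots to get per-window figures.
    CacheStats window = after.minus(before);
    System.out.println("windowed hit rate: " + window.hitRate());
  }
}
{code}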

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: SOLR-8241.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178866#comment-15178866
 ] 

ASF subversion and git services commented on SOLR-8769:
---

Commit ba039f7c8c28518053776fe9952e5cb93c5b3f75 in lucene-solr's branch 
refs/heads/branch_6x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ba039f7 ]

SOLR-8769: Fix document exclusion in mlt query parser in Cloud mode for schemas 
that have non-'id' unique field


> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-03-03 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178861#comment-15178861
 ] 

Anshum Gupta commented on SOLR-8769:


Thanks for pointing this out [~ehatcher]. I committed this but forgot to 
specify the JIRA#.
I tried to amend the commit message but for some reason that isn't working. 
Until that happens, I thought I'd update the JIRA manually.

Here's the commit hash: 44d8ee9115ebcfdaba03238031b68a58dbcc4cd6

> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8786) Ensure Apache Zeppelin works with Solr's JDBC Driver

2016-03-03 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8786:
-
Description: There is already a ticket open to improve the JDBC driver to 
work with more clients, but I'd like to make a special ticket for Apache 
Zeppelin.   (was: There is already a ticket open to improve the JDBC driver to 
work with more clients, but I'd like to make a special ticket for Apache 
Zeppelin.)

> Ensure Apache Zeppelin works with Solr's JDBC Driver
> 
>
> Key: SOLR-8786
> URL: https://issues.apache.org/jira/browse/SOLR-8786
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> There is already a ticket open to improve the JDBC driver to work with more 
> clients, but I'd like to make a special ticket for Apache Zeppelin. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-03-03 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8769:
---
Fix Version/s: 6.0
   master

> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8786) Ensure Apache Zeppelin works with Solr's JDBC Driver

2016-03-03 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8786:


 Summary: Ensure Apache Zeppelin works with Solr's JDBC Driver
 Key: SOLR-8786
 URL: https://issues.apache.org/jira/browse/SOLR-8786
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein


There is already a ticket open to improve the JDBC driver to work with more 
clients, but I'd like to make a special ticket for Apache Zeppelin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8725.

Resolution: Fixed

> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid in identifiers for cores 
> (and collections?). Our Solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.
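
For context, a rough sketch of the kind of validation change involved 
(illustrative only; the pattern below is hypothetical and not the exact one in 
the committed patch): hyphens are accepted anywhere except as the leading 
character, alongside periods, underscores, and alphanumerics.
{code}
import java.util.regex.Pattern;

public class IdentifierCheckDemo {
  // Hypothetical pattern: '.', '_', '-', and alphanumerics allowed,
  // but the name may not start with '-'.
  private static final Pattern VALID_ID = Pattern.compile("(?!-)[\\._A-Za-z0-9-]+");

  public static void main(String[] args) {
    System.out.println(VALID_ID.matcher("marc-profiler_shard1_replica1").matches()); // true
    System.out.println(VALID_ID.matcher("-leading-hyphen").matches());               // false
  }
}
{code}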



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2016-03-03 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8423.

Resolution: Fixed

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8423.patch, SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-03-03 Thread Michael Nilsson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178779#comment-15178779
 ] 

Michael Nilsson commented on SOLR-8542:
---

Hey Christine, I've posted a response to most of your comments thus far below.

*doDeleteChild method makes no storeManagedData method call*
We have a ticket for this that we'll fix along with other improvements for our 
next commit.

*ManagedFeatureStore.doGet throws an exception when the childId concerned is 
not present*
We could return a response with no features if desired; we are currently using 
the error response to differentiate between a feature store that doesn't exist 
and one that exists but has no features added to it yet.

*ManagedResource.doPut addFeature could throw an exception when a name being 
updated/added already exists.  Should repeats of the same name simply replace 
the existing entry for that name?*
Typically when you have models deployed using some features, you don't want to 
"update" an existing feature. You should instead add a new feature with your 
updates and deploy a newly trained model using it, because you don't want the 
meaning/value of the original feature used by historical models to change.  
This is to ensure reproducible results when testing an old model that used the 
old version of the feature.  We use this error to prevent this from happening.

*LTRComponent state + use of state separation. Would feature store and model 
store changes still propagate through to ltr_ms*
If you deploy new features to your feature store, you would want to start 
extracting those features, which means we should propagate them down.  We could 
make feature stores write-once, and any new features would require a new 
feature store with all the old ones copied over to avoid this, but that might 
be cumbersome to the user and leave lots of old feature stores around until the 
user cleans them up.
Question: The only reason we currently have the LTRComponent is so that it can 
register the Model and Feature stores as managed resources because it can be 
SolrCore aware.  Is there a way we can do this without the use of a component?

*Branch/commit process*
Everything you said sounds do-able.  The only question I have is regarding 
"'git merge' and 'git rebase' and 'git --force push' will be avoided".  Agreed 
about git force, but if at the end we're going to make a new 
master-ltr-plugin-rfc-march branch, and everything is going to be squashed and 
rebased, why not allow merges into the master-ltr-plugin-rfc to keep up to date 
with master changes instead of cherry-picking everything one by one into it?

*Feature engineering dummy model replacement*
Currently, as you said, you have to use a dummy model to reference the features 
you want extracted.
{code}fv=true&fl=*,score,[features]&rq={!ltr model=dummyModel 
reRankDocs=25}{code}
The only reason you need the model is that it has a FeatureStore, which holds 
all the features you are looking to extract.  Instead, we are planning to allow 
you to specify the FeatureStore you want to use for feature extraction directly 
in the features Document Transformer.  We will also remove the superfluous 
fv=true parameter, since the document transformer already indicates that you 
want to extract features.  The new expected sample request for feature 
extraction would probably look something like this instead:
{code}fl=*,score,[features featureStore=MyFeatures]{code}

*would the efi. parameters move out of the rq*
We will probably move efi out as well, since you need those parameters for both 
feature extraction and reranking with a model.

*might it be useful to have optional version and/or comment string elements in 
the feature*
I think the comment section would be a good idea.  The version touches on what 
I mentioned earlier about updates vs adds.  We'll have to think about the 
best way to handle this since you don't want to lose/replace versions 1 and 2 
when you deploy version 3 of a feature.

*Could you clarify/outline when/how the "store" element would be used?*
A FeatureStore is a list of features that you want to extract (and use for 
training, logging, or in a model for reranking).  In the majority of the cases, 
you will probably just have 1 feature store, and all iterations of your models 
will use the same feature store, with any new features added to the store.  A 
model cannot use features from other stores.  It may be the case that a single 
collection services many different applications.  If each of those applications 
wants to rerank its results differently and only cares about a subset of 
features, then they could each make their own FeatureStores with their say 100 
features for extraction instead of pulling out the thousands of other features 
that all the other teams made for that same collection.

*Are feature and model stores local to each solr config or can they be shared 
across 

[JENKINS] Lucene-Solr-Tests-master - Build # 944 - Still Failing

2016-03-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/944/

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'null' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":4, "params":{   "x":{ "a":"A val", 
"b":"B val", "_appends_":{"add":"first"}, 
"_invariants_":{"fixed":"f"}, "":{"v":1}},   "y":{ "p":"P 
val", "q":"Q val", "":{"v":2}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'null' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":4,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"_appends_":{"add":"first"},
"_invariants_":{"fixed":"f"},
"":{"v":1}},
  "y":{
"p":"P val",
"q":"Q val",
"":{"v":2}
at 
__randomizedtesting.SeedInfo.seed([A0842F48F8778C56:28D01092568BE1AE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:264)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8777) Duplicate Solr process can cripple a running process

2016-03-03 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178770#comment-15178770
 ] 

Scott Blum commented on SOLR-8777:
--

Not completely related, but it seems like there's a bug in jetty's 
SocketConnector.  It uses the ServerSocket constructor that automatically binds 
the port, then attempts to call setReuseAddress(), which makes no sense.  It 
should use the other constructor, set the reuse_address option, then call 
bind() manually.
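
A small sketch of the order being suggested, using the plain JDK socket API 
(not Jetty's actual code; the port is arbitrary):
{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindOrderDemo {
  public static void main(String[] args) throws Exception {
    // Create the socket unbound, set SO_REUSEADDR first, then bind:
    // the option only matters if it is set before the bind happens.
    ServerSocket socket = new ServerSocket();
    socket.setReuseAddress(true);
    socket.bind(new InetSocketAddress(8983));
    socket.close();
  }
}
{code}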

In other news, I don't know that there's a way to change Jetty's startup 
sequence... the best I could do is try to use reflection to pull the connectors 
off the Server and start them early.  But that seems ungood.

I suppose we could spin for a while waiting for the previous ephemeral node to 
disappear, and if it doesn't, error out and refuse to start?

> Duplicate Solr process can cripple a running process
> 
>
> Key: SOLR-8777
> URL: https://issues.apache.org/jira/browse/SOLR-8777
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3.1
>Reporter: Shalin Shekhar Mangar
>
> Thanks to [~mewmewball] for catching this one.
> Accidentally executing the same instance of Solr twice causes the second 
> start instance to die with an "Address already in use", but not before 
> deleting the first instance's live_node entry, emitting "Found a previous 
> node that still exists while trying to register a new live node  - 
> removing existing node to create another".
> The second start instance dies and its ephemeral node is then removed, 
> causing /live_nodes/ to be empty since the first start instance's 
> live_node was deleted by the second.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-03-03 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178765#comment-15178765
 ] 

Robert Muir commented on LUCENE-6993:
-

{quote}
Had issues with TestUAX29URLEmailTokenizer.testLongEMAILatomText taking a 
while, not sure if that's part of the same issue or not.
{quote}

Well, this test is already marked {{@Slow}} and just took 41.2s on my machine. 
Were you seeing stuff like that? As far as I know from the original issue, 
there were tests for this bug that would basically never finish at all.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7913) Add stream.body support to MLT QParser

2016-03-03 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178757#comment-15178757
 ] 

Upayavira commented on SOLR-7913:
-

[~smolloy] Looking at this patch, there's quite a lot of reformatting in it, 
which makes it hard to distinguish between substantive changes and layout 
changes. Could you provide a patch containing only the substantive changes? Whilst I 
won't (yet) promise to commit it, I'd certainly like to see if we can get it to 
that point.

> Add stream.body support to MLT QParser
> --
>
> Key: SOLR-7913
> URL: https://issues.apache.org/jira/browse/SOLR-7913
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
> Attachments: SOLR-7913.patch, SOLR-7913.patch
>
>
> Continuing from 
> https://issues.apache.org/jira/browse/SOLR-7639?focusedCommentId=14601011=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14601011.
> It'd be good to have stream.body be supported by the mlt qparser.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7065) Fix explain for global ordinal query time join

2016-03-03 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-7065:
--
Attachment: LUCENE_7065.patch

Patch with the fix and a test.

> Fix explain for global ordinal query time join
> --
>
> Key: LUCENE-7065
> URL: https://issues.apache.org/jira/browse/LUCENE-7065
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Martijn van Groningen
> Attachments: LUCENE_7065.patch
>
>
> The explain method for the global ordinal join is broken: even when a 
> document doesn't match the query, it tries to create an explanation that 
> says it does. 
> When score mode 'avg' is used this causes an NPE, and in the other cases the 
> returned explanation indicates that a document matches while it doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7065) Fix explain for global ordinal query time join

2016-03-03 Thread Martijn van Groningen (JIRA)
Martijn van Groningen created LUCENE-7065:
-

 Summary: Fix explain for global ordinal query time join
 Key: LUCENE-7065
 URL: https://issues.apache.org/jira/browse/LUCENE-7065
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Martijn van Groningen


The explain method for the global ordinal join is broken: even when a document 
doesn't match the query, it tries to create an explanation that says it does. 

When score mode 'avg' is used this causes an NPE, and in the other cases the 
returned explanation indicates that a document matches while it doesn't.
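
The general shape of the fix (a sketch of the usual Lucene Weight#explain 
pattern, not the attached patch; the description strings are made up): only 
build a matching explanation when the scorer actually advances to the document, 
and return a no-match explanation otherwise.
{code}
// Sketch of the common pattern inside a Weight implementation.
@Override
public Explanation explain(LeafReaderContext context, int doc) throws IOException {
  Scorer scorer = scorer(context);
  if (scorer != null && scorer.iterator().advance(doc) == doc) {
    return Explanation.match(scorer.score(), "a document matching the join query");
  }
  return Explanation.noMatch("not a match");
}
{code}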



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-03-03 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated LUCENE-6993:
--
Attachment: LUCENE-6993.patch

New patch, takes care of all 5 generated tokenizers.

This patch is built using jflex 1.6.1 and unicode 7, so that we can at least 
have something in time for 6.0.

I looked at the new generated jflex code and I think it takes care of the 
buffer expansion issue. At the very least, our existing StandardAnalyzer tests 
pass. Still need to have a macro for fixing buffersize, though.

Had issues with TestUAX29URLEmailTokenizer.testLongEMAILatomText taking a 
while, not sure if that's part of the same issue or not.

Also, I moved the jflex version to the properties file with everything else 
instead of setting it directly in build.xml.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2016-03-03 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178727#comment-15178727
 ] 

Shawn Heisey commented on SOLR-8241:


bq. neither Guava or Caffeine bothered to include a {{put}} statistic

For all the Solr cache implementations currently shipping, the put operation is 
pretty much identical, so stats are not likely to be very interesting when 
comparing implementations.  The situation is similar for lookups.  I am not at 
all worried about seeing time stats on either of those, unless it's easy and 
really fast to obtain.

I think that hit ratio is the most important statistic for cache performance, 
and it's already available.  Eviction performance is important, though.  The 
count of evictions, also present currently, is useful.  The speed of evictions, 
in conjunction with the count, can help decide whether the cache is too slow.

If the implementation itself happens to track stats, that's awesome, but I'm 
after more than a calculation of the average time.  Percentile stats give a 
clearer picture of what's happening than plain averages.  I'd love to have the 
same detail on speed data that we got for QTime with SOLR-1972.
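
For reference, the kind of percentile detail being asked for is what a 
Dropwizard Metrics Timer snapshot exposes (a generic sketch, not Solr code; the 
metric name is made up):
{code}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Snapshot;
import com.codahale.metrics.Timer;

public class EvictionTimingDemo {
  public static void main(String[] args) {
    MetricRegistry registry = new MetricRegistry();
    Timer evictionTimer = registry.timer("cache.evictions");   // hypothetical name

    try (Timer.Context ignored = evictionTimer.time()) {
      // ... the work being measured, e.g. one eviction ...
    }

    // Snapshot values are the raw recorded durations (nanoseconds by default).
    Snapshot snapshot = evictionTimer.getSnapshot();
    System.out.println("p75: " + snapshot.get75thPercentile());
    System.out.println("p99: " + snapshot.get99thPercentile());
  }
}
{code}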


> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: SOLR-8241.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8785) Use Metrics library for core metrics

2016-03-03 Thread Jeff Wartes (JIRA)
Jeff Wartes created SOLR-8785:
-

 Summary: Use Metrics library for core metrics
 Key: SOLR-8785
 URL: https://issues.apache.org/jira/browse/SOLR-8785
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.1
Reporter: Jeff Wartes


The Metrics library (https://dropwizard.github.io/metrics/3.1.0/) is a 
well-known way to track metrics about applications. 

In SOLR-1972, latency percentile tracking was added. The comment list is long, 
so here’s my synopsis:

1. An attempt was made to use the Metrics library
2. That attempt failed due to a memory leak in Metrics v2.1.1
3. Large parts of Metrics were then copied wholesale into the 
org.apache.solr.util.stats package space and that was used instead.

Copy/pasting Metrics code into Solr may have been the correct solution at the 
time, but I submit that it isn’t correct any more. 
The leak in Metrics was fixed even before SOLR-1972 was released, and by 
copy/pasting a subset of the functionality, we lose access to other important 
things that the Metrics library provides, particularly the concept of a 
Reporter. (https://dropwizard.github.io/metrics/3.1.0/manual/core/#reporters)

Further, Metrics v3.0.2 is already packaged with Solr anyway, because it’s used 
in two contrib modules (map-reduce and morphlines-core).

I’m proposing that:

1. Metrics as bundled with Solr be upgraded to the current v3.1.2
2. Most of the org.apache.solr.util.stats package space be deleted outright, or 
gutted and replaced with simple calls to Metrics. Due to the copy/paste origin, 
the concepts should mostly map 1:1.

I’d further recommend a usage pattern like:
SharedMetricRegistries.getOrCreate(System.getProperty("solr.metrics.registry", 
"solr-registry"))

There are all kinds of areas in Solr that could benefit from metrics tracking 
and reporting. This pattern allows diverse areas of code to track metrics 
within a single, named registry. This well-known-name then becomes a handle you 
can use to easily attach a Reporter and ship all of those metrics off-box.
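
A minimal sketch of that usage pattern (the metric name is illustrative, and 
ConsoleReporter stands in for whatever reporter would actually ship the metrics 
off-box):
{code}
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.SharedMetricRegistries;
import java.util.concurrent.TimeUnit;

public class SharedRegistryDemo {
  public static void main(String[] args) {
    // Any area of the code can look up the same well-known registry by name.
    MetricRegistry registry = SharedMetricRegistries.getOrCreate(
        System.getProperty("solr.metrics.registry", "solr-registry"));

    registry.meter("requests").mark();    // hypothetical metric

    // A reporter attached to that well-known name sees everything registered above.
    ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
        .convertDurationsTo(TimeUnit.MILLISECONDS)
        .build();
    reporter.report();                    // emit one report for the demo
    reporter.stop();
  }
}
{code}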



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+107) - Build # 6 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/6/
Java: 32bit/jdk-9-ea+107 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Connection reset

Stack Trace:
java.net.SocketException: Connection reset
at 
__randomizedtesting.SeedInfo.seed([FAA4377049FF33AE:515E2A659623B580]:0)
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:158)
at 
org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:50)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:195)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
at 
org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178686#comment-15178686
 ] 

ASF subversion and git services commented on SOLR-8725:
---

Commit 2fef533fe97be90ffb41daee83b4d05c88cc3a7a in lucene-solr's branch 
refs/heads/branch_6_0 from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2fef533 ]

SOLR-8725: Fix precommit check


> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178687#comment-15178687
 ] 

ASF subversion and git services commented on SOLR-8423:
---

Commit ca03639e6d5fbae924060fdb0b087189bb65a75d in lucene-solr's branch 
refs/heads/branch_6_0 from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ca03639 ]

SOLR-8423: DeleteShard and DeleteReplica should cleanup instance and data 
directory by default and add support for optionally retaining the directories
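
The change described above means DELETESHARD and DELETEREPLICA now remove the
instance and data directories unless told otherwise. As a rough sketch only (the
host, collection, shard, and retain-directory parameter names below are
assumptions for illustration, not taken from the patch), a call opting out of the
cleanup might look like:

  import java.io.IOException;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class DeleteShardSketch {
    public static void main(String[] args) throws IOException {
      // Hypothetical host/collection/shard; the deleteInstanceDir/deleteDataDir
      // parameter names are assumptions for illustration, not confirmed by the patch.
      URL url = new URL("http://localhost:8983/solr/admin/collections"
          + "?action=DELETESHARD&collection=mycoll&shard=shard1"
          + "&deleteInstanceDir=false&deleteDataDir=false&wt=json");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestMethod("GET");
      System.out.println("HTTP " + conn.getResponseCode());
      conn.disconnect();
    }
  }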


> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8423.patch, SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178685#comment-15178685
 ] 

ASF subversion and git services commented on SOLR-8725:
---

Commit 3f15560b519f41eb579d114363a4874aa585b324 in lucene-solr's branch 
refs/heads/branch_6_0 from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3f15560b ]

SOLR-8725: Allow hyphen in shard, collection, core, and alias names but not the 
first char
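
The rule in the commit message (hyphen permitted anywhere except as the first
character, alongside periods, underscores, and alphanumerics) can be pictured as a
single regular expression. The sketch below is an assumption for illustration and
is not the pattern committed to SolrIdentifierValidator:

  import java.util.regex.Pattern;

  public class IdentifierSketch {
    // Sketch only: periods, underscores, hyphens, and alphanumerics are allowed,
    // but a leading hyphen is rejected. Not the exact pattern used by Solr.
    private static final Pattern ID_PATTERN =
        Pattern.compile("^(?!-)[._A-Za-z0-9-]*$");

    public static boolean isValid(String name) {
      return name != null && !name.isEmpty() && ID_PATTERN.matcher(name).matches();
    }

    public static void main(String[] args) {
      System.out.println(isValid("marc-profiler_shard1_replica1")); // true
      System.out.println(isValid("-badname"));                      // false (leading hyphen)
      System.out.println(isValid("bad name"));                      // false (whitespace)
    }
  }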


> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178679#comment-15178679
 ] 

ASF subversion and git services commented on SOLR-8423:
---

Commit 638b145376baea5281273bb90cedd8f69fecfa9f in lucene-solr's branch 
refs/heads/branch_6x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=638b145 ]

SOLR-8423: DeleteShard and DeleteReplica should cleanup instance and data 
directory by default and add support for optionally retaining the directories


> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8423.patch, SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16089 - Failure!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16089/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 30 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: 
org.eclipse.jgit.api.errors.TransportException: Connection reset
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:639)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
Caused by: org.eclipse.jgit.api.errors.TransportException: Connection reset
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:139)
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:637)
... 12 more
Caused by: org.eclipse.jgit.errors.TransportException: Connection reset
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:182)
at 
org.eclipse.jgit.transport.TransportGitAnon$TcpFetchConnection.<init>(TransportGitAnon.java:194)

at 
org.eclipse.jgit.transport.TransportGitAnon.openFetch(TransportGitAnon.java:120)
at 
org.eclipse.jgit.transport.FetchProcess.executeImp(FetchProcess.java:136)
at 
org.eclipse.jgit.transport.FetchProcess.execute(FetchProcess.java:122)
at org.eclipse.jgit.transport.Transport.fetch(Transport.java:1138)
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:130)
... 13 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.eclipse.jgit.util.IO.readFully(IO.java:246)
at 
org.eclipse.jgit.transport.PacketLineIn.readLength(PacketLineIn.java:186)
at 
org.eclipse.jgit.transport.PacketLineIn.readString(PacketLineIn.java:138)
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefsImpl(BasePackConnection.java:195)
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:176)
... 19 more
ERROR: null
Retrying after 10 seconds
Fetching changes from the remote Git repository
Cleaning workspace
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: 
org.eclipse.jgit.api.errors.TransportException: Connection reset
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:639)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
Caused by: org.eclipse.jgit.api.errors.TransportException: Connection reset
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:139)
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:637)
... 12 more
Caused by: org.eclipse.jgit.errors.TransportException: Connection reset
 

Re: Dropping branch_5x Jenkins jobs

2016-03-03 Thread Steve Rowe
Thanks Uwe, I’ve created the other 5.5 jobs now too.

--
Steve
www.lucidworks.com

> On Mar 3, 2016, at 4:00 PM, Uwe Schindler  wrote:
> 
> Hi,
> 
> I created the smoker job already. The other ones are not yet there. If you 
> look at the smoker job you see how it's done ("Tools Environment" plugin and 
> then Ant's Properties under ANT's "extended").
> 
> The other ones should be easy: just clone a 6.x job and change the branch to 
> "*/branch_5_5" and the Java version to "latest1.7" (from latest1.8).
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> 
>> -Original Message-
>> From: Steve Rowe [mailto:sar...@gmail.com]
>> Sent: Thursday, March 03, 2016 5:58 PM
>> To: dev@lucene.apache.org
>> Subject: Re: Dropping branch_5x Jenkins jobs
>> 
>> Oh, thanks Uwe, I hadn’t looked at Jenkins yet today :)
>> 
>> I’ll recreate the 5.5 jobs as you suggest on ASF Jenkins (and disable them),
>> and leave the smoker job fixups to you.
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>>> On Mar 3, 2016, at 11:51 AM, Uwe Schindler  wrote:
>>> 
>>> Hi,
>>> 
>>> Sorry, the 5.x ones are already gone! I touched and cleaned up every
>> Jenkins Job today (trunk -> master). I also added the 6.x Jobs.
>>> To recreate the 5.5 Jobs you have to clone a 6.x Job, but don't forget to
>> change the JVM from "latest1.8" to "latest1.7". The Smoker Job is more
>> complicated, because it checks both JVMs, but I can set that up again - there
>> is a plugin for that.
>>> 
>>> The 5.5 jobs are still running on Policeman Jenkins, so it is not urgent.
>>> 
>>> Uwe
>>> 
>>> -
>>> Uwe Schindler
>>> H.-H.-Meier-Allee 63, D-28213 Bremen
>>> http://www.thetaphi.de
>>> eMail: u...@thetaphi.de
>>> 
>>> 
 -Original Message-
 From: Steve Rowe [mailto:sar...@gmail.com]
 Sent: Thursday, March 03, 2016 5:39 PM
 To: Lucene Dev 
 Subject: Dropping branch_5x Jenkins jobs
 
 Assuming there won’t be a 5.6 release, we should drop the 5.x jobs on
 Jenkins, and disable but keep around the 5.5 jobs, to be used for a 5.5.1
 release.
 
 FYI, I removed the 5.5 jobs from ASF Jenkins a couple days ago
>> (accidentally
 for the first one, intending to just disable, but after I removed the first
>> one I
 just continued down that path…), but these are easy enough to clone
>> from
 the existing branch_5x jobs.
 
 Thoughts?
 
 --
 Steve
 www.lucidworks.com
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [2/2] lucene-solr git commit: SOLR-8725: Fix precommit check

2016-03-03 Thread Anshum Gupta
fixed

On Thu, Mar 3, 2016 at 1:23 PM,  wrote:

> SOLR-8725: Fix precommit check
>
>
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/73d2d112
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/73d2d112
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/73d2d112
>
> Branch: refs/heads/branch_6x
> Commit: 73d2d1125f43d6f482b0be7ecfccd673d1fe6d41
> Parents: 7e59ba4
> Author: anshum 
> Authored: Thu Mar 3 13:03:31 2016 -0800
> Committer: anshum 
> Committed: Thu Mar 3 13:22:42 2016 -0800
>
> --
>  .../solr/client/solrj/util/SolrIdentifierValidator.java  | 11 ++-
>  1 file changed, 6 insertions(+), 5 deletions(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73d2d112/solr/solrj/src/java/org/apache/solr/client/solrj/util/SolrIdentifierValidator.java
> --
> diff --git
> a/solr/solrj/src/java/org/apache/solr/client/solrj/util/SolrIdentifierValidator.java
> b/solr/solrj/src/java/org/apache/solr/client/solrj/util/SolrIdentifierValidator.java
> index 2b1f3b5..449c621 100644
> ---
> a/solr/solrj/src/java/org/apache/solr/client/solrj/util/SolrIdentifierValidator.java
> +++
> b/solr/solrj/src/java/org/apache/solr/client/solrj/util/SolrIdentifierValidator.java
> @@ -1,7 +1,3 @@
> -package org.apache.solr.client.solrj.util;
> -
> -import java.util.regex.Pattern;
> -
>  /*
>   * Licensed to the Apache Software Foundation (ASF) under one or more
>   * contributor license agreements.  See the NOTICE file distributed with
> @@ -18,6 +14,10 @@ import java.util.regex.Pattern;
>   * See the License for the specific language governing permissions and
>   * limitations under the License.
>   */
> +package org.apache.solr.client.solrj.util;
> +
> +import java.util.Locale;
> +import java.util.regex.Pattern;
>
>  /**
>   * Ensures that provided identifiers align with Solr's
> recommendations/requirements for choosing
> @@ -52,7 +52,8 @@ public class SolrIdentifierValidator {
>}
>
>public static String getIdentifierMessage(IdentifierType
> identifierType, String name) {
> -  return "Invalid " + identifierType.toString().toLowerCase() + ": "
> + name + ". " + identifierType.toString().toLowerCase()
> +  return "Invalid " +
> identifierType.toString().toLowerCase(Locale.ROOT) + ": " + name + ". "
> +  + identifierType.toString().toLowerCase(Locale.ROOT)
>+ " names must consist entirely of periods, underscores,
> hyphens, and alphanumerics";
>
>}
>
>
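
The precommit failure fixed here was presumably the forbidden-API check on
default-locale String.toLowerCase(), which can produce different results depending
on the JVM's locale. A minimal sketch of the difference the Locale.ROOT change
guards against:

  import java.util.Locale;

  public class LocaleLowerCaseSketch {
    public static void main(String[] args) {
      String s = "COLLECTION";
      // Locale-sensitive: under a Turkish locale, 'I' lowercases to dotless U+0131.
      System.out.println(s.toLowerCase(new Locale("tr", "TR"))); // "collectıon"
      // Locale-independent, which is what the committed fix switches to.
      System.out.println(s.toLowerCase(Locale.ROOT));            // "collection"
    }
  }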


[jira] [Commented] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178646#comment-15178646
 ] 

ASF subversion and git services commented on SOLR-8725:
---

Commit 73d2d1125f43d6f482b0be7ecfccd673d1fe6d41 in lucene-solr's branch 
refs/heads/branch_6x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=73d2d11 ]

SOLR-8725: Fix precommit check


> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178645#comment-15178645
 ] 

ASF subversion and git services commented on SOLR-8725:
---

Commit 7e59ba4220d836d205f454a35a9c8ae192c28a26 in lucene-solr's branch 
refs/heads/branch_6x from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7e59ba4 ]

SOLR-8725: Allow hyphen in shard, collection, core, and alias names but not the 
first char


> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6568) Join Discovery Contrib

2016-03-03 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178638#comment-15178638
 ] 

Joel Bernstein commented on SOLR-6568:
--

Yes, the distributed joins in the Streaming API have superseded this ticket. 
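
For reference, the superseding functionality is exposed through Streaming
Expressions and the /stream handler. A SolrJ sketch follows; the collection names,
fields, join key, and zkHost are hypothetical, and the innerJoin syntax should be
checked against the Streaming Expressions documentation:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.CloudSolrClient;

  public class StreamJoinSketch {
    public static void main(String[] args) throws Exception {
      // Hypothetical collections and fields joined on an integer key; both sides
      // are sorted on the join key, as streaming joins require.
      String expr = "innerJoin("
          + "search(genes, q=\"*:*\", fl=\"gene_id,symbol\", sort=\"gene_id asc\"),"
          + "search(proteins, q=\"*:*\", fl=\"protein_id,gene_id\", sort=\"gene_id asc\"),"
          + "on=\"gene_id\")";
      SolrQuery params = new SolrQuery();
      params.set("expr", expr);
      params.setRequestHandler("/stream");
      try (CloudSolrClient client = new CloudSolrClient("zk1:2181/solr")) { // hypothetical zkHost
        System.out.println(client.query("genes", params));
      }
    }
  }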

> Join Discovery Contrib
> --
>
> Key: SOLR-6568
> URL: https://issues.apache.org/jira/browse/SOLR-6568
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0
>
>
> This contribution was commissioned by the *NCBI* (National Center for 
> Biotechnology Information). 
> The Join Discovery Contrib is a set of Solr plugins that support large scale 
> joins and "join facets" between Solr cores. 
> There are two different Join implementations included in this contribution. 
> Both implementations are designed to work with integer join keys. It is very 
> common in large BioInformatic and Genomic databases to use integer primary 
> and foreign keys. Integer keys allow Bioinformatic and Genomic search engines 
> and discovery tools to perform complex operations on large data sets very 
> efficiently. 
> The Join Discovery Contrib provides features that will be applicable to 
> anyone working with the freely available databases from the NCBI and likely a 
> large number of other BioInformatic and Genomic databases. These features are 
> not specific though to Bioinformatics and Genomics, they can be used in any 
> datasets where integer keys are used to define the primary and foreign keys.
> What is included in this contrib:
> 1) A new JoinComponent. This component is used instead of the standard 
> QueryComponent. It facilitates very large scale relational joins between two 
> Solr indexes (cores). The join algorithm used in this component is known as a 
> *parallel partitioned merge join*. This is an algorithm which partitions the 
> results from both sides of the join and then sorts and merges the partitions 
> in parallel. 
>  Below are some of its features:
> * Sub-second performance on very large joins. The parallel join algorithm is 
> capable of sub-second performance on joins with tens of millions of records 
> on both sides of the join.
> * The JoinComponent returns "tuples" with fields from both sides of the join. 
> The initial release returns the primary keys from both sides of the join and 
> the join key. 
> * The tuples also include, and are ranked by, a combined score from both 
> sides of the join.
> * Special purpose memory-mapped on-disk indexes to support \*:\* joins. This 
> makes it possible to join an entire index with a sub-set of another index 
> with sub-second performance. 
> * Support for very fast one-to-one, one-to-many and many-to-many joins. Fast 
> many-to-many joins make it possible to join between indexes on multi-value 
> fields. 
> 2) A new JoinFacetComponent. This component provides facets for both indexes 
> involved in the join. 
> 3) The BitSetJoinQParserPlugin. A very fast parallel filter join based on 
> bitsets that supports infinite levels of nesting. It can be used as a filter 
> query in combination with the JoinComponent or with the standard query 
> component. 
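
The "parallel partitioned merge join" described above reduces, per partition, to a
sort-merge join over integer keys. A single-threaded sketch of that merge step,
with made-up key arrays (the contrib itself partitions the inputs and runs this in
parallel):

  public class MergeJoinSketch {
    public static void main(String[] args) {
      int[] left  = {1, 3, 3, 7, 9};    // e.g. keys from one core, sorted
      int[] right = {3, 3, 4, 7, 10};   // e.g. foreign keys from another core, sorted
      int i = 0, j = 0;
      while (i < left.length && j < right.length) {
        if (left[i] < right[j]) {
          i++;
        } else if (left[i] > right[j]) {
          j++;
        } else {
          // Emit the cross product of equal-key runs (handles many-to-many joins).
          int key = left[i];
          int iEnd = i; while (iEnd < left.length && left[iEnd] == key) iEnd++;
          int jEnd = j; while (jEnd < right.length && right[jEnd] == key) jEnd++;
          for (int a = i; a < iEnd; a++)
            for (int b = j; b < jEnd; b++)
              System.out.println("joined on key " + key + " (L" + a + ", R" + b + ")");
          i = iEnd;
          j = jEnd;
        }
      }
    }
  }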



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178636#comment-15178636
 ] 

ASF subversion and git services commented on SOLR-8725:
---

Commit 7daad8d7d17b429adbd6cf61474a81b7c7bdf9c9 in lucene-solr's branch 
refs/heads/master from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7daad8d ]

SOLR-8725: Fix precommit check


> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr git commit: SOLR-8725: Allow hyphen in shard, collection, core, and alias names but not the first char

2016-03-03 Thread Anshum Gupta
This broke precommit; I'll commit a fix.

On Thu, Mar 3, 2016 at 10:53 AM,  wrote:

> Repository: lucene-solr
> Updated Branches:
>   refs/heads/master a079ff252 -> 6de2b7dbd
>
>
> SOLR-8725: Allow hyphen in shard, collection, core, and alias names but
> not the first char
>
>
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/6de2b7db
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/6de2b7db
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/6de2b7db
>
> Branch: refs/heads/master
> Commit: 6de2b7dbd17fc70fc9b2b053fe2628534116309b
> Parents: a079ff2
> Author: anshum 
> Authored: Wed Mar 2 16:18:42 2016 -0800
> Committer: anshum 
> Committed: Thu Mar 3 10:04:07 2016 -0800
>
> --
>  solr/CHANGES.txt|  2 ++
>  .../org/apache/solr/core/CoreContainer.java |  9 -
>  .../solr/handler/admin/CollectionsHandler.java  | 17 +++--
>  .../apache/solr/cloud/TestCollectionAPI.java|  8 
>  .../solrj/request/CollectionAdminRequest.java   | 20 ++--
>  .../client/solrj/request/CoreAdminRequest.java  |  8 
>  .../solrj/util/SolrIdentifierValidator.java | 16 +---
>  .../request/TestCollectionAdminRequest.java |  8 
>  .../client/solrj/request/TestCoreAdmin.java |  4 ++--
>  9 files changed, 50 insertions(+), 42 deletions(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6de2b7db/solr/CHANGES.txt
> --
> diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
> index a96282b..66e6bb0 100644
> --- a/solr/CHANGES.txt
> +++ b/solr/CHANGES.txt
> @@ -382,6 +382,8 @@ Other Changes
>
>  * SOLR-8778: Deprecate CSVStrategy's setters, and make its pre-configured
> strategies immutable. (Steve Rowe)
>
> +* SOLR-8725: Allow hyphen in collection, core, shard, and alias name as
> the non-first character (Anshum Gupta)
> +
>  ==  5.5.1 ==
>
>  Bug Fixes
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6de2b7db/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> --
> diff --git a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> index 7a55e05..9ff45ea 100644
> --- a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> +++ b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> @@ -805,8 +805,7 @@ public class CoreContainer {
>  try {
>MDCLoggingContext.setCore(core);
>if (!SolrIdentifierValidator.validateCoreName(dcore.getName())) {
> -throw new SolrException(ErrorCode.BAD_REQUEST, "Invalid core: " +
> dcore.getName()
> -+ ". Core names must consist entirely of periods,
> underscores, and alphanumerics");
> +throw new SolrException(ErrorCode.BAD_REQUEST,
> SolrIdentifierValidator.getIdentifierMessage(SolrIdentifierValidator.IdentifierType.CORE,
> dcore.getName()));
>}
>if (zkSys.getZkController() != null) {
>  zkSys.getZkController().preRegister(dcore);
> @@ -1010,9 +1009,9 @@ public class CoreContainer {
>}
>
>public void rename(String name, String toName) {
> -if(!SolrIdentifierValidator.validateCoreName(toName)) {
> -  throw new SolrException(ErrorCode.BAD_REQUEST, "Invalid core: " +
> toName
> -  + ". Core names must consist entirely of periods, underscores,
> and alphanumerics");
> +if (!SolrIdentifierValidator.validateCoreName(toName)) {
> +  throw new SolrException(ErrorCode.BAD_REQUEST,
> SolrIdentifierValidator.getIdentifierMessage(SolrIdentifierValidator.IdentifierType.CORE,
> +  toName));
>  }
>  try (SolrCore core = getCore(name)) {
>if (core != null) {
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6de2b7db/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
> --
> diff --git
> a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
> b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
> index c81e183..ce4eab2 100644
> ---
> a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
> +++
> b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java
> @@ -347,14 +347,12 @@ public class CollectionsHandler extends
> RequestHandlerBase {
>  verifyRuleParams(h.coreContainer, props);
>  final String collectionName = (String) props.get(NAME);
>  if
> 

RE: Dropping branch_5x Jenkins jobs

2016-03-03 Thread Uwe Schindler
Hi,

I created the smoker job already. The other ones are not yet there. If you look 
at the smoker job you see how it's done ("Tools Environment" plugin and then 
Ant's Properties under ANT's "extended").

The other ones should be easy: just clone a 6.x job and change the branch to 
"*/branch_5_5" and the Java version to "latest1.7" (from latest1.8).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Steve Rowe [mailto:sar...@gmail.com]
> Sent: Thursday, March 03, 2016 5:58 PM
> To: dev@lucene.apache.org
> Subject: Re: Dropping branch_5x Jenkins jobs
> 
> Oh, thanks Uwe, I hadn’t looked at Jenkins yet today :)
> 
> I’ll recreate the 5.5 jobs as you suggest on ASF Jenkins (and disable them),
> and leave the smoker job fixups to you.
> 
> --
> Steve
> www.lucidworks.com
> 
> > On Mar 3, 2016, at 11:51 AM, Uwe Schindler  wrote:
> >
> > Hi,
> >
> > Sorry, the 5.x ones are already gone! I touched and cleaned up every
> Jenkins Job today (trunk -> master). I also added the 6.x Jobs.
> > To recreate the 5.5 Jobs you have to clone a 6.x Job, but don't forget to
> change the JVM from "latest1.8" to "latest1.7". The Smoker Job is more
> complicated, because it checks both JVMs, but I can set that up again - there
> is a plugin for that.
> >
> > The 5.5 jobs are still running on Policeman Jenkins, so it is not urgent.
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> >> -Original Message-
> >> From: Steve Rowe [mailto:sar...@gmail.com]
> >> Sent: Thursday, March 03, 2016 5:39 PM
> >> To: Lucene Dev 
> >> Subject: Dropping branch_5x Jenkins jobs
> >>
> >> Assuming there won’t be a 5.6 release, we should drop the 5.x jobs on
> >> Jenkins, and disable but keep around the 5.5 jobs, to be used for a 5.5.1
> >> release.
> >>
> >> FYI, I removed the 5.5 jobs from ASF Jenkins a couple days ago
> (accidentally
> >> for the first one, intending to just disable, but after I removed the first
> one I
> >> just continued down that path…), but these are easy enough to clone
> from
> >> the existing branch_5x jobs.
> >>
> >> Thoughts?
> >>
> >> --
> >> Steve
> >> www.lucidworks.com
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8784) Add support for CloudSolrClient (zkHost) to SolrEntityProcessor

2016-03-03 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-8784:
---
Attachment: SOLR-8784.patch

Patch adding zkHost and collection parameters to SolrEntityProcessor.  The 
collection parameter can also work in conjunction with the url parameter, if 
the URL is properly formed and ends with the context path (usually /solr).
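
For context, a ZooKeeper-aware connection is made in SolrJ roughly as follows,
which is presumably what the zkHost parameter wires into SolrEntityProcessor; the
ensemble address and collection name are hypothetical:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.CloudSolrClient;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class ZkHostSketch {
    public static void main(String[] args) throws Exception {
      // Hypothetical ZooKeeper ensemble and collection name.
      try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181/solr")) {
        client.setDefaultCollection("collection1");
        QueryResponse rsp = client.query(new SolrQuery("*:*"));
        System.out.println("Found " + rsp.getResults().getNumFound() + " docs");
      }
    }
  }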

> Add support for CloudSolrClient (zkHost) to SolrEntityProcessor
> ---
>
> Key: SOLR-8784
> URL: https://issues.apache.org/jira/browse/SOLR-8784
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 5.5
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-8784.patch
>
>
> SolrEntityProcessor should provide a mechanism to connect to a full cloud 
> install as well as a single Solr server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8784) Add support for CloudSolrClient (zkHost) to SolrEntityProcessor

2016-03-03 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-8784:
--

 Summary: Add support for CloudSolrClient (zkHost) to 
SolrEntityProcessor
 Key: SOLR-8784
 URL: https://issues.apache.org/jira/browse/SOLR-8784
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 5.5
Reporter: Shawn Heisey
Priority: Minor


SolrEntityProcessor should provide a mechanism to connect to a full cloud 
install as well as a single Solr server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8778) Deprecate CSVStrategy's setters, and make its pre-configured strategies immutable

2016-03-03 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178593#comment-15178593
 ] 

Uwe Schindler commented on SOLR-8778:
-

bq. I'll hold off creating a JIRA to remove the deprecated stuff in master, 
since as Uwe says we should instead remove the whole fork.

We should investigate this. The issue with a patch is SOLR-3213, but I gave up 
because fixing it requires understanding the whole CSV handler code (and I was 
not familiar with it when I tried).

> Deprecate CSVStrategy's setters, and make its pre-configured strategies 
> immutable
> -
>
> Key: SOLR-8778
> URL: https://issues.apache.org/jira/browse/SOLR-8778
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master, 6.0
>
> Attachments: SOLR-8778.patch
>
>
> Removing some deprecated things in CSVStrategy (SOLR-8764) exposed a bug: 
> it's possible to redefine the public static 
> {{CSVStrategy.\{DEFAULT,EXCEL,TDF}_STRATEGY}} strategies, simply by calling 
> their setters.
> Right now that's happening in {{CSVParserTest.testUnicodeEscape()}}, where 
> the default unicode escape interpretation is changed from false to true.  And 
> then if that test happens to run before 
> {{CSVStrategyTest.testSetCSVStrategy()}}, which tests that the unicode escape 
> interpretation on the default strategy is set to false, then the latter will 
> fail.
> Example failures: 
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/16079/ and 
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/3126/
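
The hazard described above is the usual shared-mutable-static problem. An abstract
sketch, using a hypothetical Strategy class rather than CSVStrategy's real API, of
how one test's setter call leaks into another:

  public class SharedStaticSketch {
    // Hypothetical stand-in for a "pre-configured strategy" with setters.
    static class Strategy {
      private boolean unicodeEscapes;
      void setUnicodeEscapes(boolean v) { unicodeEscapes = v; }
      boolean unicodeEscapes() { return unicodeEscapes; }
    }

    // Exposed as a shared, mutable instance rather than an immutable value.
    static final Strategy DEFAULT = new Strategy();

    public static void main(String[] args) {
      // Test A flips a flag on the shared default...
      DEFAULT.setUnicodeEscapes(true);
      // ...and test B, which assumes the documented default of false, now fails
      // whenever it happens to run after test A.
      System.out.println("default unicodeEscapes = " + DEFAULT.unicodeEscapes());
    }
  }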



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: lucene-solr git commit: Fix TestBackwardsCompatibility.testAllVersionsTested to consider 5.5 as an expected version

2016-03-03 Thread Uwe Schindler
Hi,

This may also need to be backported to the 6.0 branch. We just have no Jenkins 
jobs for it up to now, so this may not have been detected. I will try this out 
later and commit if needed.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: sha...@apache.org [mailto:sha...@apache.org]
> Sent: Thursday, March 03, 2016 7:31 PM
> To: comm...@lucene.apache.org
> Subject: lucene-solr git commit: Fix
> TestBackwardsCompatibility.testAllVersionsTested to consider 5.5 as an
> expected version
> 
> Repository: lucene-solr
> Updated Branches:
>   refs/heads/branch_6x e344ab1d0 -> 89a02361f
> 
> 
> Fix TestBackwardsCompatibility.testAllVersionsTested to consider 5.5 as an
> expected version
> 
> 
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-
> solr/commit/89a02361
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/89a02361
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/89a02361
> 
> Branch: refs/heads/branch_6x
> Commit: 89a02361fe5eb6c4ae84a1bf773b72658ecf2f44
> Parents: e344ab1
> Author: Shalin Shekhar Mangar 
> Authored: Fri Mar 4 00:00:37 2016 +0530
> Committer: Shalin Shekhar Mangar 
> Committed: Fri Mar 4 00:00:37 2016 +0530
> 
> --
>  .../lucene/index/TestBackwardsCompatibility.java  | 18 --
>  1 file changed, 18 deletions(-)
> --
> 
> 
> http://git-wip-us.apache.org/repos/asf/lucene-
> solr/blob/89a02361/lucene/backward-
> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
> --
> diff --git a/lucene/backward-
> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
> b/lucene/backward-
> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
> index 57b8d2d..f8956d5 100644
> --- a/lucene/backward-
> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
> +++ b/lucene/backward-
> codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
> @@ -416,24 +416,6 @@ public class TestBackwardsCompatibility extends
> LuceneTestCase {
>}
>  }
> 
> -// BEGIN TRUNK ONLY BLOCK
> -// on trunk, the last release of the prev major release is also untested
> -Version lastPrevMajorVersion = null;
> -for (java.lang.reflect.Field field : Version.class.getDeclaredFields()) {
> -  if (Modifier.isStatic(field.getModifiers()) && field.getType() ==
> Version.class) {
> -Version v = (Version)field.get(Version.class);
> -Matcher constant = constantPattern.matcher(field.getName());
> -if (constant.matches() == false) continue;
> -if (v.major == Version.LATEST.major - 1 &&
> -(lastPrevMajorVersion == null || 
> v.onOrAfter(lastPrevMajorVersion)))
> {
> -  lastPrevMajorVersion = v;
> -}
> -  }
> -}
> -assertNotNull(lastPrevMajorVersion);
> -expectedVersions.remove(lastPrevMajorVersion.toString() + "-cfs");
> -// END TRUNK ONLY BLOCK
> -
>  Collections.sort(expectedVersions);
> 
>  // find what versions we are testing


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4164) Result Grouping fails if no hits

2016-03-03 Thread Webster Homer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178532#comment-15178532
 ] 

Webster Homer commented on SOLR-4164:
-

Found that setting group.limit to -1 gives the same failure. We had code that did 
this with the intent of getting all the documents in the roll-up. We limit it to 
500 anyway, so setting group.limit=500 was a decent workaround. Still, this worked 
fine with standalone Solr; only SolrCloud had the problem.
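
A minimal SolrJ sketch of the workaround described above: cap group.limit at a
positive value instead of -1 for grouped queries against SolrCloud (the grouping
field is hypothetical):

  import org.apache.solr.client.solrj.SolrQuery;

  public class GroupLimitSketch {
    public static void main(String[] args) {
      SolrQuery q = new SolrQuery("*:*");
      q.set("group", true);
      q.set("group.field", "product_id");  // hypothetical grouping field
      // group.limit=-1 ("all docs per group") trips the "numHits must be > 0"
      // error on distributed requests; use a finite cap instead.
      q.set("group.limit", 500);
      System.out.println(q);
    }
  }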

> Result Grouping fails if no hits
> 
>
> Key: SOLR-4164
> URL: https://issues.apache.org/jira/browse/SOLR-4164
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other, SolrCloud
>Affects Versions: 4.0
>Reporter: Lance Norskog
>
> In SolrCloud, found a result grouping bug in the 4.0 release.
> A distributed result grouping request under SolrCloud got this result:
> {noformat}
> Dec 10, 2012 10:32:07 PM org.apache.solr.common.SolrException log
> SEVERE: null:java.lang.IllegalArgumentException: numHits must be > 0; please 
> use TotalHitCountCollector if you just need the total hit count
> at 
> org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:1120)
> at 
> org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:1069)
> at 
> org.apache.lucene.search.grouping.AbstractSecondPassGroupingCollector.<init>(AbstractSecondPassGroupingCollector.java:75)
> at 
> org.apache.lucene.search.grouping.term.TermSecondPassGroupingCollector.<init>(TermSecondPassGroupingCollector.java:49)
> at 
> org.apache.solr.search.grouping.distributed.command.TopGroupsFieldCommand.create(TopGroupsFieldCommand.java:128)
> at 
> org.apache.solr.search.grouping.CommandHandler.execute(CommandHandler.java:132)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:339)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178507#comment-15178507
 ] 

ASF subversion and git services commented on SOLR-8423:
---

Commit 9c777ab5adfd07e49310a5fb091d8bac611ef0ba in lucene-solr's branch 
refs/heads/master from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c777ab ]

SOLR-8423: DeleteShard and DeleteReplica should cleanup instance and data 
directory by default and add support for optionally retaining the directories


> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8423.patch, SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat, though I don't see why this should 
> not clean up the instance dir, we should at least provide an option to clean 
> up everything and make it default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_72) - Build # 5664 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5664/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.core.TestLazyCores.testLazyLoad

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([2C91FEFC0E8FA5C7]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([2C91FEFC0E8FA5C7]:0)




Build Log:
[...truncated 12488 lines...]
   [junit4] Suite: org.apache.solr.core.TestLazyCores
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestLazyCores_2C91FEFC0E8FA5C7-001\init-core-data-001
   [junit4]   2> 2263360 INFO  
(SUITE-TestLazyCores-seed#[2C91FEFC0E8FA5C7]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true)
   [junit4]   2> 2263361 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.SolrTestCaseJ4 ###Starting testBadConfigsGenerateErrors
   [junit4]   2> 2263446 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestLazyCores_2C91FEFC0E8FA5C7-001\tempDir-001'
   [junit4]   2> 2263446 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 2263446 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.SolrResourceLoader solr home defaulted to 'solr/' (could not find 
system property or JNDI)
   [junit4]   2> 2263454 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.CorePropertiesLocator Config-defined core root directory: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestLazyCores_2C91FEFC0E8FA5C7-001\tempDir-001
   [junit4]   2> 2263454 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.CoreContainer New CoreContainer 1810349193
   [junit4]   2> 2263454 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestLazyCores_2C91FEFC0E8FA5C7-001\tempDir-001]
   [junit4]   2> 2263454 WARN  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.CoreContainer Couldn't add files from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestLazyCores_2C91FEFC0E8FA5C7-001\tempDir-001\lib
 to classpath: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestLazyCores_2C91FEFC0E8FA5C7-001\tempDir-001\lib
   [junit4]   2> 2263455 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.h.c.HttpShardHandlerFactory created with socketTimeout : 
60,connTimeout : 6,maxConnectionsPerHost : 20,maxConnections : 
1,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 
5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
   [junit4]   2> 2263457 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with 
params: socketTimeout=60=6=true
   [junit4]   2> 2263458 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.l.LogWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 2263458 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.l.LogWatcher Registering Log Listener [Log4j 
(org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 2263458 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.CoreContainer Security conf doesn't exist. Skipping setup for 
authorization module.
   [junit4]   2> 2263458 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.CoreContainer No authentication plugin used.
   [junit4]   2> 2263459 INFO  
(TEST-TestLazyCores.testBadConfigsGenerateErrors-seed#[2C91FEFC0E8FA5C7]) [
] o.a.s.c.CorePropertiesLocator Looking for core definitions underneath 

[jira] [Commented] (SOLR-7010) Remove facet.date client functionality from master

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178475#comment-15178475
 ] 

ASF subversion and git services commented on SOLR-7010:
---

Commit 79a7008b7206123821367e78b95ff2b86ae308a3 in lucene-solr's branch 
refs/heads/branch_6_0 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=79a7008 ]

SOLR-7010: Remove facet.date client functionality


> Remove facet.date client functionality from master
> --
>
> Key: SOLR-7010
> URL: https://issues.apache.org/jira/browse/SOLR-7010
> Project: Solr
>  Issue Type: Task
>Reporter: Alan Woodward
>Assignee: Steve Rowe
> Fix For: master, 6.0
>
> Attachments: SOLR-7010.patch, SOLR-7010.patch
>
>
> See comments in SOLR-6976.  We should log a warning when a client using 5.x 
> includes a facet.date param (both in the solr log, and in the response), and 
> remove the functionality entirely from trunk.
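
As a generic sketch of the 5.x-side behaviour proposed here (warn when the removed
parameter shows up), not the actual Solr code; the hook and its caller are
hypothetical:

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class DeprecatedParamSketch {
    private static final Logger log = LoggerFactory.getLogger(DeprecatedParamSketch.class);

    // Hypothetical hook: called with each incoming request's parameter names.
    static String checkFacetDate(Iterable<String> paramNames) {
      for (String p : paramNames) {
        if (p.equals("facet.date") || p.startsWith("facet.date.")) {
          String msg = "facet.date is deprecated; use facet.range instead";
          log.warn(msg);   // warning in the solr log
          return msg;      // the caller can also echo this into the response
        }
      }
      return null;
    }
  }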



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7010) Remove facet.date client functionality from master

2016-03-03 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-7010.
--
Resolution: Fixed

> Remove facet.date client functionality from master
> --
>
> Key: SOLR-7010
> URL: https://issues.apache.org/jira/browse/SOLR-7010
> Project: Solr
>  Issue Type: Task
>Reporter: Alan Woodward
>Assignee: Steve Rowe
> Fix For: master, 6.0
>
> Attachments: SOLR-7010.patch, SOLR-7010.patch
>
>
> See comments in SOLR-6976.  We should log a warning when a client using 5.x 
> includes a facet.date param (both in the solr log, and in the response), and 
> remove the functionality entirely from trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7010) Remove facet.date client functionality from master

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178468#comment-15178468
 ] 

ASF subversion and git services commented on SOLR-7010:
---

Commit d0d75c448ed78b928af3c8f20ded210ce102d4d4 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d0d75c4 ]

SOLR-7010: Remove facet.date client functionality


> Remove facet.date client functionality from master
> --
>
> Key: SOLR-7010
> URL: https://issues.apache.org/jira/browse/SOLR-7010
> Project: Solr
>  Issue Type: Task
>Reporter: Alan Woodward
>Assignee: Steve Rowe
> Fix For: master, 6.0
>
> Attachments: SOLR-7010.patch, SOLR-7010.patch
>
>
> See comments in SOLR-6976.  We should log a warning when a client using 5.x 
> includes a facet.date param (both in the solr log, and in the response), and 
> remove the functionality entirely from trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 1 - Failure!

2016-03-03 Thread Michael McCandless
Thanks for fixing this, Shalin; the test passes for me now!

> assert is40Index; // NOTE: currently we can only do this on trunk!

LOL how confusing.  I suspect the comment is (very) stale, and what
the comment really means is "NOTE: we can only do this for all indices
>= 4.0", i.e. when doc values were added.  Maybe simply remove the
comment?  The comment above (// true if this is a 4.0+ index) is
correct.

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7010) Remove facet.date client functionality from master

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178451#comment-15178451
 ] 

ASF subversion and git services commented on SOLR-7010:
---

Commit d0279b8d5f6c2b4a640ac4738d63a1da0cd79005 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d0279b8 ]

SOLR-7010: Remove facet.date client functionality


> Remove facet.date client functionality from master
> --
>
> Key: SOLR-7010
> URL: https://issues.apache.org/jira/browse/SOLR-7010
> Project: Solr
>  Issue Type: Task
>Reporter: Alan Woodward
>Assignee: Steve Rowe
> Fix For: master, 6.0
>
> Attachments: SOLR-7010.patch, SOLR-7010.patch
>
>
> See comments in SOLR-6976.  We should log a warning when a client using 5.x 
> includes a facet.date param (both in the solr log, and in the response), and 
> remove the functionality entirely from trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 943 - Failure

2016-03-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/943/

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([8939859171C5B300:7E4A6BC9B72D1CE6]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1244)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11247 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 

[jira] [Resolved] (SOLR-8764) Remove all deprecated methods and classes from master prior to the 6.0 release

2016-03-03 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-8764.
--
Resolution: Fixed

> Remove all deprecated methods and classes from master prior to the 6.0 release
> --
>
> Key: SOLR-8764
> URL: https://issues.apache.org/jira/browse/SOLR-8764
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: 6.0
>
> Attachments: SOLR-8764.patch, SOLR-8764.patch
>
>
> Code marked as deprecated with {{@Deprecated}} and/or {{@deprecated}} should 
> be removed from master, unless it's being used internally, or the annotations 
> are there as markers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Anshum Gupta
Sure, I will. Thanks !

On Thu, Mar 3, 2016 at 11:20 AM, Nicholas Knize  wrote:

> I quickly skimmed the patches. I'm OK with them being backported to 6.0.
> Can you mark the Fix Version/s accordingly?
>
> Thanks!
>
> On Thu, Mar 3, 2016 at 1:11 PM, Anshum Gupta 
> wrote:
>
>> The releases are demanding, specially major versions, so thanks for all
>> the effort Nick.
>>
>> I would like to commit SOLR-8423 and SOLR-8725 to 6.0. They aren't
>> blockers but are bugs and the patch for both are ready.
>>
>> If you are fine with it, I'll commit to 6.0 else, I'd push it out with
>> 6.1. SOLR-8725 is certainly something that I'd push out with 5.5.1 (and if
>> 5.6 happens, with 5.6).
>>
>> On Thu, Mar 3, 2016 at 10:28 AM, Nicholas Knize  wrote:
>>
>>> > hours (acceptable), not days (unacceptable).
>>>
>>> ++ I definitely agree with this. And it looks like the time period here
>>> was less than a day?
>>>
>>> >  there were multiple questions about it from more than one person
>>> over a couple days
>>>
>>> ?? I do not see these questions? They're certainly not in this thread
>>> which is where all of the branching was being discussed. If there are
>>> separate conversation threads then I think as the RM I should know about
>>> them?
>>>
>>> > If you’re going to be AFK for extended periods, please let people
>>> know.
>>>
>>> ++ This is definitely important. I'm not sure I agree that < 24 hours
>>> constitutes an extended period in this case. Especially given that its the
>>> first major release on the git infrastructure?
>>>
>>> Regardless, thank you to everyone that helped settle these branches.
>>>
>>> - Nick
>>>
>>>
>>>
>>> On Thu, Mar 3, 2016 at 12:09 PM, Steve Rowe  wrote:
>>>
 First, Nick, thanks for your RM work.

 > On Mar 3, 2016, at 12:53 PM, Nicholas Knize  wrote:

 > > The mistake was to freeze the 6x branch in the first place. The
 release branch is the one which should be frozen.
 >
 >  I certainly agree with this. However, over a week ago there was a
 request to hold off on creating the 6_0 branch until Jenkins settled with a
 6x. I received no push back on this suggestion so this was the plan that
 was executed (several days after that request was sent).

 I guess I took this as meaning a freeze on *branch_6x* of hours
 (acceptable), not days (unacceptable).

 > I think Mike is suggesting, and I agree with this, there needs to be
 a reasonable amount of time given for someone to respond.


 My impression was that you were intentionally ignoring questions about
 creation of the 6.0 branch, since there were multiple questions about it
 from more than one person over a couple days with no response from you, but
 meanwhile, you responded on other threads.  (Sorry, I haven’t gone back and
 found the exact messages that left me with this impression, so I guess I
 could be wrong.)

 One of the RM’s most important responsibilities is timely
 communication.  If you’re going to be AFK for extended periods, please let
 people know.

 --
 Steve
 www.lucidworks.com


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


>>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>


-- 
Anshum Gupta


[jira] [Updated] (SOLR-8423) DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA

2016-03-03 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8423:
---
Fix Version/s: 6.0
   master

> DELETESHARD should cleanup the instance and data directory, like DELETEREPLICA
> --
>
> Key: SOLR-8423
> URL: https://issues.apache.org/jira/browse/SOLR-8423
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8423.patch, SOLR-8423.patch, SOLR-8423.patch
>
>
> DELETESHARD only cleans up the index directory and not the instance/data 
> directory. DELETEREPLICA on the other hand cleans up the data and instance 
> directory.
> DELETESHARD should clean up the instance and data directory, so that we don't 
> leak disk space on executing the command.
> If we think this would break back-compat (though I don't see why this should 
> not clean up the instance dir), we should at least provide an option to clean 
> up everything and make it the default in 6.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8725:
---
Fix Version/s: 6.0
   master

> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0
>
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8778) Deprecate CSVStrategy's setters, and make its pre-configured strategies immutable

2016-03-03 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-8778.
--
   Resolution: Fixed
 Assignee: Steve Rowe
Fix Version/s: 6.0
   master

Resolving as fixed.  I'll hold off creating a JIRA to remove the deprecated 
stuff in master, since as Uwe says we should instead remove the whole fork.

> Deprecate CSVStrategy's setters, and make its pre-configured strategies 
> immutable
> -
>
> Key: SOLR-8778
> URL: https://issues.apache.org/jira/browse/SOLR-8778
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master, 6.0
>
> Attachments: SOLR-8778.patch
>
>
> Removing some deprecated things in CSVStrategy (SOLR-8764) exposed a bug: 
> it's possible to redefine the public static 
> {{CSVStrategy.\{DEFAULT,EXCEL,TDF}_STRATEGY}} strategies, simply by calling 
> their setters.
> Right now that's happening in {{CSVParserTest.testUnicodeEscape()}}, where 
> the default unicode escape interpretation is changed from false to true.  And 
> then if that test happens to run before 
> {{CSVStrategyTest.testSetCSVStrategy()}}, which tests that the unicode escape 
> interpretation on the default strategy is set to false, then the latter will 
> fail.
> Example failures: 
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/16079/ and 
> http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/3126/
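
For illustration, a toy Java sketch of the shared-mutable-static hazard described
above; the Strategy class and field names are made-up stand-ins, not the real
CSVStrategy API:

public class MutableStaticHazard {
  // Stand-in for a pre-configured strategy object that exposes a public setter.
  static final class Strategy {
    private boolean unicodeEscape;
    Strategy(boolean unicodeEscape) { this.unicodeEscape = unicodeEscape; }
    void setUnicodeEscape(boolean v) { unicodeEscape = v; }   // the problematic setter
    boolean isUnicodeEscape() { return unicodeEscape; }
  }

  // Shared across every test in the JVM, like a public static DEFAULT_STRATEGY.
  static final Strategy DEFAULT_STRATEGY = new Strategy(false);

  public static void main(String[] args) {
    // One test mutates the shared default...
    DEFAULT_STRATEGY.setUnicodeEscape(true);
    // ...and any later test that assumes the original value fails,
    // depending purely on execution order.
    System.out.println(DEFAULT_STRATEGY.isUnicodeEscape());   // prints true, not false
  }
}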



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Nicholas Knize
I quickly skimmed the patches. I'm OK with them being backported to 6.0.
Can you mark the Fix Version/s accordingly?

Thanks!

On Thu, Mar 3, 2016 at 1:11 PM, Anshum Gupta  wrote:

> The releases are demanding, specially major versions, so thanks for all
> the effort Nick.
>
> I would like to commit SOLR-8423 and SOLR-8725 to 6.0. They aren't
> blockers but are bugs and the patch for both are ready.
>
> If you are fine with it, I'll commit to 6.0 else, I'd push it out with
> 6.1. SOLR-8725 is certainly something that I'd push out with 5.5.1 (and if
> 5.6 happens, with 5.6).
>
> On Thu, Mar 3, 2016 at 10:28 AM, Nicholas Knize  wrote:
>
>> > hours (acceptable), not days (unacceptable).
>>
>> ++ I definitely agree with this. And it looks like the time period here
>> was less than a day?
>>
>> >  there were multiple questions about it from more than one person over
>> a couple days
>>
>> ?? I do not see these questions? They're certainly not in this thread
>> which is where all of the branching was being discussed. If there are
>> separate conversation threads then I think as the RM I should know about
>> them?
>>
>> > If you’re going to be AFK for extended periods, please let people know.
>>
>> ++ This is definitely important. I'm not sure I agree that < 24 hours
>> constitutes an extended period in this case. Especially given that its the
>> first major release on the git infrastructure?
>>
>> Regardless, thank you to everyone that helped settle these branches.
>>
>> - Nick
>>
>>
>>
>> On Thu, Mar 3, 2016 at 12:09 PM, Steve Rowe  wrote:
>>
>>> First, Nick, thanks for your RM work.
>>>
>>> > On Mar 3, 2016, at 12:53 PM, Nicholas Knize  wrote:
>>>
>>> > > The mistake was to freeze the 6x branch in the first place. The
>>> release branch is the one which should be frozen.
>>> >
>>> >  I certainly agree with this. However, over a week ago there was a
>>> request to hold off on creating the 6_0 branch until Jenkins settled with a
>>> 6x. I received no push back on this suggestion so this was the plan that
>>> was executed (several days after that request was sent).
>>>
>>> I guess I took this as meaning a freeze on *branch_6x* of hours
>>> (acceptable), not days (unacceptable).
>>>
>>> > I think Mike is suggesting, and I agree with this, there needs to be a
>>> reasonable amount of time given for someone to respond.
>>>
>>>
>>> My impression was that you were intentionally ignoring questions about
>>> creation of the 6.0 branch, since there were multiple questions about it
>>> from more than one person over a couple days with no response from you, but
>>> meanwhile, you responded on other threads.  (Sorry, I haven’t gone back and
>>> found the exact messages that left me with this impression, so I guess I
>>> could be wrong.)
>>>
>>> One of the RM’s most important responsibilities is timely
>>> communication.  If you’re going to be AFK for extended periods, please let
>>> people know.
>>>
>>> --
>>> Steve
>>> www.lucidworks.com
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>
>
> --
> Anshum Gupta
>


[jira] [Updated] (SOLR-7010) Remove facet.date client functionality from master

2016-03-03 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-7010:
-
Summary: Remove facet.date client functionality from master  (was: 
Deprecate facet.date client functionality, and remove from master)

> Remove facet.date client functionality from master
> --
>
> Key: SOLR-7010
> URL: https://issues.apache.org/jira/browse/SOLR-7010
> Project: Solr
>  Issue Type: Task
>Reporter: Alan Woodward
>Assignee: Steve Rowe
> Fix For: master, 6.0
>
> Attachments: SOLR-7010.patch, SOLR-7010.patch
>
>
> See comments in SOLR-6976.  We should log a warning when a client using 5.x 
> includes a facet.date param (both in the solr log, and in the response), and 
> remove the functionality entirely from trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Anshum Gupta
The releases are demanding, especially major versions, so thanks for all the
effort, Nick.

I would like to commit SOLR-8423 and SOLR-8725 to 6.0. They aren't blockers
but are bugs, and the patches for both are ready.

If you are fine with it, I'll commit to 6.0; otherwise, I'd push it out with 6.1.
SOLR-8725 is certainly something that I'd push out with 5.5.1 (and if 5.6
happens, with 5.6).

On Thu, Mar 3, 2016 at 10:28 AM, Nicholas Knize  wrote:

> > hours (acceptable), not days (unacceptable).
>
> ++ I definitely agree with this. And it looks like the time period here
> was less than a day?
>
> >  there were multiple questions about it from more than one person over
> a couple days
>
> ?? I do not see these questions? They're certainly not in this thread
> which is where all of the branching was being discussed. If there are
> separate conversation threads then I think as the RM I should know about
> them?
>
> > If you’re going to be AFK for extended periods, please let people know.
>
> ++ This is definitely important. I'm not sure I agree that < 24 hours
> constitutes an extended period in this case. Especially given that its the
> first major release on the git infrastructure?
>
> Regardless, thank you to everyone that helped settle these branches.
>
> - Nick
>
>
>
> On Thu, Mar 3, 2016 at 12:09 PM, Steve Rowe  wrote:
>
>> First, Nick, thanks for your RM work.
>>
>> > On Mar 3, 2016, at 12:53 PM, Nicholas Knize  wrote:
>>
>> > > The mistake was to freeze the 6x branch in the first place. The
>> release branch is the one which should be frozen.
>> >
>> >  I certainly agree with this. However, over a week ago there was a
>> request to hold off on creating the 6_0 branch until Jenkins settled with a
>> 6x. I received no push back on this suggestion so this was the plan that
>> was executed (several days after that request was sent).
>>
>> I guess I took this as meaning a freeze on *branch_6x* of hours
>> (acceptable), not days (unacceptable).
>>
>> > I think Mike is suggesting, and I agree with this, there needs to be a
>> reasonable amount of time given for someone to respond.
>>
>>
>> My impression was that you were intentionally ignoring questions about
>> creation of the 6.0 branch, since there were multiple questions about it
>> from more than one person over a couple days with no response from you, but
>> meanwhile, you responded on other threads.  (Sorry, I haven’t gone back and
>> found the exact messages that left me with this impression, so I guess I
>> could be wrong.)
>>
>> One of the RM’s most important responsibilities is timely communication.
>> If you’re going to be AFK for extended periods, please let people know.
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


-- 
Anshum Gupta


[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2016-03-03 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178389#comment-15178389
 ] 

Ben Manes commented on SOLR-8241:
-

Using the metrics library should be really easy. There are two simple 
implementation approaches:

1. Use the same approach as [Guava 
metrics|http://antrix.net/posts/2014/codahale-metrics-guava-cache] that polls 
the cache's stats. Caffeine is the next gen, so it has a nearly identical API.
2. Use a custom 
[StatsCounter|http://static.javadoc.io/com.github.ben-manes.caffeine/caffeine/2.2.2/com/github/benmanes/caffeine/cache/stats/StatsCounter.html]
 and {{Caffeine.recordStats(statsCounter)}} that records directly into the 
metrics. This rejected feature 
[request|https://github.com/google/guava/issues/2209#issuecomment-153290342] 
shows an example of that, though I'd return a {{disabledStatsCounter()}} 
instead of throwing an exception if polled.

The only annoyance is that neither Guava nor Caffeine bothered to include a {{put}} 
statistic. That was partially an oversight and partially because we really 
wanted everyone to load through the cache (put is often an anti-pattern due to 
races). I forgot to add it in with v2, and since it is an API change, semver 
would require that it be in v3, or maybe we can use a [default 
method|https://blog.idrsolutions.com/2015/01/java-8-default-methods-explained-5-minutes/]
 hack to sneak it into v2.

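For illustration, a minimal Java sketch of approach 1 (polling the cache's stats
snapshot through codahale/Dropwizard gauges); the metric names and registry wiring
are assumptions, not part of any SOLR-8241 patch:

import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineStatsGauges {
  public static void main(String[] args) {
    MetricRegistry registry = new MetricRegistry();

    Cache<String, Object> cache = Caffeine.newBuilder()
        .maximumSize(10_000)
        .recordStats()                     // enable hit/miss/eviction counters
        .build();

    // Each gauge polls a fresh CacheStats snapshot when a reporter asks for it.
    registry.register("cache.hitRate",
        (Gauge<Double>) () -> cache.stats().hitRate());
    registry.register("cache.evictionCount",
        (Gauge<Long>) () -> cache.stats().evictionCount());
    registry.register("cache.averageLoadPenalty",
        (Gauge<Double>) () -> cache.stats().averageLoadPenalty());

    cache.put("q", "result");
    cache.getIfPresent("q");               // hit
    cache.getIfPresent("missing");         // miss
    System.out.println("hit rate = " + cache.stats().hitRate());
  }
}
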
> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: SOLR-8241.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Possibly spoofed] Re: Solr/Lucene 6.x: Multiple public Query classes not immutable (yet)

2016-03-03 Thread Nicholas Knize
Hi Luc, I'm OK with you backporting this to 6.0. I think it's important for
shoring up the API.

On Thu, Mar 3, 2016 at 8:24 AM, Adrien Grand  wrote:

>
>
> On Thu, Mar 3, 2016 at 3:13 PM, Vanlerberghe, Luc <
> luc.vanlerber...@bvdinfo.com> wrote:
>
>> -  I didn’t leave any public MultiPhraseQuery constructors like
>> you did for PhraseQuery.  Adding a few afterwards shouldn’t break anything
>> though.
>>
>
> I think it's good this way: I added them for PhraseQuery because I thought
> it should be easy to create simple phrase queries but maybe it was a
> mistake. MultiPhraseQuery on the other hand is an expert query so I'm
> totally fine with not having convenience constructors.
>
>
>> -  The private termArrays and positions members could become
>> fixed arrays like you did for PhraseQuery.  This would change the signature
>> of getTermArrays() and getPositions(), so perhaps it should happen now…
>>
> Actually I think returning a list is better: with arrays you need to
> perform a deep copy if you want to make sure that the user cannot change
> the internal state of the query. We could keep arrays internally and call
> Collections.unmodifiableList(Arrays.asList(termArrays)) when returning the
> terms to the user?
>
>

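For illustration, a minimal Java sketch of the accessor pattern Adrien suggests
(keep arrays internally, hand out an unmodifiable List view); the class and field
names are placeholders, not the actual MultiPhraseQuery code:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class TermsHolderExample {
  private final String[] termArrays;               // internal state stays an array

  public TermsHolderExample(String... termArrays) {
    this.termArrays = termArrays.clone();          // defensive copy on the way in
  }

  public List<String> getTermArrays() {
    // Read-only view: callers cannot mutate the internal state,
    // and no deep copy is needed on every call.
    return Collections.unmodifiableList(Arrays.asList(termArrays));
  }

  public static void main(String[] args) {
    TermsHolderExample q = new TermsHolderExample("lucene", "solr");
    System.out.println(q.getTermArrays());          // [lucene, solr]
    // q.getTermArrays().set(0, "hacked");          // would throw UnsupportedOperationException
  }
}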

[jira] [Comment Edited] (SOLR-8770) BinaryRequestWriter interprets null object in field as literal "NULL" string

2016-03-03 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178363#comment-15178363
 ] 

Shawn Heisey edited comment on SOLR-8770 at 3/3/16 6:52 PM:


Is there any acceptable reason to allow a null field value in a 
SolrInputDocument?  Specifically I'm wondering if we could throw 
IllegalArgumentException if a null object is used on methods like addField and 
setField.

Fixing what I noticed about the binary writer would still be a good idea.



was (Author: elyograg):
Is there any acceptable reason to allow a null field value in a 
SolrInputDocument?  Specifically I'm wondering if we could throw 
IllegalArgumentException if a null object is used on methods like addField and 
setField.

> BinaryRequestWriter interprets null object in field as literal "NULL" string
> 
>
> Key: SOLR-8770
> URL: https://issues.apache.org/jira/browse/SOLR-8770
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.5
>Reporter: Shawn Heisey
>
> From what I've been able to determine, if a null object is added with 
> SolrInputDocument#addField, the xml writer does not include that field in the 
> request, but the binary writer sends the literal string "NULL".
> This became apparent when upgrading SolrJ to 5.5, which uses the binary 
> writer by default.  Switching back to 5.4.1 fixed it, until I forced the 
> 5.4.1 client to use the binary writer.  My source data is MySQL.  JDBC is 
> where the null objects are coming from.
> Adding a null check to my doc.addField call has fixed my program with the 5.5 
> client, but this is likely to catch other upgrading users off guard.
> At the very least, the 5.5.1 CHANGES.txt file needs a note, but I believe the 
> behavior of the binary writer should match the behavior of the xml writer.
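
For illustration, a minimal SolrJ-style sketch of the client-side null check
mentioned above; the helper and field names are assumptions, not part of any
patch on this issue:

import org.apache.solr.common.SolrInputDocument;

public class NullSafeAddField {
  // The workaround described above: skip null values instead of passing them
  // to addField, so the binary writer never sees them.
  static void addIfNotNull(SolrInputDocument doc, String name, Object value) {
    if (value != null) {
      doc.addField(name, value);
    }
  }

  public static void main(String[] args) {
    SolrInputDocument doc = new SolrInputDocument();
    addIfNotNull(doc, "id", "1");
    addIfNotNull(doc, "title", null);          // e.g. a NULL column coming back from JDBC
    System.out.println(doc.getFieldNames());   // only "id" is present
  }
}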



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178366#comment-15178366
 ] 

ASF subversion and git services commented on SOLR-8725:
---

Commit 6de2b7dbd17fc70fc9b2b053fe2628534116309b in lucene-solr's branch 
refs/heads/master from anshum
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6de2b7d ]

SOLR-8725: Allow hyphen in shard, collection, core, and alias names but not the 
first char

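For illustration, a small Java sketch of the rule in the commit message (hyphens
allowed, but not as the first character); the pattern is illustrative, not the
actual Solr validation code:

import java.util.regex.Pattern;

public class IdentifierCheck {
  // Periods, underscores, and alphanumerics anywhere; hyphens allowed except first.
  private static final Pattern VALID_ID =
      Pattern.compile("^[._A-Za-z0-9][._A-Za-z0-9-]*$");

  public static void main(String[] args) {
    System.out.println(VALID_ID.matcher("marc-profiler_shard1_replica1").matches()); // true
    System.out.println(VALID_ID.matcher("-starts-with-hyphen").matches());           // false
  }
}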

> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Attachments: SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8770) BinaryRequestWriter interprets null object in field as literal "NULL" string

2016-03-03 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178363#comment-15178363
 ] 

Shawn Heisey commented on SOLR-8770:


Is there any acceptable reason to allow a null field value in a 
SolrInputDocument?  Specifically I'm wondering if we could throw 
IllegalArgumentException if a null object is used on methods like addField and 
setField.

> BinaryRequestWriter interprets null object in field as literal "NULL" string
> 
>
> Key: SOLR-8770
> URL: https://issues.apache.org/jira/browse/SOLR-8770
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.5
>Reporter: Shawn Heisey
>
> From what I've been able to determine, if a null object is added with 
> SolrInputDocument#addField, the xml writer does not include that field in the 
> request, but the binary writer sends the literal string "NULL".
> This became apparent when upgrading SolrJ to 5.5, which uses the binary 
> writer by default.  Switching back to 5.4.1 fixed it, until I forced the 
> 5.4.1 client to use the binary writer.  My source data is MySQL.  JDBC is 
> where the null objects are coming from.
> Adding a null check to my doc.addField call has fixed my program with the 5.5 
> client, but this is likely to catch other upgrading users off guard.
> At the very least, the 5.5.1 CHANGES.txt file needs a note, but I believe the 
> behavior of the binary writer should match the behavior of the xml writer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2016-03-03 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178352#comment-15178352
 ] 

Shawn Heisey commented on SOLR-8241:


I'm pretty sure that no matter what benchmarks we run, your implementation will 
be MUCH better than my current implementation.  If we put this in, which I am 
in favor of doing as soon as we can, I believe it should replace LFUCache.

Code simplicity alone probably makes it better than my improved implementation 
that isn't committed (SOLR-3393).

I wonder if it might be possible for Solr's cache implementations (including 
this one) to use the codahale metrics library (already in Solr) to record 
statistics about eviction time.  Evictions are the pain point for a cache 
implementation, and being able to compare results with different cache 
implementations would be awesome.


> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: SOLR-8241.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8731) onException behavior in search components

2016-03-03 Thread Dean Gurvitz (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178314#comment-15178314
 ] 

Dean Gurvitz commented on SOLR-8731:


Hello, I would really appreciate it if someone took the time to inspect this 
issue. The patch is very small and simple, and reviewing it shouldn't take long.

> onException behavior in search components
> -
>
> Key: SOLR-8731
> URL: https://issues.apache.org/jira/browse/SOLR-8731
> Project: Solr
>  Issue Type: New Feature
>  Components: SearchComponents - other
>Affects Versions: master
>Reporter: Dean Gurvitz
>Priority: Minor
>  Labels: features, newdev
> Fix For: master
>
> Attachments: SOLR-8731.patch
>
>
> The idea is to allow search components to execute logic in case an exception 
> is thrown while processing a query.
> A new "onException" function can be added to the SearchComponent class. Then, 
> parts of SearchHandler's handle-request functions can be wrapped in a 
> try-catch block, where onException is called within the catch section on all 
> relevant SearchComponents.
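
For illustration, a minimal Java sketch of the pattern the issue describes; the
ComponentLike interface and handler loop are stand-ins, not the actual Solr
SearchComponent or SearchHandler classes:

import java.util.Arrays;
import java.util.List;

public class OnExceptionSketch {
  // Illustrative stand-in for SearchComponent with the proposed hook.
  interface ComponentLike {
    void process() throws Exception;
    default void onException(Exception e) {}    // no-op unless a component overrides it
  }

  // Illustrative stand-in for the SearchHandler request loop.
  static void handleRequest(List<ComponentLike> components) {
    try {
      for (ComponentLike c : components) {
        c.process();
      }
    } catch (Exception e) {
      // Give every relevant component a chance to react to the failure.
      for (ComponentLike c : components) {
        c.onException(e);
      }
    }
  }

  public static void main(String[] args) {
    ComponentLike failing = () -> { throw new Exception("boom"); };
    handleRequest(Arrays.asList(failing));
    System.out.println("the exception was routed to onException instead of escaping");
  }
}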



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Nicholas Knize
> hours (acceptable), not days (unacceptable).

++ I definitely agree with this. And it looks like the time period here was
less than a day?

>  there were multiple questions about it from more than one person over a
couple days

?? I do not see these questions? They're certainly not in this thread which
is where all of the branching was being discussed. If there are separate
conversation threads then I think as the RM I should know about them?

> If you’re going to be AFK for extended periods, please let people know.

++ This is definitely important. I'm not sure I agree that < 24 hours
constitutes an extended period in this case. Especially given that it's the
first major release on the git infrastructure?

Regardless, thank you to everyone that helped settle these branches.

- Nick



On Thu, Mar 3, 2016 at 12:09 PM, Steve Rowe  wrote:

> First, Nick, thanks for your RM work.
>
> > On Mar 3, 2016, at 12:53 PM, Nicholas Knize  wrote:
>
> > > The mistake was to freeze the 6x branch in the first place. The
> release branch is the one which should be frozen.
> >
> >  I certainly agree with this. However, over a week ago there was a
> request to hold off on creating the 6_0 branch until Jenkins settled with a
> 6x. I received no push back on this suggestion so this was the plan that
> was executed (several days after that request was sent).
>
> I guess I took this as meaning a freeze on *branch_6x* of hours
> (acceptable), not days (unacceptable).
>
> > I think Mike is suggesting, and I agree with this, there needs to be a
> reasonable amount of time given for someone to respond.
>
>
> My impression was that you were intentionally ignoring questions about
> creation of the 6.0 branch, since there were multiple questions about it
> from more than one person over a couple days with no response from you, but
> meanwhile, you responded on other threads.  (Sorry, I haven’t gone back and
> found the exact messages that left me with this impression, so I guess I
> could be wrong.)
>
> One of the RM’s most important responsibilities is timely communication.
> If you’re going to be AFK for extended periods, please let people know.
>
> --
> Steve
> www.lucidworks.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-Tests-6.x - Build # 3 - Still Failing

2016-03-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/3/

1 tests failed.
FAILED:  
org.apache.lucene.index.TestBackwardsCompatibility.testAllVersionsTested

Error Message:
Extra backcompat test files:   5.5.0-cfs 

Stack Trace:
java.lang.AssertionError: Extra backcompat test files:
  5.5.0-cfs

at 
__randomizedtesting.SeedInfo.seed([592202A6A2B9E7D1:49FDEDE6162EF1FD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.index.TestBackwardsCompatibility.testAllVersionsTested(TestBackwardsCompatibility.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 15 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 

Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Steve Rowe
First, Nick, thanks for your RM work.

> On Mar 3, 2016, at 12:53 PM, Nicholas Knize  wrote:

> > The mistake was to freeze the 6x branch in the first place. The release 
> > branch is the one which should be frozen.
> 
>  I certainly agree with this. However, over a week ago there was a request to 
> hold off on creating the 6_0 branch until Jenkins settled with a 6x. I 
> received no push back on this suggestion so this was the plan that was 
> executed (several days after that request was sent). 

I guess I took this as meaning a freeze on *branch_6x* of hours (acceptable), 
not days (unacceptable).

> I think Mike is suggesting, and I agree with this, there needs to be a 
> reasonable amount of time given for someone to respond.


My impression was that you were intentionally ignoring questions about creation 
of the 6.0 branch, since there were multiple questions about it from more than 
one person over a couple days with no response from you, but meanwhile, you 
responded on other threads.  (Sorry, I haven’t gone back and found the exact 
messages that left me with this impression, so I guess I could be wrong.)

One of the RM’s most important responsibilities is timely communication.  If 
you’re going to be AFK for extended periods, please let people know.

--
Steve
www.lucidworks.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Dawid Weiss
I wouldn't read it that way, to be honest. This isn't so convoluted:

https://git-scm.com/docs/git-diff

Quote:

git diff [--options] <commit> <commit> [--] [<path>…]
This is to view the changes between two arbitrary <commit>.

git diff [--options] <commit>..<commit> [--] [<path>…]
This is synonymous to the previous form.

I use it all the time (never used the triple dot notation, to be
honest). It is what you expect it to be -- it's essentially a diff of
changes between two commits (as if you checked them out into two
folders and ran a recursive diff on both).

Dawid

On Thu, Mar 3, 2016 at 6:58 PM, Steve Rowe  wrote:
> Nearly all of my git knowledge is from possibly wrongheaded stack overflow 
> posts - in this case 
>  
> where chaiyachaiya said "git diff b1..b2, show you what is in b2 that is not 
> in b1. So git diff b1..b2 and git diff b2..b1 will not have the same output.”
>
> Re-reading that comment I see I missed the next sentence: "On the contrary, 
> git b1...b2, show you what is in b1 XOR b2 (either b1 or b2 but not both). So 
> git b1...b2 is equal to git b2...b1"
>
> Shoulda used the triple-dot syntax instead of the double-dot syntax.  Git, I 
> love you.
>
> --
> Steve
> www.lucidworks.com
>
>> On Mar 3, 2016, at 12:49 PM, Dawid Weiss  wrote:
>>
>>> (with minor solr/CHANGES.txt fixups) and then diffing both directions:
>>>
>>> git diff branch_6_0..branch_6x
>>> git diff branch_6x..branch_6_0
>>
>> Wait, can it ever be assymetric? I'd say it's impossible -- it should
>> always be a "reverse" diff of the another?
>>
>> Dawid
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Steve Rowe
Nearly all of my git knowledge is from possibly wrongheaded stack overflow 
posts - in this case 
 
where chaiyachaiya said "git diff b1..b2, show you what is in b2 that is not in 
b1. So git diff b1..b2 and git diff b2..b1 will not have the same output.”

Re-reading that comment I see I missed the next sentence: "On the contrary, git 
b1...b2, show you what is in b1 XOR b2 (either b1 or b2 but not both). So git 
b1...b2 is equal to git b2...b1"

Shoulda used the triple-dot syntax instead of the double-dot syntax.  Git, I 
love you.

--
Steve
www.lucidworks.com

> On Mar 3, 2016, at 12:49 PM, Dawid Weiss  wrote:
> 
>> (with minor solr/CHANGES.txt fixups) and then diffing both directions:
>> 
>> git diff branch_6_0..branch_6x
>> git diff branch_6x..branch_6_0
> 
> Wait, can it ever be assymetric? I'd say it's impossible -- it should
> always be a "reverse" diff of the another?
> 
> Dawid
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178236#comment-15178236
 ] 

Uwe Schindler commented on LUCENE-7063:
---

Thanks! I was about to mention this here, too. TestNumericUtils was written 
like 7 years ago (without randomization), but the tests are good. They check 
that compareTo of double/float and the sortable long/int behave identically, 
especially with Infinity and NaN. We should really test this. We can also 
randomize the tests now: create a huge number of floats/doubles and sort them. 
After that, convert every value from the sorted random array to a sortable 
long/int and check that they are also increasing. The NaN and similar special 
value tests should of course stay as is.

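For illustration, a rough Java sketch of the randomized ordering check described
above, assuming Lucene 6.x's org.apache.lucene.util.NumericUtils.doubleToSortableLong;
the NaN/Infinity special cases would keep their dedicated, non-randomized tests:

import java.util.Arrays;
import java.util.Random;
import org.apache.lucene.util.NumericUtils;

public class SortableOrderCheck {
  public static void main(String[] args) {
    Random random = new Random();
    double[] values = new double[100_000];
    for (int i = 0; i < values.length; i++) {
      values[i] = Double.longBitsToDouble(random.nextLong());   // random bit patterns
    }
    // Drop NaNs for the ordering check; they keep their own dedicated tests.
    double[] sorted = Arrays.stream(values).filter(d -> !Double.isNaN(d)).toArray();
    Arrays.sort(sorted);
    for (int i = 1; i < sorted.length; i++) {
      long prev = NumericUtils.doubleToSortableLong(sorted[i - 1]);
      long curr = NumericUtils.doubleToSortableLong(sorted[i]);
      if (prev > curr) {
        throw new AssertionError("sortable long order breaks double order at index " + i);
      }
    }
    System.out.println("sortable long order matched double order for " + sorted.length + " values");
  }
}
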
> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7063.patch
>
>
> Old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but it's confusing when upgrading because NumericUtils and 
> LegacyNumericUtils have overlaps in the APIs.
> One issue is they share some exact methods that are completely unrelated to 
> this encoding (e.g. floatToSortableInt). The method is just duplication and 
> worse, most Lucene code is still calling it from LegacyNumericUtils, even 
> stuff like faceting code using it with docvalues.
> Another issue is the new NumericUtils methods (which use full byte range) 
> have vague names, no javadocs, expose helper methods as public unnecessarily, 
> and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap. 
> LegacyNumericUtils should only contain legacy stuff!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Nicholas Knize
> The mistake was to freeze the 6x branch in the first place. The release
branch is the one which should be frozen.

 I certainly agree with this. However, over a week ago there was a request
to hold off on creating the 6_0 branch until Jenkins settled with a 6x. I
received no push back on this suggestion so this was the plan that was
executed (several days after that request was sent).

> I specifically asked the RM to cut the branch to let others progress but I
received no replies -- which is why I was forced to do it myself.

Your email was sent to me yesterday, and I was traveling, so I didn't see it
until today (right when the branch was created, in fact). Because of time
zones and other jobs, I usually like to give people > 24 hours to reply.
That being said, if the collective needs to move forward I'm certainly not
insulted if someone else cuts a branch. I don't think any RM should be.

> The rest of the problem was because I am new to Git

I am not new to git, so I had no problem creating the 6_0 branch; it just
seems I didn't get online soon enough.

I think Mike is suggesting, and I agree with this, there needs to be a
reasonable amount of time given for someone to respond.

- Nick


On Thu, Mar 3, 2016 at 10:30 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> Mike,
>
> I'll fix the TestBackwardsCompatibility. The mistake was to freeze the
> 6x branch in the first place. The release branch is the one which
> should be frozen. I specifically asked the RM to cut the branch to let
> others progress but I received no replies -- which is why I was forced
> to do it myself. In future, the RM should keep this in mind and not
> block others. The rest of the problem was because I am new to Git --
> in subversion a release branch is always copied from the server so
> pulling latest changes locally before creating the branch did not
> cross my mind.
>
> On Thu, Mar 3, 2016 at 9:46 PM, Michael McCandless
>  wrote:
> > Shalin,
> >
> > In the future please don't jump the gun like this?
> >
> > It has caused a lot of unnecessary chaos.  It should be the RM, and
> > only the RM, that is doing things like creating release branches,
> > bumping versions, etc., at release time.
> >
> > Also, your changes to bump the version on 6.x seem to be causing
> > TestBackwardsCompatibility to be failing.  Can you please fix that?
> > In the future, when you are RM, please run tests when bumping versions
> > before pushing.
> >
> > A major release is hard enough with only one chef.
> >
> > Mike McCandless
> >
> > http://blog.mikemccandless.com
> >
> >
> > On Thu, Mar 3, 2016 at 8:52 AM, Shalin Shekhar Mangar
> >  wrote:
> >> Hmm I think I created the branch without pulling the latest code. I'll
> fix.
> >>
> >> On Thu, Mar 3, 2016 at 6:41 PM, Robert Muir  wrote:
> >>> This is missing a bunch of yesterday's branch_6x changes. Some of
> >>> david smiley's spatial work, at least one of my commits.
> >>>
> >>> On Thu, Mar 3, 2016 at 5:10 AM, Shalin Shekhar Mangar
> >>>  wrote:
>  FYI, I have created the branch_6_0 so that we can continue to commit
>  stuff intended for 6.1 on master and branch_6x. I have also added the
>  6.1.0 version on branch_6x and master.
> 
>  On Wed, Mar 2, 2016 at 9:51 PM, Shawn Heisey 
> wrote:
> > On 3/2/2016 4:19 AM, Alan Woodward wrote:
> >> Should we create a separate branch_6_0 branch for the
> feature-freeze?
> >>  I have stuff to push into master and that should eventually make it
> >> into 6.1, and it will be easy to forget to backport stuff if
> there's a
> >> week before I can do that…
> >
> > +1
> >
> > When I saw Nick's email about branch_6x being feature frozen, my
> first
> > thought was that we don't (and really can't) feature freeze the
> stable
> > branch -- isn't new feature development (for the next minor release
> in
> > the current major version) the entire purpose of branch_Nx?
> >
> > A feature freeze on a specific minor version does make sense.  I've
> seen
> > a couple of people say that we have, but there are also a few
> messages
> > from people saying that they want to include new functionality in
> 6.0.
> > I expect that backporting almost anything from branch_6x to
> branch_6_0
> > will be relatively easy, so it may be a good idea to just create the
> new
> > branch.
> >
> > Thanks,
> > Shawn
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> 
> 
> 
>  --
>  Regards,
>  Shalin Shekhar Mangar.
> 
>  -
>  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>  

Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Dawid Weiss
> (with minor solr/CHANGES.txt fixups) and then diffing both directions:
>
> git diff branch_6_0..branch_6x
> git diff branch_6x..branch_6_0

Wait, can it ever be assymetric? I'd say it's impossible -- it should
always be a "reverse" diff of the another?

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Steve Rowe
I compared the state of branch_6x and branch_6_0 after Mike merged LUCENE-7062 
by cherry-picking Shalin's 6.1-only commits into my local branch_6_0:

e4712bb028849f9a9b202651728c1f5c0a224374 (SOLR-8722)  
97db2d0b932ceae17fc6ab442af0b32f54928e05 (Adding version 6.1.0)
d346af3994fd2784c8550ccfea1f1d22afa0cd32 (SOLR-7516)

(with minor solr/CHANGES.txt fixups) and then diffing both directions:

git diff branch_6_0..branch_6x
git diff branch_6x..branch_6_0

and there were zero differences.

I think we’re good.

--
Steve
www.lucidworks.com

> On Mar 3, 2016, at 10:52 AM, Steve Rowe  wrote:
> 
> Shalin, I will take a look.
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Mar 3, 2016, at 10:25 AM, Shalin Shekhar Mangar  
>> wrote:
>> 
>> I cherry-picked the following missing commits. I think I got all of
>> the missing ones but another set of eyes would be good.
>> 
>> Cherry-pick successful
>>  3cbc48e LUCENE-7059: always visit 1D points in sorted
>> order; fix tie-break bug in BKDWriter; fix BKDWriter to pass on
>> maxMBSortInHeap to the OfflineSorter too
>>  e1033d9 SOLR-8145: Fix position of OOM killer script when
>> starting Solr in the background
>>  ddd019f SOLR-8145: mention fix in solr/CHANGES.txt
>>  25cc48b LUCENE-7059: remove MultiPointValues
>>  8eada27 LUCENE-7061: fix remaining api issues with XYZPoint classes
>>  b90dbd4 LUCENE-7060: Spatial4j 0.6 upgrade. Package
>> com.spatial4j.core -> org.locationtech.spatial4j (cherry picked from
>> commit 569b6ca)
>>  6dcb01c SOLR-8764: test schema-latest.xml spatial dist
>> units should be kilometers (no test uses yet?) (cherry picked from
>> commit deb6a49)
>> 
>> On Thu, Mar 3, 2016 at 7:22 PM, Shalin Shekhar Mangar
>>  wrote:
>>> Hmm I think I created the branch without pulling the latest code. I'll fix.
>>> 
>>> On Thu, Mar 3, 2016 at 6:41 PM, Robert Muir  wrote:
 This is missing a bunch of yesterday's branch_6x changes. Some of
 david smiley's spatial work, at least one of my commits.
 
 On Thu, Mar 3, 2016 at 5:10 AM, Shalin Shekhar Mangar
  wrote:
> FYI, I have created the branch_6_0 so that we can continue to commit
> stuff intended for 6.1 on master and branch_6x. I have also added the
> 6.1.0 version on branch_6x and master.
> 
> On Wed, Mar 2, 2016 at 9:51 PM, Shawn Heisey  wrote:
>> On 3/2/2016 4:19 AM, Alan Woodward wrote:
>>> Should we create a separate branch_6_0 branch for the feature-freeze?
>>> I have stuff to push into master and that should eventually make it
>>> into 6.1, and it will be easy to forget to backport stuff if there's a
>>> week before I can do that…
>> 
>> +1
>> 
>> When I saw Nick's email about branch_6x being feature frozen, my first
>> thought was that we don't (and really can't) feature freeze the stable
>> branch -- isn't new feature development (for the next minor release in
>> the current major version) the entire purpose of branch_Nx?
>> 
>> A feature freeze on a specific minor version does make sense.  I've seen
>> a couple of people say that we have, but there are also a few messages
>> from people saying that they want to include new functionality in 6.0.
>> I expect that backporting almost anything from branch_6x to branch_6_0
>> will be relatively easy, so it may be a good idea to just create the new
>> branch.
>> 
>> Thanks,
>> Shawn
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> 
> 
> --
> Regards,
> Shalin Shekhar Mangar.
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
>>> 
>>> 
>>> 
>>> --
>>> Regards,
>>> Shalin Shekhar Mangar.
>> 
>> 
>> 
>> -- 
>> Regards,
>> Shalin Shekhar Mangar.
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178212#comment-15178212
 ] 

Michael McCandless commented on LUCENE-7063:


Thanks [~rcmuir]!

> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7063.patch
>
>
> Old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but it's confusing when upgrading because NumericUtils and 
> LegacyNumericUtils have overlaps in the APIs.
> One issue is they share some exact methods that are completely unrelated to 
> this encoding (e.g. floatToSortableInt). The method is just duplication and 
> worse, most Lucene code is still calling it from LegacyNumericUtils, even 
> stuff like faceting code using it with docvalues.
> Another issue is the new NumericUtils methods (which use full byte range) 
> have vague names, no javadocs, expose helper methods as public unnecessarily, 
> and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap. 
> LegacyNumericUtils should only contain legacy stuff!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7063) NumericUtils vs LegacyNumericUtils chaos with 6.0

2016-03-03 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15178199#comment-15178199
 ] 

Robert Muir commented on LUCENE-7063:
-

I'm going to update this patch after investigating the tests. 
TestLegacyNumericUtils has a lot of nice unit tests, and we should make sure 
none of them "get lost": most of them can probably still be ported forward to 
the new full byte[] range encoding. I think it makes sense to solve all of that 
here, so we know floats and doubles really work correctly, along with the other 
things that are super important for core numeric fields.
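
[Editor's note] To make the "don't lose the tests" point concrete, below is a 
sketch of the kind of property test that could be carried forward to a full 
byte[]-range encoding: random values must round-trip, and the unsigned byte[] 
order must match the numeric order. The encode/decode helpers are stand-ins 
written for this example, not the actual NumericUtils API.

    import java.util.Random;

    // Sketch of a property test for a full byte[]-range numeric encoding:
    // values must round-trip, and unsigned lexicographic order of the encoded
    // bytes must match signed int order. The helpers are illustrative only.
    public class SortableBytesSketch {

      // Flip the sign bit and write big-endian so that unsigned byte
      // comparison matches signed int comparison.
      static byte[] encode(int value) {
        int flipped = value ^ 0x80000000;
        return new byte[] {
          (byte) (flipped >>> 24), (byte) (flipped >>> 16),
          (byte) (flipped >>> 8),  (byte) flipped
        };
      }

      static int decode(byte[] b) {
        int flipped = ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
                    | ((b[2] & 0xFF) << 8)  |  (b[3] & 0xFF);
        return flipped ^ 0x80000000;
      }

      // Unsigned lexicographic comparison, i.e. the order a terms dictionary sees.
      static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length; i++) {
          int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
          if (cmp != 0) return cmp;
        }
        return 0;
      }

      public static void main(String[] args) {
        Random r = new Random(42);
        for (int i = 0; i < 100000; i++) {
          int x = r.nextInt(), y = r.nextInt();
          if (decode(encode(x)) != x) {
            throw new AssertionError("round trip failed: " + x);
          }
          int byteOrder = Integer.signum(compareUnsigned(encode(x), encode(y)));
          int intOrder = Integer.signum(Integer.compare(x, y));
          if (byteOrder != intOrder) {
            throw new AssertionError("order mismatch: " + x + " vs " + y);
          }
        }
        System.out.println("round-trip and ordering properties hold");
      }
    }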

> NumericUtils vs LegacyNumericUtils chaos with 6.0
> -
>
> Key: LUCENE-7063
> URL: https://issues.apache.org/jira/browse/LUCENE-7063
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7063.patch
>
>
> The old prefix-coded terms helper functions are still available in 
> LegacyNumericUtils, but it's confusing when upgrading because NumericUtils and 
> LegacyNumericUtils overlap in their APIs.
> One issue is that they share some identical methods that are completely 
> unrelated to this encoding (e.g. floatToSortableInt). That method is pure 
> duplication and, worse, most Lucene code still calls it from 
> LegacyNumericUtils, even faceting code that uses it with doc values.
> Another issue is that the new NumericUtils methods (which use the full byte 
> range) have vague names, no javadocs, expose helper methods as public 
> unnecessarily, and cause general confusion.
> I don't think NumericUtils and LegacyNumericUtils should overlap: 
> LegacyNumericUtils should contain only legacy code!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.0.0 Release Branch

2016-03-03 Thread Michael McCandless
On Thu, Mar 3, 2016 at 10:58 AM, Robert Muir  wrote:

> I think Mike has not merged his CheckIndex work, but I am surprised to
> see merge conflicts?

OK I was able to merge my 6.x push to 6.0 with no conflicts, a good sign!

> Mike, can you make sure your readInt/readVInt
> mismatch fix and other important bugfixes are not missing here?

I did a full source tree diff between 6.0 and master and reviewed all
the changes, and found a pre-existing minor bug (fixed), but otherwise
it looks like all master issues were successfully backported.

Thanks for fixing things, Shalin.

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_72) - Build # 4 - Still Failing!

2016-03-03 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/4/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestBackwardsCompatibility.testAllVersionsTested

Error Message:
Extra backcompat test files:   5.5.0-cfs 

Stack Trace:
java.lang.AssertionError: Extra backcompat test files:
  5.5.0-cfs

at 
__randomizedtesting.SeedInfo.seed([CAA5A6BE935B61DE:DA7A49FE27CC77F2]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.index.TestBackwardsCompatibility.testAllVersionsTested(TestBackwardsCompatibility.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4243 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestBackwardsCompatibility -Dtests.method=testAllVersionsTested 
-Dtests.seed=CAA5A6BE935B61DE -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=sr-RS -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.01s J2 | TestBackwardsCompatibility.testAllVersionsTested 
<<<
   [junit4]> Throwable #1: java.lang.AssertionError: Extra backcompat test 
files:
   [junit4]>   5.5.0-cfs
   [junit4]>at 
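
[Editor's note] For anyone puzzled by this failure: testAllVersionsTested is 
essentially a coverage check that every back-compat index archive shipped with 
the tests corresponds to a version the test actually exercises, so "Extra 
backcompat test files: 5.5.0-cfs" means a 5.5.0 index file is present without a 
matching entry. Below is a minimal sketch of that style of check; the 
hard-coded sets and the class name are hypothetical stand-ins, not the real 
test code, which derives both sets dynamically.

    import java.util.Arrays;
    import java.util.Set;
    import java.util.TreeSet;

    // Illustrative "all versions tested" style check: every back-compat index
    // name must map to a declared version. The hard-coded sets are examples,
    // not Lucene's real lists, and the class name is made up for this sketch.
    public class BackcompatCoverageSketch {
      public static void main(String[] args) {
        Set<String> indexNames = new TreeSet<>(Arrays.asList("5.4.1-cfs", "5.5.0-cfs"));
        Set<String> testedVersions = new TreeSet<>(Arrays.asList("5.4.1"));

        Set<String> extra = new TreeSet<>();
        for (String name : indexNames) {
          String version = name.substring(0, name.indexOf('-'));  // "5.5.0-cfs" -> "5.5.0"
          if (!testedVersions.contains(version)) {
            extra.add(name);
          }
        }
        if (!extra.isEmpty()) {
          throw new AssertionError("Extra backcompat test files: " + extra);
        }
      }
    }

Running the sketch fails with an "Extra backcompat test files: [5.5.0-cfs]" 
error analogous to the one above; the fix is to bring the two sets back in 
line (add the missing version entry or drop the stray index file).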

Re: Dropping branch_5x Jenkins jobs

2016-03-03 Thread Michael McCandless
+1 to get 5.5 jobs running again, and not run 5.x jobs unless we
somehow (oddly) want to do a 5.6 in the future.

Mike McCandless

http://blog.mikemccandless.com


On Thu, Mar 3, 2016 at 11:38 AM, Steve Rowe  wrote:
> Assuming there won’t be a 5.6 release, we should drop the 5.x jobs on 
> Jenkins, and disable but keep around the 5.5 jobs, to be used for a 5.5.1 
> release.
>
> FYI, I removed the 5.5 jobs from ASF Jenkins a couple of days ago (the first 
> one accidentally, since I intended only to disable it, but having removed it 
> I continued down that path…); they are easy enough to clone 
> from the existing branch_5x jobs, though.
>
> Thoughts?
>
> --
> Steve
> www.lucidworks.com
>
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


