[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+113) - Build # 16464 - Still Failing!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16464/
Java: 64bit/jdk-9-ea+113 -XX:+UseCompressedOops -XX:+UseParallelGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
  at __randomizedtesting.SeedInfo.seed([765A5BDF1518515:FC9E979E41CA2249]:0)
  at org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
  at org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 4255 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.03s J0 | TestBackwardsCompatibility.testCreateMoreTermsIndex
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J0 | TestBackwardsCompatibility.testCreateNoCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes
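The failure mode here (and in the identical runs further down) is a plain classpath-resource lookup: the back-compat test asks the test framework for a zip of an old-format index by name, and the 6.0.0 index zips are not on the test classpath yet. A minimal self-contained sketch of that kind of lookup (assuming resolution via Class.getResourceAsStream, which is how such "Cannot find resource" IOExceptions typically arise; the class and names below are illustrative, not the framework's code):

{code}
import java.io.IOException;
import java.io.InputStream;

public final class ResourceLookupSketch {

  // Resolve a resource next to the given class; a missing back-compat
  // zip surfaces as exactly this IOException.
  static InputStream getDataInputStream(Class<?> clazz, String name) throws IOException {
    InputStream in = clazz.getResourceAsStream(name);
    if (in == null) {
      throw new IOException("Cannot find resource: " + name);
    }
    return in;
  }

  public static void main(String[] args) throws IOException {
    // Throws, because unsupported.6.0.0-cfs.zip is not on the classpath
    // next to this class.
    getDataInputStream(ResourceLookupSketch.class, "unsupported.6.0.0-cfs.zip");
  }
}
{code}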

[jira] [Comment Edited] (LUCENE-7197) Geo3DPoint test failure

2016-04-08 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233358#comment-15233358
 ] 

Karl Wright edited comment on LUCENE-7197 at 4/9/16 5:19 AM:
-

I augmented [~mikemccand]'s explain output to include relationship information, 
which then leads to the following line in the forensics dump from this failure:

{code}
   [junit4]>   Cell(x=-1.0011088352687925 TO 1.0008262509460042 y=-1.000763585323458 TO 0.007099080851812687 z=0.9826918170382273 TO 0.9944024015823575); Shape relationship = DISJOINT; Point within cell = true; Point within shape = true
{code}

From this it looked plausible that the cell is off the top of the world in z, and thus did not intersect with it for that reason. But:

{code}
   [junit4]>   world bounds=( minX=-1.0011188539924791 maxX=1.0011188539924791 minY=-1.0011188539924791 maxY=1.0011188539924791 minZ=-0.9977622920221051 maxZ=0.9977622920221051
{code}

So I'll have to work a bit harder to see why no intersection is detected.
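For concreteness, the two dumps above can be checked directly: the cell's z range sits entirely inside the world's z bounds, which is why the "off the top of the world" theory does not hold. A quick check in plain Java, using the values from the dump (variable names are just for this illustration):

{code}
// Values copied from the forensics dumps above.
double cellZMin  = 0.9826918170382273;
double cellZMax  = 0.9944024015823575;
double worldZMin = -0.9977622920221051;
double worldZMax = 0.9977622920221051;

// The cell's z interval lies inside [worldZMin, worldZMax], so the cell
// is not clipped away by the world bounds; the DISJOINT result must come
// from somewhere else.
boolean insideWorldZ = cellZMin >= worldZMin && cellZMax <= worldZMax;
System.out.println(insideWorldZ); // true
{code}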



was (Author: kwri...@metacarta.com):
I augmented [~mikemccand]'s explain output to include relationship information, 
which then leads to the following line in the forensics dump from this failure:

{code}
   [junit4]>   Cell(x=-1.0011088352687925 TO 1.0008262509460042 y=-1.000763585323458 TO 0.007099080851812687 z=0.9826918170382273 TO 0.9944024015823575); Shape relationship = DISJOINT; Point within cell = true; Point within shape = true
{code}

From this it looks plausible that the cell is off the top of the world in z, and thus does not intersect with it for that reason. Confirming now.

> Geo3DPoint test failure
> ---
>
> Key: LUCENE-7197
> URL: https://issues.apache.org/jira/browse/LUCENE-7197
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Here's the test failure:
> {code}
> 1 tests failed.
> FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium
> Error Message:
> FAIL: id=174 should have matched but did not   
> shape=GeoCompositeMembershipShape: {[GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.022713796927720124, 
> lon=-0.5815768716211268], [lat=-0.7946950025059678, lon=0.4282667468610472], 
> [lat=0.1408596217595416, lon=-0.6098466679977738], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={3}, holes=[]}, GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.7965414531411178, 
> lon=-1.508912143762057], [lat=-0.7033145536783815, lon=0.4464269851814595], 
> [lat=1.0518471862927206, lon=-2.5948050766629582]], internalEdges={1, 2}, 
> holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
> points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
> [lat=1.0518471862927206, lon=-2.5948050766629582], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={1, 2}, holes=[]}, GeoConcavePolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
> lon=2.6487090531266837], [lat=0.7965414531411178, lon=-1.508912143762057], 
> [lat=0.022713796927720124, lon=-0.5815768716211268], 
> [lat=-0.5523784795034061, lon=-0.05097322618399754]], internalEdges={0, 1}, 
> holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
> points=[[lat=1.0518471862927206, lon=-2.5948050766629582], 
> [lat=-0.7033145536783815, lon=0.4464269851814595], [lat=-0.32165871220450476, 
> lon=-0.2306821921389016]], internalEdges={0}, holes=[]}]}   
> point=[X=0.026933631938481088, Y=-0.1374352774704889, Z=0.9879509190301249]   
> docID=164 deleted?=false   query=PointInGeo3DShapeQuery: field=point: Shape: 
> GeoCompositeMembershipShape: {[GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.022713796927720124, 
> lon=-0.5815768716211268], [lat=-0.7946950025059678, lon=0.4282667468610472], 
> [lat=0.1408596217595416, lon=-0.6098466679977738], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={3}, holes=[]}, GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.7965414531411178, 
> lon=-1.508912143762057], [lat=-0.7033145536783815, lon=0.4464269851814595], 
> [lat=1.0518471862927206, lon=-2.5948050766629582]], internalEdges={1, 2}, 
> holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
> points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
> [lat=1.0518471862927206, lon=-2.5948050766629582], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={1, 2}, holes=[]}, GeoConcavePolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
> lon=2.6487090531266837], [lat=0.7965414531411178, lon=-1.508912143762057], 
> [lat=0.022713796927720124, lon=-0.5815768716211268], 
> [lat=-0.5523784795034061, 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 507 - Still Failing!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/507/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
  at __randomizedtesting.SeedInfo.seed([6290255EC5916DDA:996B177D750ACA86]:0)
  at org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
  at org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4257 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.00s J1 | TestBackwardsCompatibility.testCreateCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J1 | TestBackwardsCompatibility.testCreateSingleSegmentNoCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J1 | TestBackwardsCompatibility.testCreateMoreTermsIndex
   

[jira] [Commented] (LUCENE-7197) Geo3DPoint test failure

2016-04-08 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233358#comment-15233358
 ] 

Karl Wright commented on LUCENE-7197:
-

I augmented [~mikemccand]'s explain output to include relationship information, 
which then leads to the following line in the forensics dump from this failure:

{code}
   [junit4]>   Cell(x=-1.0011088352687925 TO 1.0008262509460042 y=-1.000763585323458 TO 0.007099080851812687 z=0.9826918170382273 TO 0.9944024015823575); Shape relationship = DISJOINT; Point within cell = true; Point within shape = true
{code}

From this it looks plausible that the cell is off the top of the world in z, and thus does not intersect with it for that reason. Confirming now.

> Geo3DPoint test failure
> ---
>
> Key: LUCENE-7197
> URL: https://issues.apache.org/jira/browse/LUCENE-7197
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Here's the test failure:
> {code}
> 1 tests failed.
> FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium
> Error Message:
> FAIL: id=174 should have matched but did not   
> shape=GeoCompositeMembershipShape: {[GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.022713796927720124, 
> lon=-0.5815768716211268], [lat=-0.7946950025059678, lon=0.4282667468610472], 
> [lat=0.1408596217595416, lon=-0.6098466679977738], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={3}, holes=[]}, GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.7965414531411178, 
> lon=-1.508912143762057], [lat=-0.7033145536783815, lon=0.4464269851814595], 
> [lat=1.0518471862927206, lon=-2.5948050766629582]], internalEdges={1, 2}, 
> holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
> points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
> [lat=1.0518471862927206, lon=-2.5948050766629582], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={1, 2}, holes=[]}, GeoConcavePolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
> lon=2.6487090531266837], [lat=0.7965414531411178, lon=-1.508912143762057], 
> [lat=0.022713796927720124, lon=-0.5815768716211268], 
> [lat=-0.5523784795034061, lon=-0.05097322618399754]], internalEdges={0, 1}, 
> holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
> points=[[lat=1.0518471862927206, lon=-2.5948050766629582], 
> [lat=-0.7033145536783815, lon=0.4464269851814595], [lat=-0.32165871220450476, 
> lon=-0.2306821921389016]], internalEdges={0}, holes=[]}]}   
> point=[X=0.026933631938481088, Y=-0.1374352774704889, Z=0.9879509190301249]   
> docID=164 deleted?=false   query=PointInGeo3DShapeQuery: field=point: Shape: 
> GeoCompositeMembershipShape: {[GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.022713796927720124, 
> lon=-0.5815768716211268], [lat=-0.7946950025059678, lon=0.4282667468610472], 
> [lat=0.1408596217595416, lon=-0.6098466679977738], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={3}, holes=[]}, GeoConvexPolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.7965414531411178, 
> lon=-1.508912143762057], [lat=-0.7033145536783815, lon=0.4464269851814595], 
> [lat=1.0518471862927206, lon=-2.5948050766629582]], internalEdges={1, 2}, 
> holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
> points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
> [lat=1.0518471862927206, lon=-2.5948050766629582], [lat=0.7965414531411178, 
> lon=-1.508912143762057]], internalEdges={1, 2}, holes=[]}, GeoConcavePolygon: 
> {planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
> lon=2.6487090531266837], [lat=0.7965414531411178, lon=-1.508912143762057], 
> [lat=0.022713796927720124, lon=-0.5815768716211268], 
> [lat=-0.5523784795034061, lon=-0.05097322618399754]], internalEdges={0, 1}, 
> holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
> points=[[lat=1.0518471862927206, lon=-2.5948050766629582], 
> [lat=-0.7033145536783815, lon=0.4464269851814595], [lat=-0.32165871220450476, 
> lon=-0.2306821921389016]], internalEdges={0}, holes=[]}]}   explanation: 
> target is in leaf _w(6.1.0):C23769 of full reader 
> StandardDirectoryReader(segments:70:nrt _w(6.1.0):C23769) full BKD path 
> to target doc:   Cell(x=-1.0011088352687925 TO -0.6552151159613697 
> y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.997762292058209 TO 
> -0.345618552495)   Cell(x=-1.0011088352687925 TO -0.6552151159613697 
> y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.34561855287730175 TO 
> -0.010034327274334781)   Cell(x=-0.6552151164275519 TO 
> -0.49692303680890554 y=-1.000763585323458 TO 2.587423617105566E-4 
> z=-0.997762292058209 TO 

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+113) - Build # 376 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/376/
Java: 64bit/jdk-9-ea+113 -XX:+UseCompressedOops -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString

Error Message:
expected:<...at=0.772208221547936[6], lon=0.135607475210...> but was:<...at=0.772208221547936[7], lon=0.135607475210...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...at=0.772208221547936[6], lon=0.135607475210...> but was:<...at=0.772208221547936[7], lon=0.135607475210...>
  at __randomizedtesting.SeedInfo.seed([9167847D36BA3FF9:B8F15B8B1448A775]:0)
  at org.junit.Assert.assertEquals(Assert.java:125)
  at org.junit.Assert.assertEquals(Assert.java:147)
  at org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString(TestGeo3DPoint.java:812)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 8991 lines...]
   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4] IGNOR/A 0.01s J0 | TestGeo3DPoint.testRandomBig
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint -Dtests.method=testShapeQueryToString 
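One plausible mechanism for a last-digit mismatch like [6] vs [7] above (an assumption; the thread does not pin down the cause) is a one-ulp difference in a computed coordinate on this JDK, which Java's shortest round-trip Double.toString then exposes only at the tail of the decimal string. A tiny illustration:

{code}
// Two adjacent doubles (one ulp apart) print as strings that differ
// only in their final digits, matching the shape of the failure above.
double expected = 0.7722082215479366;
double oneUlpUp = Math.nextUp(expected);

System.out.println(Double.toString(expected));  // ...547936 tail
System.out.println(Double.toString(oneUlpUp));  // differs only at the tail
System.out.println(oneUlpUp - expected);        // on the order of 1e-16
{code}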

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 32 - Still Failing

2016-04-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/32/

5 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:41346/o_d/nz

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:41346/o_d/nz
  at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:601)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
  at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
  at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:381)
  at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deletePartiallyCreatedCollection(CollectionsAPIDistributedZkTest.java:243)
  at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:171)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at
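For reference, this is how a SolrJ caller sees an HTTP read timeout: HttpSolrClient wraps the underlying socket timeout in a SolrServerException carrying the "Timeout occured while waiting response from server" message seen above. A hedged sketch (builder-style construction as in later SolrJ 6.x releases, an assumption for this illustration; the URL is the test's throwaway core):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public final class TimeoutSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://127.0.0.1:41346/o_d/nz").build()) {
      client.query(new SolrQuery("*:*"));
    } catch (SolrServerException e) {
      // A socket read timeout surfaces here, with the message seen above.
      System.err.println(e.getMessage());
    }
  }
}
{code}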

[JENKINS] Lucene-Solr-6.0-Linux (64bit/jdk1.8.0_72) - Build # 51 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.0-Linux/51/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{    "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B val", "":{"v":0}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 'response/params/y/c' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "response":{
    "znodeVersion":0,
    "params":{"x":{
        "a":"A val",
        "b":"B val",
        "":{"v":0}
  at __randomizedtesting.SeedInfo.seed([81D1E07BC4B1DDB9:985DFA16A4DB041]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.assertTrue(Assert.java:43)
  at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
  at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:165)
  at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at
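The assertion behind this failure walks a slash-separated path into the parsed JSON response; it fails because the output above has a params/x block but no params/y. A self-contained sketch of that style of path lookup (a hypothetical helper, not Solr's actual test utility):

{code}
import java.util.Map;

public final class PathLookupSketch {

  // Walk a slash-separated path through nested maps; null when any hop is missing.
  @SuppressWarnings("unchecked")
  static Object getByPath(Map<String, Object> root, String path) {
    Object node = root;
    for (String key : path.split("/")) {
      if (!(node instanceof Map)) {
        return null;
      }
      node = ((Map<String, Object>) node).get(key);
    }
    return node;
  }

  public static void main(String[] args) {
    // Mirrors the truncated output above: "params" holds an "x" block only,
    // so the expected 'CY val' at response/params/y/c cannot be found.
    Map<String, Object> parsed = Map.of(
        "response", Map.of(
            "znodeVersion", 0,
            "params", Map.of(
                "x", Map.of("a", "A val", "b", "B val"))));
    System.out.println(getByPath(parsed, "response/params/y/c")); // null
  }
}
{code}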

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16463 - Still Failing!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16463/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
  at __randomizedtesting.SeedInfo.seed([C5FD4B49E6C5700D:3E06796A565ED751]:0)
  at org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
  at org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4266 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.01s J0 | TestBackwardsCompatibility.testCreateMoreTermsIndex
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestBackwardsCompatibility -Dtests.method=testUnsupportedOldIndexes -Dtests.seed=C5FD4B49E6C5700D -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=es-MX -Dtests.timezone=America/Indiana/Petersburg -Dtests.asserts=true 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3194 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3194/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
  at __randomizedtesting.SeedInfo.seed([7FA7B2AA6171EC5C:845C8089D1EA4B00]:0)
  at org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
  at org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4250 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.02s J0 | TestBackwardsCompatibility.testCreateNoCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J0 | TestBackwardsCompatibility.testCreateSingleSegmentCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J0 | TestBackwardsCompatibility.testCreateMoreTermsIndex
   [junit4]> Assumption #1: backcompat creation tests must be run with -Dtests.bwcdir=/path/to/write/indexes

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}

 gatherNodes(friends,
             gatherNodes(friends,
                         search(articles, q="body:(queryA)", fl="author"),
                         walk="author->user",
                         gather="friend"),
             walk="friend->user",
             gather="friend",
             scatter="branches, leaves")
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the author's friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.
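For context, an expression like this is executed by sending it to a collection's /stream handler. A sketch using plain JDK HTTP (host, port, and collection names are the example's own; the expression must be URL-encoded):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public final class StreamExprSketch {
  public static void main(String[] args) throws Exception {
    String expr =
        "gatherNodes(friends,"
      + "gatherNodes(friends,"
      + "search(articles, q=\"body:(queryA)\", fl=\"author\"),"
      + "walk=\"author->user\","
      + "gather=\"friend\"),"
      + "walk=\"friend->user\","
      + "gather=\"friend\","
      + "scatter=\"branches, leaves\")";
    URL url = new URL("http://localhost:8983/solr/friends/stream?expr="
        + URLEncoder.encode(expr, StandardCharsets.UTF_8.name()));
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      // The handler streams back one JSON tuple per emitted node
      // (roots, branches, and/or leaves, per the scatter parameter).
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}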

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at authors found with 
different queries:

{code}
hashInnerJoin(
    gatherNodes(friends,
                gatherNodes(friends,
                            search(articles, q="body:(queryA)", fl="author"),
                            walk="author->user",
                            gather="friend"),
                walk="friend->user",
                gather="friend",
                scatter="branches, leaves"),
    gatherNodes(friends,
                gatherNodes(friends,
                            search(articles, q="body:(queryB)", fl="author"),
                            walk="author->user",
                            gather="friend"),
                walk="friend->user",
                gather="friend",
                scatter="branches, leaves"),
    on="friend"
)
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}

 gatherNodes(friends,
             gatherNodes(friends,
                         search(articles, q="body:(queryA)", fl="author"),
                         walk="author->user",
                         gather="friend"),
             walk="friend -> user",
             gather="friend",
             scatter="branches, leaves")
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the author's friends and 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}

 gatherNodes(friends,
             gatherNodes(friends,
                         search(articles, q="body:(queryA)", fl="author"),
                         walk="author->user",
                         gather="friend"),
             walk="friend -> user",
             gather="friend",
             scatter="branches, leaves")
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the author's friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at authors found with 
different queries:

{code}
hashInnerJoin(
    gatherNodes(friends,
                gatherNodes(friends,
                            search(articles, q="body:(queryA)", fl="author"),
                            walk="author->user",
                            gather="friend"),
                walk="friend -> user",
                gather="friend",
                scatter="branches, leaves"),
    gatherNodes(friends,
                gatherNodes(friends,
                            search(articles, q="body:(queryB)", fl="author"),
                            walk="author->user",
                            gather="friend"),
                walk="friend -> user",
                gather="friend",
                scatter="branches, leaves"),
    on="friend"
)
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}

 gatherNodes(friends,
             gatherNodes(friends,
                         search(articles, q="body:(queryA)", fl="author"),
                         walk="author->user",
                         gather="friend"),
             walk="friend -> user",
             gather="friend",
             scatter="branches, leaves")
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}

 gatherNodes(friends,
             gatherNodes(friends,
                         search(articles, q="body:(queryA)", fl="author"),
                         walk="author->user",
                         gather="friend"),
             walk="friend -> user",
             gather="friend",
             scatter="branches, leaves")
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the author's friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at authors found with 
different queries:

{code}
hashInnerJoin(
    gatherNodes(friends,
                gatherNodes(friends,
                            search(articles, q="body:(queryA)", fl="author"),
                            walk="author->user",
                            gather="friend"),
                walk="friend -> user",
                gather="friend",
                scatter="branches, leaves"),
    gatherNodes(friends,
                gatherNodes(friends,
                            search(articles, q="body:(queryB)", fl="author"),
                            walk="author->user",
                            gather="friend"),
                walk="friend -> user",
                gather="friend",
                scatter="branches, leaves"),
    on="friend"
)
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}

 gatherNodes(friends,
             gatherNodes(friends,
                         search(articles, q="body:(queryA)", fl="author"),
                         walk="author->user",
                         gather="friend"),
             walk="friend -> user",
             gather="friend",
             scatter="branches, leaves")
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the author's 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}

 gatherNodes(friends,
             gatherNodes(friends,
                         search(articles, q="body:(queryA)", fl="author"),
                         walk="author->user",
                         gather="friend"),
             walk="friend -> user",
             gather="friend",
             scatter="branches, leaves")
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the author's friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions the gather nodes expression can be combined with 
other streaming expressions. For example the following expression uses a 
hashJoin to intersect the network of friends rooted to authors found with 
different queries:

{code}
hashInnerJoin(
  gatherNodes(friends,
  gatherNodes(friends
  search(articles, 
q=“body:(queryA)”, fl=“author”),
  walk ="author->user”,
  gather="friend"),
  walk=“friend -> user”,
  gather="friend",
  scatter=“branches, leaves”),
   gatherNodes(friends,
  gatherNodes(friends
  search(articles, 
q=“body:(queryB)”, fl=“author”),
  walk ="author->user”,
  gather="friend"),
  walk=“friend -> user”,
  gather="friend",
  scatter=“branches, leaves”),
  on=“friend”
 )
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible, general-purpose 
breadth-first graph traversal. It uses the same parallel join under the 
covers as (SOLR-) but is much more generalized and can be used for a wide 
range of use cases.

Sample syntax:

{code}
gatherNodes(friends,
  gatherNodes(friends,
    search(articles, q="body:(queryA)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="branches, leaves")
{code}

The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() 
stream and traverses to the *friends* collection by performing a distributed 
join between the articles.author and friends.user fields. It gathers the 
value from the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case 
are the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join between the 
*friend* Tuples emitted in step 3. This collects the friend of friends.
5) The outer gatherNodes() 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible, general-purpose 
breadth-first graph traversal. It uses the same parallel join under the 
covers as (SOLR-) but is much more generalized and can be used for a wide 
range of use cases.

Sample syntax:

{code}
gatherNodes(friends,
  gatherNodes(friends,
    search(articles, q="body:(queryA)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="branches, leaves")
{code}

The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() 
stream and traverses to the *friends* collection by performing a distributed 
join between the articles.author and friends.user fields. It gathers the 
value from the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case 
are the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join between the 
*friend* Tuples emitted in step 3. This collects the friend of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the author's friends, and 
the *leaves* are the friend of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined 
with other streaming expressions. For example, the following expression uses 
a hashJoin to intersect the networks of friends rooted at authors found with 
different queries:

{code}
hashInnerJoin(
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryA)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryB)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  on="friend")
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible, general-purpose 
breadth-first graph traversal. It uses the same parallel join under the 
covers as (SOLR-) but is much more generalized and can be used for a wide 
range of use cases.

Sample syntax:

{code}
gatherNodes(friends,
  gatherNodes(friends,
    search(articles, q="body:(queryA)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="branches, leaves")
{code}

The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() 
stream and traverses to the *friends* collection by performing a distributed 
join between the articles.author and friends.user fields. It gathers the 
value from the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case 
are the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join between the 
*friend* Tuples emitted in step

[jira] [Commented] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233244#comment-15233244
 ] 

Joel Bernstein commented on SOLR-8925:
--

I think the direction of the traversal is interesting. In this example the 
direction is to walk, or traverse, from author to user.

It might make sense to think of a traversal not as a join, but as its own 
operation.
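
As a sketch of what that direction means, using the one-hop form already 
shown in the description (no new parameters assumed): walk="author->user" 
takes the author value from each incoming tuple and matches it against the 
user field of the collection being traversed:

{code}
gatherNodes(friends,
  search(articles, q="body:(queryA)", fl="author"),
  walk="author->user",
  gather="friend")
{code}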

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>
> The gatherNodes Streaming Expression is a flexible, general-purpose 
> breadth-first graph traversal. It uses the same parallel join under the 
> covers as (SOLR-) but is much more generalized and can be used for a wide 
> range of use cases.
> Sample syntax:
> {code}
> gatherNodes(friends,
>   gatherNodes(friends,
>     search(articles, q="body:(query 1)", fl="author"),
>     walk="author->user",
>     gather="friend"),
>   walk="friend->user",
>   gather="friend",
>   scatter="roots, branches, leaves")
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the 
> value from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves, which in this case 
> are the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and 
> traverses again in the "friends" collection, this time performing the join 
> between the *friend* Tuples emitted in step 3. This collects the friend of 
> friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the author's friends, and 
> the *leaves* are the friend of friends.
> This traversal is fully distributed and cross collection.
> Like all streaming expressions, the gatherNodes expression can be combined 
> with other streaming expressions. For example, the following expression uses 
> a hashJoin to intersect the networks of friends rooted at authors found with 
> different queries:
> {code}
> hashInnerJoin(
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryA)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryB)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   on="friend")
> {code}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Comment: was deleted

(was: bq. When using the scatter parameter will the nodes be marked as which 
group they fall into? What if a node falls into multiple groups (kinda 
related to #1 above)?

Nodes will be marked with the level of the traversal and the collection they 
came from.


bq. If gatherNodes is doing a 'join' between friends and articles I'd expect 
the tuple to be a join of the tuple found in articles and the tuple found in 
friends. But if "The inner gatherNodes() expression then emits the friend 
Tuples" I believe this is more of an intersect. Ie, give me tuples in friends 
which also appear in articles, using the author->user equalitor. Though I 
guess it would be returning tuples from both the left and right streams 
whereas a standard intersect only returns tuples from the left stream. That 
said, it's not joining those tuples together.

It's a join, but not similar to the other join expressions, which are done 
with a single search for the left and right streams. This is a parallel, 
batched nested-loop join, so I'm not sure it expresses quite like the other 
joins. You can see the implementation in the ShortestPathStream. Looking at 
the implementation might spark some ideas of how to express it. I'm open to 
ideas.


bq. What could one do if they wished to build a graph using a subset of data 
in the friends collection? Can they apply a filter on friends as part of the 
gatherNodes function? Perhaps they could be allowed to add fq filters.

The fq and fl params will be supported. This will support filtering and 
listing/aggregating edge properties.)

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>

[jira] [Comment Edited] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233224#comment-15233224
 ] 

Joel Bernstein edited comment on SOLR-8925 at 4/9/16 12:54 AM:
---

bq. What does this do with duplicate nodes? ie, overlapping friend networks. 
Will it prune those out, show the node twice, mark a node as having multiple 
sources?

Duplicate nodes are removed by cycle detection. All the ancestors are tracked 
and are returned with the Tuple in the *ancestors* field.


bq. When using the scatter parameter will the nodes be marked as which group 
they fall into? What if a node falls into multiple groups (kinda related to 
#1 above)?

Nodes will be marked with the level of the traversal and the collection they 
came from.


bq. If gatherNodes is doing a 'join' between friends and articles I'd expect 
the tuple to be a join of the tuple found in articles and the tuple found in 
friends. But if "The inner gatherNodes() expression then emits the friend 
Tuples" I believe this is more of an intersect. Ie, give me tuples in friends 
which also appear in articles, using the author->user equalitor. Though I 
guess it would be returning tuples from both the left and right streams 
whereas a standard intersect only returns tuples from the left stream. That 
said, it's not joining those tuples together.

It's a join, but not similar to the other join expressions, which are done 
with a single search for the left and right streams. This is a parallel, 
batched nested-loop join, so I'm not sure it expresses quite like the other 
joins. You can see the implementation in the ShortestPathStream. Looking at 
the implementation might spark some ideas of how to express it. I'm open to 
ideas.


bq. What could one do if they wished to build a graph using a subset of data 
in the friends collection? Can they apply a filter on friends as part of the 
gatherNodes function? Perhaps they could be allowed to add fq filters.

The fq and fl params will be supported. This will support filtering and 
listing/aggregating edge properties.



was (Author: joel.bernstein):
bq. What does this do with duplicate nodes? ie, overlapping friend networks. 
Will it prune those out, show the node twice, mark a node as having multiple 
sources?

Duplicate nodes are removed by cycle detection. All the ancestors are tracked 
and are returned with the Tuple in the *ancestors* field.





> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>

[jira] [Comment Edited] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233224#comment-15233224
 ] 

Joel Bernstein edited comment on SOLR-8925 at 4/9/16 12:55 AM:
---

bq. What does this do with duplicate nodes? ie, overlapping friend networks. 
Will it prune those out, show the node twice, mark a node as having multiple 
sources?

Duplicate nodes are removed by cycle detection. All the ancestors are tracked 
and are returned with the Tuple in the *ancestors* field.


bq. When using the scatter parameter will the nodes be marked as which group 
they fall into? What if a node falls into multiple groups (kinda related to 
#1 above)?

Nodes will be marked with the level of the traversal and the collection they 
came from.
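
For illustration, a node tuple carrying that metadata might look like the 
following; the exact field names (node, collection, level) are assumptions 
drawn from this discussion, with the *ancestors* field from the earlier 
answer:

{code}
{"node":"frank", "collection":"friends", "level":2, "ancestors":["amy"]}
{code}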


bq. If gatherNodes is doing a 'join' between friends and articles I'd expect 
the tuple to be a join of the tuple found in articles and the tuple found in 
friends. But if "The inner gatherNodes() expression then emits the friend 
Tuples" I believe this is more of an intersect. Ie, give me tuples in friends 
which also appear in articles, using the author->user equalitor. Though I 
guess it would be returning tuples from both the left and right streams 
whereas a standard intersect only returns tuples from the left stream. That 
said, it's not joining those tuples together.

It's a join, but not similar to the other join expressions, which are done 
with a single search for the left and right streams. This is a parallel, 
batched nested-loop join, so I'm not sure it expresses quite like the other 
joins. You can see the implementation in the ShortestPathStream. Looking at 
the implementation might spark some ideas of how to express it. I'm open to 
ideas.
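
As a rough sketch of the batched nested-loop idea (illustrative pseudocode 
only; the helper names are hypothetical and this is not the 
ShortestPathStream implementation):

{code}
// Read a batch of tuples from the incoming stream, then issue one query
// against the "to" collection per batch instead of one query per tuple.
List<Tuple> batch = readNextBatch(incoming, BATCH_SIZE);
while (!batch.isEmpty()) {
  // Build a disjunction over the walk keys, e.g. user:(alice bob carol)
  String filter = buildOrQuery("user", batch);
  emitAll(queryCollection("friends", filter)); // gather the matching nodes
  batch = readNextBatch(incoming, BATCH_SIZE);
}
{code}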


bq. What could one do if they wished to build a graph using a subset of data 
in the friends collection? Can they apply a filter on friends as part of the 
gatherNodes function? Perhaps they could be allowed to add fq filters.

The fq and fl params will be supported. This will support filtering and 
listing/aggregating edge properties.
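
To sketch what that might look like once supported (fq on gatherNodes is the 
planned addition being discussed here, so treat the exact syntax as an 
assumption):

{code}
gatherNodes(friends,
  search(articles, q="body:(queryA)", fl="author"),
  walk="author->user",
  gather="friend",
  fq="status:active")
{code}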



was (Author: joel.bernstein):
bq. What does this do with duplicate nodes? ie, overlapping friend networks. 
Will it prune those out, show the node twice, mark a node as having multiple 
sources?

Duplicate nodes are removed by cycle detection. All the ancestors are tracked 
and are returned with the Tuple in the *ancestors* field.


bq. When using the scatter parameter will the nodes be marked as which group 
they fall into? What if a node falls into multiple groups (kinda related to 
#1 above)?

Nodes will be marked with the level of the traversal and the collection they 
came from.


bq. If gatherNodes is doing a 'join' between friends and articles I'd expect 
the tuple to be a join of the tuple found in articles and the tuple found in 
friends. But if "The inner gatherNodes() expression then emits the friend 
Tuples" I believe this is more of an intersect. Ie, give me tuples in friends 
which also appear in articles, using the author->user equalitor. Though I 
guess it would be returning tuples from both the left and right streams 
whereas a standard intersect only returns tuples from the left stream. That 
said, it's not joining those tuples together.

It's a join, but not similar to the other join expressions, which are done 
with a single search for the left and right streams. This is a parallel, 
batched nested-loop join, so I'm not sure it expresses quite like the other 
joins. You can see the implementation in the ShortestPathStream. Looking at 
the implementation might spark some ideas of how to express it. I'm open to 
ideas.


bq. What could one do if they wished to build a graph using a subset of data 
in the friends collection? Can they apply a filter on friends as part of the 
gatherNodes function? Perhaps they could be allowed to add fq filters.

The fq and fl params will be supported. This will support filtering and 
listing/aggregating edge properties.


> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>

[jira] [Commented] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233240#comment-15233240
 ] 

Joel Bernstein commented on SOLR-8925:
--

bq. When using the scatter parameter will the nodes be marked as which group 
they fall into? What if a node falls into multiple groups (kinda related to 
#1 above)?

Nodes will be marked with the level of the traversal and the collection they 
came from.


bq. If gatherNodes is doing a 'join' between friends and articles I'd expect 
the tuple to be a join of the tuple found in articles and the tuple found in 
friends. But if "The inner gatherNodes() expression then emits the friend 
Tuples" I believe this is more of an intersect. Ie, give me tuples in friends 
which also appear in articles, using the author->user equalitor. Though I 
guess it would be returning tuples from both the left and right streams 
whereas a standard intersect only returns tuples from the left stream. That 
said, it's not joining those tuples together.

It's a join, but not similar to the other join expressions, which are done 
with a single search for the left and right streams. This is a parallel, 
batched nested-loop join, so I'm not sure it expresses quite like the other 
joins. You can see the implementation in the ShortestPathStream. Looking at 
the implementation might spark some ideas of how to express it. I'm open to 
ideas.


bq. What could one do if they wished to build a graph using a subset of data 
in the friends collection? Can they apply a filter on friends as part of the 
gatherNodes function? Perhaps they could be allowed to add fq filters.

The fq and fl params will be supported. This will support filtering and 
listing/aggregating edge properties.

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>

[jira] [Commented] (SOLR-8740) set docValues="true" for most non-text fieldTypes in the sample schemas

2016-04-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233228#comment-15233228
 ] 

Yonik Seeley commented on SOLR-8740:


I think it's a good idea... there is a lot of performance to be gained out of 
using docValues when returning a couple of docValue fields for many documents 
(like the export handler currently does).  Although we *can* (and should at 
some point) have docValue implementations that preserve order, we don't have 
them yet... and the note leaves the door open to implementing optimizations 
before the next major release.
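
For context, the kind of change being discussed amounts to sample-schema 
definitions along these lines (an illustrative sketch of a docValues-enabled 
fieldType, not the exact patch contents):

{code}
<fieldType name="long" class="solr.TrieLongField" docValues="true" 
precisionStep="0" positionIncrementGap="0"/>
<field name="popularity" type="long" indexed="true" stored="true"/>
{code}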

> set docValues="true" for most non-text fieldTypes in the sample schemas
> ---
>
> Key: SOLR-8740
> URL: https://issues.apache.org/jira/browse/SOLR-8740
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Yonik Seeley
> Fix For: master, 6.0
>
> Attachments: SOLR-8740.patch, SOLR-8740.patch
>
>
> We should consider switching to docValues for most of the non-text fields in 
> the sample schemas provided with solr.
> This may be a better default since it is more NRT friendly and acts to avoid 
> OOM errors due to large field cache or UnInvertedField entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_72) - Build # 16462 - Still Failing!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16462/
Java: 64bit/jdk1.8.0_72 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
at 
__randomizedtesting.SeedInfo.seed([781931A9A384013C:83E2038A131FA660]:0)
at 
org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
at 
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4257 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestBackwardsCompatibility -Dtests.method=testUnsupportedOldIndexes 
-Dtests.seed=781931A9A384013C -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=zh-SG -Dtests.timezone=Pacific/Galapagos -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   2.33s J0 | 
TestBackwardsCompatibility.testUnsupportedOldIndexes <<<
   [junit4]> Throwable #1: java.io.IOException: Cannot find resource: 

[jira] [Comment Edited] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233224#comment-15233224
 ] 

Joel Bernstein edited comment on SOLR-8925 at 4/9/16 12:42 AM:
---

bq. What does this do with duplicate nodes? ie, overlapping friend networks. 
Will it prune those out, show the node twice, mark a node as having multiple 
sources?

Duplicate nodes are removed by cycle detection. All the ancestors are tracked 
and are returned with the Tuple in the *ancestors* field.






was (Author: joel.bernstein):
bq. What does this do with duplicate nodes? ie, overlapping friend networks. 
Will it prune those out, show the node twice, mark a node as having multiple 
sources?

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233224#comment-15233224
 ] 

Joel Bernstein commented on SOLR-8925:
--

bq. What does this do with duplicate nodes? ie, overlapping friend networks. 
Will it prune those out, show the node twice, mark a node as having multiple 
sources?

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+113) - Build # 374 - Still Failing!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/374/
Java: 64bit/jdk-9-ea+113 -XX:-UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString

Error Message:
expected:<...at=0.772208221547936[6], lon=0.135607475210...> but 
was:<...at=0.772208221547936[7], lon=0.135607475210...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...at=0.772208221547936[6], 
lon=0.135607475210...> but was:<...at=0.772208221547936[7], 
lon=0.135607475210...>
at 
__randomizedtesting.SeedInfo.seed([8CE624077951DB23:A570FBF15BA343AF]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString(TestGeo3DPoint.java:812)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 8992 lines...]
   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4] IGNOR/A 0.01s J0 | TestGeo3DPoint.testRandomBig
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testShapeQueryToString 

Re: [CONF] Apache Solr Reference Guide > Upgrading a Solr Cluster

2016-04-08 Thread Chris Hostetter
:  
: Change comment: fix very minor error -- /ec/default instead of /etc/defaults. 
Also indicate that the solr.in.sh
: file will get a different name if the service name is changed. In a few 
places the text said "Solr 5". Removed the

Shawn: thanks for those edits -- but I do not think it's a good idea to 
include the extra note you added about solr.in.sh possibly having a different 
name.

There is already a large warning about this at the top of the page. It seems 
inconsistent to call this out in the one place you added it to, unless we 
also call out every other place on this page (and on the "Taking Solr to 
Production" page) where using an alternate service name affects file names 
or commands (ie: every mention of solr.in.sh, every mention of /var/solr, 
every mention of "sudo service solr ...").

The goal here should be to keep the upgrade steps simple, focus on the 
common case, and have minimal distractions for the uncommon cases.



-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_72) - Build # 5763 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5763/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
at 
__randomizedtesting.SeedInfo.seed([6D95F078C6852458:966EC25B761E8304]:0)
at 
org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
at 
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4307 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.00s J1 | 
TestBackwardsCompatibility.testCreateMoreTermsIndex
   [junit4]> Assumption #1: backcompat creation tests must be run with 
-Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J1 | 
TestBackwardsCompatibility.testCreateSingleSegmentNoCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with 
-Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J1 | 
TestBackwardsCompatibility.testCreateSingleSegmentCFS
   

[JENKINS] Lucene-Solr-SmokeRelease-6.0 - Build # 9 - Still Failing

2016-04-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.0/9/

No tests ran.

Build Log:
[...truncated 40030 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (20.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 28.5 MB in 0.03 sec (1099.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 62.9 MB in 0.06 sec (1117.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 73.6 MB in 0.07 sec (1110.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6045 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6045 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] Retrying download of url 
https://archive.apache.org/dist/lucene/java after exception: HTTP Error 503: 
Service Unavailable
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/dev-tools/scripts/smokeTestRelease.py",
 line 155, in load
   [smoker] content = 
urllib.request.urlopen(urlString).read().decode('utf-8')
   [smoker]   File "/usr/lib/python3.4/urllib/request.py", line 153, in urlopen
   [smoker] return opener.open(url, data, timeout)
   [smoker]   File "/usr/lib/python3.4/urllib/request.py", line 461, in open
   [smoker] response = meth(req, response)
   [smoker]   File "/usr/lib/python3.4/urllib/request.py", line 571, in 
http_response
   [smoker] 'http', request, response, code, msg, hdrs)
   [smoker]   File "/usr/lib/python3.4/urllib/request.py", line 499, in error
   [smoker] return self._call_chain(*args)
   [smoker]   File "/usr/lib/python3.4/urllib/request.py", line 433, in 
_call_chain
   [smoker] result = func(*args)
   [smoker]   File "/usr/lib/python3.4/urllib/request.py", line 579, in 
http_error_default
   [smoker] raise HTTPError(req.full_url, code, msg, hdrs, fp)
   [smoker] urllib.error.HTTPError: HTTP Error 503: Service Unavailable
   [smoker] 
   [smoker] During handling of the above exception, another exception occurred:
   [smoker] 
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/dev-tools/scripts/smokeTestRelease.py",
 line 1412, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/dev-tools/scripts/smokeTestRelease.py",
 line 1356, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/dev-tools/scripts/smokeTestRelease.py",
 line 590, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, 

[jira] [Commented] (SOLR-8740) set docValues="true" for most non-text fieldTypes in the sample schemas

2016-04-08 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233140#comment-15233140
 ] 

Ishan Chattopadhyaya commented on SOLR-8740:


[~yo...@apache.org], in the Solr 6.0 release notes, the following was mentioned:
{code}
Users should set useDocValuesAsStored="false" to preserve sort order on 
multi-valued
fields that have both stored="true" and docValues="true". 
{code}

Given that useDocValuesAsStored doesn't *currently* affect fields with 
stored=true, do you think this note was appropriately worded? I agree that in 
the future we might have useDocValuesAsStored affect how we retrieve 
stored=true fields, but whether that would affect the ordering is still not 
known (since we could, in theory, use another docValues format that preserves 
the original order). Even if future optimizations are potentially breaking for 
a user, the note above implies something that needs to be done to avoid a 
current issue, not a future one.
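
For context on why the ordering differs at all: SORTED_SET docValues keep each 
document's values in sorted, deduplicated order, whereas stored fields preserve 
insertion order. A minimal sketch of that behavior against the Lucene 6.x 
docValues API (class and field names here are just for illustration):
{code}
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.index.*;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.BytesRef;

public class MultiValuedOrderDemo {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new WhitespaceAnalyzer()));
    Document doc = new Document();
    // Insertion order: "zebra" first, then "apple".
    doc.add(new SortedSetDocValuesField("tags", new BytesRef("zebra")));
    doc.add(new SortedSetDocValuesField("tags", new BytesRef("apple")));
    w.addDocument(doc);
    w.close();

    DirectoryReader reader = DirectoryReader.open(dir);
    SortedSetDocValues dv = MultiDocValues.getSortedSetValues(reader, "tags");
    dv.setDocument(0);  // position the per-document ord iterator (6.x API)
    long ord;
    while ((ord = dv.nextOrd()) != SortedSetDocValues.NO_MORE_ORDS) {
      // Prints "apple" then "zebra": sorted order, not insertion order.
      System.out.println(dv.lookupOrd(ord).utf8ToString());
    }
    reader.close();
  }
}
{code}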

> set docValues="true" for most non-text fieldTypes in the sample schemas
> ---
>
> Key: SOLR-8740
> URL: https://issues.apache.org/jira/browse/SOLR-8740
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Yonik Seeley
> Fix For: master, 6.0
>
> Attachments: SOLR-8740.patch, SOLR-8740.patch
>
>
> We should consider switching to docValues for most of the non-text fields in 
> the sample schemas provided with Solr.
> This may be a better default since it is more NRT-friendly and helps avoid 
> OOM errors due to large FieldCache or UnInvertedField entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7197) Geo3DPoint test failure

2016-04-08 Thread Karl Wright (JIRA)
Karl Wright created LUCENE-7197:
---

 Summary: Geo3DPoint test failure
 Key: LUCENE-7197
 URL: https://issues.apache.org/jira/browse/LUCENE-7197
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial3d
Affects Versions: master
Reporter: Karl Wright
Assignee: Karl Wright


Here's the test failure:

{code}
1 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium

Error Message:
FAIL: id=174 should have matched but did not   
shape=GeoCompositeMembershipShape: {[GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.022713796927720124, 
lon=-0.5815768716211268], [lat=-0.7946950025059678, lon=0.4282667468610472], 
[lat=0.1408596217595416, lon=-0.6098466679977738], [lat=0.7965414531411178, 
lon=-1.508912143762057]], internalEdges={3}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.7965414531411178, 
lon=-1.508912143762057], [lat=-0.7033145536783815, lon=0.4464269851814595], 
[lat=1.0518471862927206, lon=-2.5948050766629582]], internalEdges={1, 2}, 
holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
[lat=1.0518471862927206, lon=-2.5948050766629582], [lat=0.7965414531411178, 
lon=-1.508912143762057]], internalEdges={1, 2}, holes=[]}, GeoConcavePolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
lon=2.6487090531266837], [lat=0.7965414531411178, lon=-1.508912143762057], 
[lat=0.022713796927720124, lon=-0.5815768716211268], [lat=-0.5523784795034061, 
lon=-0.05097322618399754]], internalEdges={0, 1}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=1.0518471862927206, 
lon=-2.5948050766629582], [lat=-0.7033145536783815, lon=0.4464269851814595], 
[lat=-0.32165871220450476, lon=-0.2306821921389016]], internalEdges={0}, 
holes=[]}]}   point=[X=0.026933631938481088, Y=-0.1374352774704889, 
Z=0.9879509190301249]   docID=164 deleted?=false   
query=PointInGeo3DShapeQuery: field=point: Shape: GeoCompositeMembershipShape: 
{[GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.022713796927720124, lon=-0.5815768716211268], 
[lat=-0.7946950025059678, lon=0.4282667468610472], [lat=0.1408596217595416, 
lon=-0.6098466679977738], [lat=0.7965414531411178, lon=-1.508912143762057]], 
internalEdges={3}, holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.7965414531411178, lon=-1.508912143762057], 
[lat=-0.7033145536783815, lon=0.4464269851814595], [lat=1.0518471862927206, 
lon=-2.5948050766629582]], internalEdges={1, 2}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
lon=2.6487090531266837], [lat=1.0518471862927206, lon=-2.5948050766629582], 
[lat=0.7965414531411178, lon=-1.508912143762057]], internalEdges={1, 2}, 
holes=[]}, GeoConcavePolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
[lat=0.7965414531411178, lon=-1.508912143762057], [lat=0.022713796927720124, 
lon=-0.5815768716211268], [lat=-0.5523784795034061, lon=-0.05097322618399754]], 
internalEdges={0, 1}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=1.0518471862927206, 
lon=-2.5948050766629582], [lat=-0.7033145536783815, lon=0.4464269851814595], 
[lat=-0.32165871220450476, lon=-0.2306821921389016]], internalEdges={0}, 
holes=[]}]}   explanation: target is in leaf _w(6.1.0):C23769 of full 
reader StandardDirectoryReader(segments:70:nrt _w(6.1.0):C23769) full BKD 
path to target doc:   Cell(x=-1.0011088352687925 TO -0.6552151159613697 
y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.997762292058209 TO 
-0.345618552495)   Cell(x=-1.0011088352687925 TO -0.6552151159613697 
y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.34561855287730175 TO 
-0.010034327274334781)   Cell(x=-0.6552151164275519 TO -0.49692303680890554 
y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.997762292058209 TO 
-0.010034327274334781)   Cell(x=-0.49692303727508774 TO 
-0.36821096452714464 y=-1.000763585323458 TO 2.587423617105566E-4 
z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.36821096499332684 TO -0.25063435608431966 y=-1.000763585323458 TO 
2.587423617105566E-4 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.25063435655050187 TO -0.14842841329097298 y=-1.000763585323458 TO 
2.587423617105566E-4 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.14842841375715526 TO -0.0052465148368604775 y=-1.000763585323458 TO 
-0.2511880118919554 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.14842841375715526 TO -0.0052465148368604775 y=-0.2511880123581376 TO 
2.587423617105566E-4 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.005246515303042771 TO 0.34833399434806217 y=-1.000763585323458 TO 

[jira] [Commented] (SOLR-8955) ReplicationHandler should throttle across all requests instead of for each client

2016-04-08 Thread Greg Pendlebury (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233113#comment-15233113
 ] 

Greg Pendlebury commented on SOLR-8955:
---

I like the idea, but maybe it should be configurable? If the master has 
multiple NICs, then hard-coding an arbitrary limit just because two unrelated 
slaves on different network interfaces happen to be online at the same time 
would actually be more of a hindrance than an improvement.
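
For reference, the proposal amounts to sharing a single limiter across all 
replication request threads instead of creating one per request. A rough 
sketch, assuming Lucene's RateLimiter.SimpleRateLimiter as the underlying 
limiter (the wrapper class and method below are hypothetical):
{code}
import java.io.IOException;
import org.apache.lucene.store.RateLimiter;

// Hypothetical sketch: one limiter shared by every replication request, so
// the aggregate transfer rate, not each request's rate, is capped.
public class GlobalReplicationThrottle {
  // A single shared instance instead of one per request.
  private static final RateLimiter GLOBAL =
      new RateLimiter.SimpleRateLimiter(1.0); // maxWriteMBPerSec = 1

  // Each request thread calls this after writing a chunk of bytes; pause()
  // blocks long enough that the combined throughput stays under the limit.
  public static void onBytesWritten(long bytes) throws IOException {
    GLOBAL.pause(bytes);
  }
}
{code}

A per-NIC or per-slave limit, as suggested above, would just mean keeping a map 
of such limiters keyed by interface or client instead of one static instance.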

> ReplicationHandler should throttle across all requests instead of for each 
> client
> -
>
> Key: SOLR-8955
> URL: https://issues.apache.org/jira/browse/SOLR-8955
> Project: Solr
>  Issue Type: Improvement
>  Components: replication (java), SolrCloud
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-medium, newdev
> Fix For: master, 6.1
>
>
> SOLR-6485 added the ability to throttle the speed of replication, but the 
> implementation rate-limits each request separately. So if, e.g., 
> maxWriteMBPerSec is 1 and 5 slaves request full replication, the effective 
> transfer rate from the master is 5 MB/second, which is often not what is 
> desired.
> I propose to make the rate limit global (across all replication requests) 
> instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+113) - Build # 16461 - Still Failing!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16461/
Java: 64bit/jdk-9-ea+113 -XX:+UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
at 
__randomizedtesting.SeedInfo.seed([E2BDE835E28094A:F5D0ECA0EEB3AE16]:0)
at 
org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
at 
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 4260 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.01s J2 | TestBackwardsCompatibility.testCreateNoCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with 
-Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J2 | 
TestBackwardsCompatibility.testCreateSingleSegmentCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with 

[JENKINS] Lucene-Solr-Tests-6.x - Build # 122 - Still Failing

2016-04-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/122/

1 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium

Error Message:
FAIL: id=174 should have matched but did not   
shape=GeoCompositeMembershipShape: {[GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.022713796927720124, 
lon=-0.5815768716211268], [lat=-0.7946950025059678, lon=0.4282667468610472], 
[lat=0.1408596217595416, lon=-0.6098466679977738], [lat=0.7965414531411178, 
lon=-1.508912143762057]], internalEdges={3}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.7965414531411178, 
lon=-1.508912143762057], [lat=-0.7033145536783815, lon=0.4464269851814595], 
[lat=1.0518471862927206, lon=-2.5948050766629582]], internalEdges={1, 2}, 
holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
[lat=1.0518471862927206, lon=-2.5948050766629582], [lat=0.7965414531411178, 
lon=-1.508912143762057]], internalEdges={1, 2}, holes=[]}, GeoConcavePolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
lon=2.6487090531266837], [lat=0.7965414531411178, lon=-1.508912143762057], 
[lat=0.022713796927720124, lon=-0.5815768716211268], [lat=-0.5523784795034061, 
lon=-0.05097322618399754]], internalEdges={0, 1}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=1.0518471862927206, 
lon=-2.5948050766629582], [lat=-0.7033145536783815, lon=0.4464269851814595], 
[lat=-0.32165871220450476, lon=-0.2306821921389016]], internalEdges={0}, 
holes=[]}]}   point=[X=0.026933631938481088, Y=-0.1374352774704889, 
Z=0.9879509190301249]   docID=164 deleted?=false   
query=PointInGeo3DShapeQuery: field=point: Shape: GeoCompositeMembershipShape: 
{[GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.022713796927720124, lon=-0.5815768716211268], 
[lat=-0.7946950025059678, lon=0.4282667468610472], [lat=0.1408596217595416, 
lon=-0.6098466679977738], [lat=0.7965414531411178, lon=-1.508912143762057]], 
internalEdges={3}, holes=[]}, GeoConvexPolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.7965414531411178, lon=-1.508912143762057], 
[lat=-0.7033145536783815, lon=0.4464269851814595], [lat=1.0518471862927206, 
lon=-2.5948050766629582]], internalEdges={1, 2}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=0.5472103826277904, 
lon=2.6487090531266837], [lat=1.0518471862927206, lon=-2.5948050766629582], 
[lat=0.7965414531411178, lon=-1.508912143762057]], internalEdges={1, 2}, 
holes=[]}, GeoConcavePolygon: {planetmodel=PlanetModel.WGS84, 
points=[[lat=0.5472103826277904, lon=2.6487090531266837], 
[lat=0.7965414531411178, lon=-1.508912143762057], [lat=0.022713796927720124, 
lon=-0.5815768716211268], [lat=-0.5523784795034061, lon=-0.05097322618399754]], 
internalEdges={0, 1}, holes=[]}, GeoConvexPolygon: 
{planetmodel=PlanetModel.WGS84, points=[[lat=1.0518471862927206, 
lon=-2.5948050766629582], [lat=-0.7033145536783815, lon=0.4464269851814595], 
[lat=-0.32165871220450476, lon=-0.2306821921389016]], internalEdges={0}, 
holes=[]}]}   explanation: target is in leaf _w(6.1.0):C23769 of full 
reader StandardDirectoryReader(segments:70:nrt _w(6.1.0):C23769) full BKD 
path to target doc:   Cell(x=-1.0011088352687925 TO -0.6552151159613697 
y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.997762292058209 TO 
-0.345618552495)   Cell(x=-1.0011088352687925 TO -0.6552151159613697 
y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.34561855287730175 TO 
-0.010034327274334781)   Cell(x=-0.6552151164275519 TO -0.49692303680890554 
y=-1.000763585323458 TO 2.587423617105566E-4 z=-0.997762292058209 TO 
-0.010034327274334781)   Cell(x=-0.49692303727508774 TO 
-0.36821096452714464 y=-1.000763585323458 TO 2.587423617105566E-4 
z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.36821096499332684 TO -0.25063435608431966 y=-1.000763585323458 TO 
2.587423617105566E-4 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.25063435655050187 TO -0.14842841329097298 y=-1.000763585323458 TO 
2.587423617105566E-4 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.14842841375715526 TO -0.0052465148368604775 y=-1.000763585323458 TO 
-0.2511880118919554 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.14842841375715526 TO -0.0052465148368604775 y=-0.2511880123581376 TO 
2.587423617105566E-4 z=-0.997762292058209 TO -0.010034327274334781)   
Cell(x=-0.005246515303042771 TO 0.34833399434806217 y=-1.000763585323458 TO 
2.587423617105566E-4 z=-0.997762292058209 TO -0.9797970594803297)   
Cell(x=-0.005246515303042771 TO 0.34833399434806217 y=-1.000763585323458 TO 
2.587423617105566E-4 z=-0.9797970599465119 TO -0.9292097400113875)   
Cell(x=-0.005246515303042771 TO 0.34833399434806217 y=-1.000763585323458 TO 
2.587423617105566E-4 z=-0.9292097404775697 TO -0.704145502815162)   
Cell(x=-0.005246515303042771 TO 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+113) - Build # 373 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/373/
Java: 32bit/jdk-9-ea+113 -client -XX:+UseConcMarkSweepGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString

Error Message:
expected:<...at=0.772208221547936[6], lon=0.135607475210...> but 
was:<...at=0.772208221547936[7], lon=0.135607475210...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...at=0.772208221547936[6], 
lon=0.135607475210...> but was:<...at=0.772208221547936[7], 
lon=0.135607475210...>
at 
__randomizedtesting.SeedInfo.seed([65132136FA84AB7B:4C85FEC0D87633F7]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString(TestGeo3DPoint.java:812)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 9003 lines...]
   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testShapeQueryToString -Dtests.seed=65132136FA84AB7B 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=mk-MK 
-Dtests.timezone=Australia/LHI -Dtests.asserts=true 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 980 - Still Failing

2016-04-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/980/

1 tests failed.
FAILED:  
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
at 
__randomizedtesting.SeedInfo.seed([805A91F9CE91B97:F3FE9B3C2C72BCCB]:0)
at 
org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
at 
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 4820 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.01s J2 | TestBackwardsCompatibility.testCreateNoCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with 
-Dtests.bwcdir=/path/to/write/indexes
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestBackwardsCompatibility -Dtests.method=testUnsupportedOldIndexes 
-Dtests.seed=805A91F9CE91B97 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 506 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/506/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 13148 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/usr/jdk/instances/jdk1.8.0/jre/bin/java -XX:-UseCompressedOops 
-XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=90DB458EACC164CB -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/temp
 -Dcommon.dir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene 
-Dclover.db.dir=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/build/clover/db
 
-Djava.security.policy=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=US-ASCII -classpath 

[jira] [Commented] (LUCENE-7184) Add GeoEncodingUtils to core

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232853#comment-15232853
 ] 

ASF subversion and git services commented on LUCENE-7184:
-

Commit 455f3dd694c431d9391a910d054d6a599dff59d4 in lucene-solr's branch 
refs/heads/master from nknize
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=455f3dd ]

LUCENE-7184: update CHANGES.txt


> Add GeoEncodingUtils to core
> 
>
> Key: LUCENE-7184
> URL: https://issues.apache.org/jira/browse/LUCENE-7184
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
> Fix For: master, 6.1
>
> Attachments: LUCENE-7184.patch, LUCENE-7184.patch, LUCENE-7184.patch
>
>
> This is part 1 of LUCENE-7165. This task will add a {{GeoEncodingUtils}} 
> helper class to {{o.a.l.geo}} in the core module for reusing lat/lon encoding 
> methods. Existing encoding methods in {{LatLonPoint}} will be refactored to 
> the new helper class so that a new numerically stable Morton encoding can be 
> added that reuses the same encoding methods.
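
For a sense of what such encoding helpers do, here is a generic sketch of 
latitude quantization onto the signed 32-bit integer range; the constants and 
edge handling are illustrative, not necessarily the exact LatLonPoint code:
{code}
// Map [-90, 90] degrees onto the full signed 32-bit range, giving steps of
// 90/2^31 (about 4.2e-8 degrees). Illustrative sketch only.
static int encodeLatitude(double latitude) {
  if (latitude == 90.0) {
    latitude = Math.nextDown(latitude); // exactly +90 would overflow the int range
  }
  return (int) Math.floor(latitude / 90.0 * 0x1p31);
}

static double decodeLatitude(int encoded) {
  return encoded / 0x1p31 * 90.0; // inverse mapping, within one quantization step
}
{code}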



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232845#comment-15232845
 ] 

ASF subversion and git services commented on SOLR-8097:
---

Commit f479f16d3a8b57126560c19c57885a103360f1c3 in lucene-solr's branch 
refs/heads/branch_6x from [~anshum]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f479f16 ]

SOLR-8097: Implement builder pattern design for constructing SolrJ clients and 
deprecate direct construction of clients


> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently SolrJ clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost)
> public CloudSolrClient(String zkHost, HttpClient httpClient)
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> This is problematic when introducing additional parameters (since each new 
> parameter requires additional constructors). Instead it would be helpful to 
> provide a SolrClient builder which can either supply default values or 
> support overriding specific parameters.
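
This is the classic telescoping-constructors problem; a rough sketch of the 
builder idea with hypothetical names (not the committed SolrJ API):
{code}
import java.util.Collection;
import java.util.Collections;

// Hypothetical sketch of the proposed builder; names and defaults are
// illustrative, not the committed SolrJ API.
public class CloudClientBuilder {
  private Collection<String> zkHosts;
  private String chroot = null;              // optional
  private Object httpClient = null;          // stand-in for an HttpClient
  private boolean updatesToLeaders = true;   // a default instead of a constructor

  public CloudClientBuilder withZkHost(String zkHost) {
    this.zkHosts = Collections.singletonList(zkHost);
    return this;
  }

  public CloudClientBuilder withZkHosts(Collection<String> zkHosts, String chroot) {
    this.zkHosts = zkHosts;
    this.chroot = chroot;
    return this;
  }

  public CloudClientBuilder withHttpClient(Object httpClient) {
    this.httpClient = httpClient;
    return this;
  }

  public CloudClientBuilder sendUpdatesOnlyToLeaders(boolean updatesToLeaders) {
    this.updatesToLeaders = updatesToLeaders;
    return this;
  }

  // build() would hand the collected settings to one private constructor, so a
  // new parameter means one new setter rather than a doubled constructor count.
}
{code}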



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7163) Refactor GeoRect and Polygon to core

2016-04-08 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize resolved LUCENE-7163.

Resolution: Fixed

> Refactor GeoRect and Polygon to core
> 
>
> Key: LUCENE-7163
> URL: https://issues.apache.org/jira/browse/LUCENE-7163
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-7163.patch
>
>
> {{o.a.l.spatial.util.GeoRect}} and {{o.a.l.spatial.util.Polygon}} are 
> reusable classes across multiple Lucene modules. It makes sense for them to 
> be moved to the {{o.a.l.geo}} package in the core module so they're exposed 
> across multiple modules.
> {{GeoRect}} should also be refactored to something more straightforward, like 
> {{Rectangle}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7195) GeoPolygon construction sometimes inexplicably chooses concave polygons when order of points is clockwise

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232820#comment-15232820
 ] 

ASF subversion and git services commented on LUCENE-7195:
-

Commit 30d612f84ed71b00ef981c24dd77d305b326a7e8 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=30d612f ]

LUCENE-7195: Clockwise/counterclockwise detection was rotating coordinates in 
the wrong direction.
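
For background, winding-order detection is conceptually the signed-area 
(shoelace) test; a planar sketch of the idea follows. Geo3D's actual check 
operates on the WGS84 ellipsoid, so this illustrates the concept only, not the 
fixed code:
{code}
// Planar shoelace test for winding order; illustration only. Treats lon as
// x and lat as y, and assumes the ring's vertices are given in order.
static boolean isClockwise(double[] lats, double[] lons) {
  double sum = 0.0;
  for (int i = 0; i < lats.length; i++) {
    int j = (i + 1) % lats.length;  // next vertex, wrapping to the start
    sum += (lons[j] - lons[i]) * (lats[j] + lats[i]);
  }
  return sum > 0.0;  // positive edge sum means clockwise in a y-up frame
}
{code}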


> GeoPolygon construction sometimes inexplicably chooses concave polygons when 
> order of points is clockwise
> -
>
> Key: LUCENE-7195
> URL: https://issues.apache.org/jira/browse/LUCENE-7195
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7195.patch
>
>
> The following input generates the following polygon, which is backwards from 
> the correct sense:
> {code}
> MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
> radiusMeters=44832.90297079173 gons=10
>   polyLats=[51.20438285996, 50.89947531437482, 
> 50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
> 51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
> 51.20438285996]
>   polyLons=[0.8747711978759765, 0.6509219832137298, 0.35960265165247807, 
> 0.10290284834752167, -0.18841648321373008, -0.41226569787597667, 
> -0.18960465285650027, 0.10285893781346236, 0.35964656218653757, 
> 0.6521101528565002, 0.8747711978759765]
>  --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
> GeoCompositeMembershipShape: {[GeoConcavePolygon:
> {planetmodel=PlanetModel.WGS84, points=
> [[lat=0.899021779599662, lon=0.011381469253029434],
>  [lat=0.9005818372758149, lon=0.006277016653633617],
>  [lat=0.9005818372758149, lon=0.0017952271299490152],
>  [lat=0.899021779599662, lon=-0.003309225469446801],
>  [lat=0.8936850723587506, lon=-0.007195393820967987],
>  [lat=0.8883634317734164, lon=-0.0032884879971082164],
>  [lat=0.8867906661272461, lon=0.0017959935133446592],
>  [lat=0.8867906661272461, lon=0.006276250270237971],
>  [lat=0.8883634317734164, lon=0.011360731780690846],
>  [lat=0.8936850723587506, lon=0.015267637604550617]], internalEdges={}, 
> holes=[]}]}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7195) GeoPolygon construction sometimes inexplicably chooses concave polygons when order of points is clockwise

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232807#comment-15232807
 ] 

ASF subversion and git services commented on LUCENE-7195:
-

Commit 4c4730484d27d49223ce1841ed022f1d7550fbc1 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c47304 ]

LUCENE-7195: Clockwise/counterclockwise detection was rotating coordinates in 
the wrong direction.


> GeoPolygon construction sometimes inexplicably chooses concave polygons when 
> order of points is clockwise
> -
>
> Key: LUCENE-7195
> URL: https://issues.apache.org/jira/browse/LUCENE-7195
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7195.patch
>
>
> The following input generates the following polygon, which is backwards from 
> the correct sense:
> {code}
> MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
> radiusMeters=44832.90297079173 gons=10
>   polyLats=[51.20438285996, 50.89947531437482, 
> 50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
> 51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
> 51.20438285996]
>   polyLons=[0.8747711978759765, 0.6509219832137298, 0.35960265165247807, 
> 0.10290284834752167, -0.18841648321373008, -0.41226569787597667, 
> -0.18960465285650027, 0.10285893781346236, 0.35964656218653757, 
> 0.6521101528565002, 0.8747711978759765]
>  --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
> GeoCompositeMembershipShape: {[GeoConcavePolygon:
> {planetmodel=PlanetModel.WGS84, points=
> [[lat=0.899021779599662, lon=0.011381469253029434],
>  [lat=0.9005818372758149, lon=0.006277016653633617],
>  [lat=0.9005818372758149, lon=0.0017952271299490152],
>  [lat=0.899021779599662, lon=-0.003309225469446801],
>  [lat=0.8936850723587506, lon=-0.007195393820967987],
>  [lat=0.8883634317734164, lon=-0.0032884879971082164],
>  [lat=0.8867906661272461, lon=0.0017959935133446592],
>  [lat=0.8867906661272461, lon=0.006276250270237971],
>  [lat=0.8883634317734164, lon=0.011360731780690846],
>  [lat=0.8936850723587506, lon=0.015267637604550617]], internalEdges={}, 
> holes=[]}]}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7195) GeoPolygon construction sometimes inexplicably chooses concave polygons when order of points is clockwise

2016-04-08 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-7195.
-
   Resolution: Fixed
Fix Version/s: 6.x
   master

> GeoPolygon construction sometimes inexplicably chooses concave polygons when 
> order of points is clockwise
> -
>
> Key: LUCENE-7195
> URL: https://issues.apache.org/jira/browse/LUCENE-7195
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
> Attachments: LUCENE-7195.patch
>
>
> The following input generates the following polygon, which is backwards from 
> the correct sense:
> {code}
> MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
> radiusMeters=44832.90297079173 gons=10
>   polyLats=[51.20438285996, 50.89947531437482, 
> 50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
> 51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
> 51.20438285996]
>   polyLons=[0.8747711978759765, 0.6509219832137298, 0.35960265165247807, 
> 0.10290284834752167, -0.18841648321373008, -0.41226569787597667, 
> -0.18960465285650027, 0.10285893781346236, 0.35964656218653757, 
> 0.6521101528565002, 0.8747711978759765]
>  --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
> GeoCompositeMembershipShape: {[GeoConcavePolygon:
> {planetmodel=PlanetModel.WGS84, points=
> [[lat=0.899021779599662, lon=0.011381469253029434],
>  [lat=0.9005818372758149, lon=0.006277016653633617],
>  [lat=0.9005818372758149, lon=0.0017952271299490152],
>  [lat=0.899021779599662, lon=-0.003309225469446801],
>  [lat=0.8936850723587506, lon=-0.007195393820967987],
>  [lat=0.8883634317734164, lon=-0.0032884879971082164],
>  [lat=0.8867906661272461, lon=0.0017959935133446592],
>  [lat=0.8867906661272461, lon=0.006276250270237971],
>  [lat=0.8883634317734164, lon=0.011360731780690846],
>  [lat=0.8936850723587506, lon=0.015267637604550617]], internalEdges={}, 
> holes=[]}]}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7181) JapaneseTokenizer: Validate segmentation of User Dictionary entries on creation

2016-04-08 Thread Christian Moen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Moen reassigned LUCENE-7181:
--

Assignee: Christian Moen

> JapaneseTokenizer: Validate segmentation of User Dictionary entries on 
> creation
> ---
>
> Key: LUCENE-7181
> URL: https://issues.apache.org/jira/browse/LUCENE-7181
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>Assignee: Christian Moen
> Attachments: LUCENE-7181.patch
>
>
> From the [conversation on the dev 
> list|http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3CCAMJgJxR8gLnXi7WXkN3KFfxHu=posevxxarbbg+chce1tzh...@mail.gmail.com%3E]
> The user dictionary in the {{JapaneseTokenizer}} allows users to customize 
> how a stream is broken into tokens, using a specific set of rules provided 
> like: 
> AABBBCC -> AA BBB CC
> It does not allow users to change any of the token characters, like:
> (1) AABBBCC -> DD BBB CC   (this will just tokenize to "AA", "BBB", "CC"; it 
> seems to only care about positions) 
> It also doesn't let a character be part of more than one token, like:
> (2) AABBBCC -> AAB BBB BCC (this will throw an AIOOBE)
> ...or make the output token bigger than the input text: 
> (3) AA -> AAA (also an AIOOBE)
> Currently there is no validation for those cases: case 1 doesn't fail but 
> produces unexpected tokens, while cases 2 and 3 fail when the input text is 
> analyzed. We should add validation at {{UserDictionary}} creation.
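
A rough sketch of the kind of creation-time check being asked for, with 
illustrative names (not the actual UserDictionary code): the segments, 
concatenated, must re-spell the surface form exactly, which rejects cases (1) 
through (3) up front instead of failing later with an AIOOBE:
{code}
// Illustrative validation sketch: reject any rule whose segments do not
// exactly re-spell the surface form.
static void validateEntry(String surface, String[] segments) {
  StringBuilder joined = new StringBuilder();
  for (String segment : segments) {
    joined.append(segment);
  }
  if (!joined.toString().equals(surface)) {
    throw new IllegalArgumentException(
        "Illegal user dictionary entry '" + surface + "': segmentation '"
        + String.join(" ", segments) + "' does not re-spell the surface form");
  }
}
{code}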



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232782#comment-15232782
 ] 

Dennis Gove edited comment on SOLR-8925 at 4/8/16 7:50 PM:
---

I like this. Just a couple of questions.

1. What does this do with duplicate nodes? I.e., overlapping friend networks. 
Will it prune those out, show the node twice, or mark a node as having multiple 
sources?

2. When using the scatter parameter will the nodes be marked with which group 
they fall into? What if a node falls into multiple groups (kinda related to #1 
above)?

3. Will a node include information about its source, i.e. why it's included in 
the graph?

4. If gatherNodes is doing a 'join' between friends and articles I'd expect the 
tuple to be a join of the tuple found in articles and the tuple found in 
friends. But if "the inner gatherNodes() expression then emits the friend 
Tuples", I believe this is more of an intersect. I.e., give me tuples in 
friends which also appear in articles, using the author->user equalitor. Though 
I guess it would be returning tuples from both the left and right streams, 
whereas a standard intersect only returns tuples from the left stream. That 
said, it's not joining those tuples together.

5. What could one do if they wished to build a graph using a subset of the data 
in the friends collection? Can they apply a filter on friends as part of the 
gatherNodes function? Perhaps they could be allowed to add fq filters.


was (Author: dpgove):
I like this. Just a couple of questions.

1. What does this do with duplicate nodes? I.e., overlapping friend networks. 
Will it prune those out, show the node twice, or mark a node as having multiple 
sources?

2. When using the scatter parameter will the nodes be marked with which group 
they fall into? What if a node falls into multiple groups (kinda related to #1 
above)?

3. Will a node include information about its source, i.e. why it's included in 
the graph?

4. If gatherNodes is doing a 'join' between friends and articles I'd expect the 
tuple to be a join of the tuple found in articles and the tuple found in 
friends. But if "the inner gatherNodes() expression then emits the friend 
Tuples", I believe this is more of an intersect. I.e., give me tuples in 
friends which also appear in articles, using the author->user equalitor. 

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>
> The gatherNodes Streaming Expression is a flexible, general-purpose 
> breadth-first graph traversal. It uses the same parallel join under the 
> covers as (SOLR-) but is much more generalized and can be used for a wide 
> range of use cases.
> Sample syntax:
> {code}
> gatherNodes(friends,
>             gatherNodes(friends,
>                         search(articles, q="body:(query 1)", fl="author"),
>                         walk="author->user",
>                         gather="friend"),
>             walk="friend->user",
>             gather="friend",
>             scatter="roots, branches, leaves")
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the 
> value from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves, which in this case 
> are the *friend* Tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the *friends* collection, this time performing the join against the 
> *friend* Tuples emitted in step 3. This collects the friends of friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the authors' friends, and 
> the *leaves* are the friends of friends.
> This traversal is fully distributed and cross-collection.
> Like all streaming expressions, the gatherNodes expression can be combined 
> with other streaming expressions. For example the following 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+112-patched) - Build # 16460 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16460/
Java: 64bit/jdk-9-ea+112-patched -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes

Error Message:
Cannot find resource: unsupported.6.0.0-cfs.zip

Stack Trace:
java.io.IOException: Cannot find resource: unsupported.6.0.0-cfs.zip
at 
__randomizedtesting.SeedInfo.seed([EE2C3E8479D1738A:15D70CA7C94AD4D6]:0)
at 
org.apache.lucene.util.LuceneTestCase.getDataInputStream(LuceneTestCase.java:1940)
at 
org.apache.lucene.index.TestBackwardsCompatibility.testUnsupportedOldIndexes(TestBackwardsCompatibility.java:515)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 4262 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBackwardsCompatibility
   [junit4] IGNOR/A 0.03s J2 | TestBackwardsCompatibility.testCreateCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with 
-Dtests.bwcdir=/path/to/write/indexes
   [junit4] IGNOR/A 0.00s J2 | TestBackwardsCompatibility.testCreateNoCFS
   [junit4]> Assumption #1: backcompat creation tests must be run with 

[jira] [Commented] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232800#comment-15232800
 ] 

Dennis Gove commented on SOLR-8925:
---

The order in the walk parameter might be confusing. 
{code}
walk="author->user",
{code}
In other expressions where we're checking equality between two streams we use a 
standard of firstStreamField = secondStreamField. In gatherNodes, the field on 
the right appears to go with the first stream while the field on the left goes 
with the second stream. I'm not suggesting I don't like the author->user 
structure, because I do, but perhaps the use of collection as the first param 
might lead to confusion.

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>
> The gatherNodes Streaming Expression is a flexible, general-purpose 
> breadth-first graph traversal. It uses the same parallel join under the 
> covers as (SOLR-) but is much more generalized and can be used for a wide 
> range of use cases.
> Sample syntax:
> {code}
> gatherNodes(friends,
>             gatherNodes(friends,
>                         search(articles, q="body:(query 1)", fl="author"),
>                         walk="author->user",
>                         gather="friend"),
>             walk="friend->user",
>             gather="friend",
>             scatter="roots, branches, leaves")
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the 
> value from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves, which in this case 
> are the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the "friends" collection, this time performing the join with the 
> *friend* Tuples emitted in step 3. This collects the friends of friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the authors' friends, and 
> the *leaves* are the friends of friends.
> This traversal is fully distributed and cross collection.
> Like all streaming expressions, the gatherNodes expression can be combined 
> with other streaming expressions. For example, the following expression uses 
> a hashJoin to intersect the networks of friends rooted at authors found with 
> different queries:
> {code}
> hashInnerJoin(
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryA)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryB)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   on="friend"
> )
> {code}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7181) JapaneseTokenizer: Validate segmentation of User Dictionary entries on creation

2016-04-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232686#comment-15232686
 ] 

Tomás Fernández Löbbe commented on LUCENE-7181:
---

[~cm], any thoughts on the patch?

> JapaneseTokenizer: Validate segmentation of User Dictionary entries on 
> creation
> ---
>
> Key: LUCENE-7181
> URL: https://issues.apache.org/jira/browse/LUCENE-7181
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
> Attachments: LUCENE-7181.patch
>
>
> From the [conversation on the dev 
> list|http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3CCAMJgJxR8gLnXi7WXkN3KFfxHu=posevxxarbbg+chce1tzh...@mail.gmail.com%3E]
> The user dictionary in the {{JapaneseTokenizer}} allows users to customize 
> how a stream is broken into tokens using a specific set of rules provided 
> like: 
> AABBBCC -> AA BBB CC
> It does not allow users to change any of the token characters, like:
> (1) AABBBCC -> DD BBB CC (this will just tokenize to "AA", "BBB", "CC"; it 
> seems to only care about positions)
> It also doesn't let a character be part of more than one token, like:
> (2) AABBBCC -> AAB BBB BCC (this will throw an AIOOBE)
> ...or make the output token bigger than the input text:
> (3) AA -> AAA (also AIOOBE)
> Currently there is no validation for those cases: case 1 doesn't fail but 
> provides unexpected tokens, while cases 2 and 3 fail when the input text is 
> analyzed. We should add validation to the {{UserDictionary}} creation.
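
For reference, a minimal sketch of how such an entry reaches the tokenizer (API 
shapes from the lucene-analyzers-kuromoji module; the entry itself is the 
hypothetical case (2) above, so the failure only appears at analysis time):
{code}
import java.io.StringReader;

import org.apache.lucene.analysis.ja.JapaneseTokenizer;
import org.apache.lucene.analysis.ja.JapaneseTokenizer.Mode;
import org.apache.lucene.analysis.ja.dict.UserDictionary;

public class OverlappingUserDictEntry {
  public static void main(String[] args) throws Exception {
    // CSV format: surface, space-separated segmentation, readings, part-of-speech.
    // The segmentation "AAB BBB BCC" reuses characters of "AABBBCC", so it is
    // accepted here but later fails with an ArrayIndexOutOfBoundsException.
    String entry = "AABBBCC,AAB BBB BCC,AAB BBB BCC,custom";
    UserDictionary dict = UserDictionary.open(new StringReader(entry));

    JapaneseTokenizer tok = new JapaneseTokenizer(dict, false, Mode.NORMAL);
    tok.setReader(new StringReader("AABBBCC"));
    tok.reset();
    while (tok.incrementToken()) {
      // AIOOBE expected before any token is produced
    }
    tok.close();
  }
}
{code}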



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8961) TestMiniSolrCloudCluster should move into test-framework

2016-04-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-8961:
--

 Summary: TestMiniSolrCloudCluster should move into test-framework
 Key: SOLR-8961
 URL: https://issues.apache.org/jira/browse/SOLR-8961
 Project: Solr
  Issue Type: Test
Reporter: Hoss Man
Assignee: Hoss Man


* MiniSolrCloudCluster was designed to be a "cloud helper class" for writing 
cloud based tests.
* TestMiniSolrCloudCluster was designed to be a "test the cloud helper class" 
type of test, verifying that MiniSolrCloudCluster behaves in the 
documented/expected ways, so people can be confident in writing tests using 
it.

But because TestMiniSolrCloudCluster currently lives in the solr-core test 
package, it's easy to confuse it for a "test solr using a cloud helper class" 
test that people might try adding tests of core solr functionality to (see 
SOLR-8959).

We should move this test so it's actually part of the test-framework.
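
A minimal sketch (class names from the Solr 6.x test-framework; the scratch 
directory is hypothetical) of the "cloud helper class" usage in question:
{code}
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.solr.client.solrj.embedded.JettyConfig;
import org.apache.solr.cloud.MiniSolrCloudCluster;

public class ExampleCloudUsage {
  public static void main(String[] args) throws Exception {
    Path baseDir = Paths.get("build/minicluster"); // hypothetical scratch dir

    // Start a 2-node SolrCloud cluster with an embedded ZooKeeper.
    MiniSolrCloudCluster cluster =
        new MiniSolrCloudCluster(2, baseDir, JettyConfig.builder().build());
    try {
      // Tests interact with the cluster through its CloudSolrClient.
      System.out.println("ZK: " + cluster.getSolrClient().getZkHost());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}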



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7163) Refactor GeoRect and Polygon to core

2016-04-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232565#comment-15232565
 ] 

Michael McCandless commented on LUCENE-7163:


Can this be closed now [~nknize]?

> Refactor GeoRect and Polygon to core
> 
>
> Key: LUCENE-7163
> URL: https://issues.apache.org/jira/browse/LUCENE-7163
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-7163.patch
>
>
> {{o.a.l.spatial.util.GeoRect}} and {{o.a.l.spatial.util.Polygon}} are 
> reusable classes across multiple lucene modules. It makes sense for them to 
> be moved to the {{o.a.l.geo}} package in the core module so they're exposed 
> across multiple modules.
> {{GeoRect}} should also be refactored to something more straightforward, like 
> {{Rectangle}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8960) solr's test-framework should have a "src/test" directory as a place to put "test the test helper code" style tests

2016-04-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-8960:
--

 Summary: solr's test-framework should have a "src/test" directory 
as a place to put "test the test helper code" style tests
 Key: SOLR-8960
 URL: https://issues.apache.org/jira/browse/SOLR-8960
 Project: Solr
  Issue Type: Test
Reporter: Hoss Man
Assignee: Hoss Man


Currently there is a smattering of test classes in solr-core that exist not to 
test solr itself, but to test that the base classes and helper code in 
test-framework behave as designed.

We should create a solr/test-framework/src/test directory for these to live in, 
just like the lucene/test-framework/src/test directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7159) improve spatial point/rect vs. polygon performance

2016-04-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232555#comment-15232555
 ] 

Michael McCandless commented on LUCENE-7159:


Can this be closed now @rcmuir?

> improve spatial point/rect vs. polygon performance
> --
>
> Key: LUCENE-7159
> URL: https://issues.apache.org/jira/browse/LUCENE-7159
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7159.patch, LUCENE-7159.patch
>
>
> Now that we can query on complex polygons without going OOM (LUCENE-7153), we 
> should do something to address the current performance.
> Currently, we use a basic crossings test ({{O(n)}}) for boundary cases. We 
> defer these expensive per-doc checks on boundary cases to a two phase 
> iterator (LUCENE-7019, LUCENE-7109), so that they can be avoided if e.g. 
> excluded by filters, conjunctions, deleted docs, and so on. This is currently 
> important for performance, but basically it's shoving the problem under the 
> rug and hoping it goes away. At least for point in poly, there are a number 
> of faster techniques described here: 
> http://erich.realtimerendering.com/ptinpoly/
> Additionally, I am not sure how costly our "tree traversal" (rectangle 
> intersection) algorithms are. Maybe it's nothing to be worried about, but 
> likely they too get bad if the thing gets complex enough. These don't need to 
> be perfect, but they need to behave like java's Shape#contains (can 
> conservatively return false) and Shape#intersects (can conservatively return 
> true). Of course, if they are too inaccurate, then things can get slower.
> In cases of precomputed structures we should also consider memory usage; e.g. 
> we shouldn't make a horrible tradeoff there.
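
For readers unfamiliar with the O(n) test being discussed, here is a minimal 
sketch of the classic even-odd crossings algorithm (a standalone illustration, 
not Lucene's implementation): count how many polygon edges a horizontal ray 
from the point crosses; an odd count means the point is inside.
{code}
public class CrossingsTest {
  /** polyLats/polyLons hold the polygon vertices, in order. */
  static boolean pointInPolygon(double[] polyLats, double[] polyLons,
                                double lat, double lon) {
    boolean inside = false;
    for (int i = 0, j = polyLats.length - 1; i < polyLats.length; j = i++) {
      // Does edge (j, i) straddle the ray's latitude, and does the ray
      // cross it to the east of the query point?
      if ((polyLats[i] > lat) != (polyLats[j] > lat)
          && lon < (polyLons[j] - polyLons[i]) * (lat - polyLats[i])
                   / (polyLats[j] - polyLats[i]) + polyLons[i]) {
        inside = !inside; // each crossing toggles inside/outside
      }
    }
    return inside;
  }
}
{code}
Every edge is visited once, hence the O(n) per-point cost the comment wants to 
improve on.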



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8959) TestMiniSolrCloudCluster.testSegmentTerminateEarly should be refactored into its own SolrCloudTestCase subclass

2016-04-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-8959:
--

 Summary: TestMiniSolrCloudCluster.testSegmentTerminateEarly should 
be refactored into its own SolrCloudTestCase subclass
 Key: SOLR-8959
 URL: https://issues.apache.org/jira/browse/SOLR-8959
 Project: Solr
  Issue Type: Test
Reporter: Hoss Man


The functionality tested in testSegmentTerminateEarly (and the helper code in 
"SegmentTerminateEarlyTestState") really belongs in its own test class, 
which can subclass SolrCloudTestCase, as sketched below.

(I suspect the only reason this wasn't done initially in SOLR-5730 is that the 
patch may predate the creation of SolrCloudTestCase.)
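
A minimal sketch (assumed against the Solr 6.x test-framework; the class 
skeleton is hypothetical) of what the refactored test could look like:
{code}
import org.apache.solr.cloud.SolrCloudTestCase;
import org.junit.BeforeClass;
import org.junit.Test;

public class TestSegmentTerminateEarly extends SolrCloudTestCase {

  @BeforeClass
  public static void setupCluster() throws Exception {
    // Spin up a small SolrCloud cluster once for the whole class.
    configureCluster(2).configure();
  }

  @Test
  public void testSegmentTerminateEarly() throws Exception {
    // Body moved from TestMiniSolrCloudCluster.testSegmentTerminateEarly,
    // still driving SegmentTerminateEarlyTestState as before.
  }
}
{code}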



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the authors' friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at authors found with 
different queries:

{code}
hashInnerJoin(
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryA)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryB)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  on="friend"
)
{code}
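
For context, an expression like this is typically sent to a collection's 
/stream handler. A minimal SolrJ sketch (signatures from the 6.x streaming API; 
host, collection, and the emitted "node" field are assumptions):
{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;

public class GatherNodesClient {
  public static void main(String[] args) throws Exception {
    Map<String, String> params = new HashMap<>();
    params.put("qt", "/stream"); // route the request to the stream handler
    params.put("expr",
        "gatherNodes(friends,"
      + "  search(articles, q=\"body:(query 1)\", fl=\"author\"),"
      + "  walk=\"author->user\","
      + "  gather=\"friend\")");

    SolrStream stream = new SolrStream("http://localhost:8983/solr/friends", params);
    try {
      stream.open();
      Tuple tuple;
      while (!(tuple = stream.read()).EOF) {          // a marker tuple ends the stream
        System.out.println(tuple.getString("node"));  // field name assumed
      }
    } finally {
      stream.close();
    }
  }
}
{code}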




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the authors' friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at authors found with 
different queries:

{code}
hashInnerJoin(
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryA)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryB)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  on="friend"
)
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in 

[jira] [Assigned] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-8925:


Assignee: Joel Bernstein

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(
>   friends,
>   gatherNodes(
>     friends,
>     search(articles, q="body:(query 1)", fl="author"),
>     walk="author->user",
>     gather="friend"),
>   walk="friend->user",
>   gather="friend",
>   scatter="roots, branches, leaves"
> )
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the 
> value from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves, which in this case 
> are the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the "friends" collection, this time performing the join with the 
> *friend* Tuples emitted in step 3. This collects the friends of friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the authors' friends, and 
> the *leaves* are the friends of friends.
> This traversal is fully distributed and cross collection.
> Like all streaming expressions, the gatherNodes expression can be combined 
> with other streaming expressions. For example, the following expression uses 
> a hashJoin to intersect the networks of friends rooted at authors found with 
> different queries:
> {code}
> hashInnerJoin(
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryA)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryB)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   on="friend"
> )
> {code}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the authors' friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at authors found with 
different queries:

{code}
hashInnerJoin(
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryA)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryB)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  on="friend"
)
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in 

[jira] [Resolved] (SOLR-4509) Move to non deprecated HttpClient impl classes to remove stale connection check on every request and move connection lifecycle management towards the client.

2016-04-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4509.
---
Resolution: Fixed

> Move to non deprecated HttpClient impl classes to remove stale connection 
> check on every request and move connection lifecycle management towards the 
> client.
> -
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master
>
> Attachments: 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> IsStaleTime.java, SOLR-4509-4_4_0.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and a latency reduction of over 100ms. This patch was 
> made in the context of a project I'm leading, called Yokozuna, which relies 
> on distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/
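
For context, a minimal sketch (HttpClient 4.4+ API; the timeout value is 
illustrative) of the non-deprecated way to drop the per-request stale check: 
idle connections are validated only after a quiet period instead of before 
every request.
{code}
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class NoStaleCheckClient {
  public static CloseableHttpClient build() {
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    // Replaces the deprecated stale-connection check: connections idle for
    // less than 2s are reused without a validation round trip.
    cm.setValidateAfterInactivity(2000);
    return HttpClients.custom().setConnectionManager(cm).build();
  }
}
{code}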



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Fix Version/s: 6.1

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Fix For: 6.1
>
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(
>   friends,
>   gatherNodes(
>     friends,
>     search(articles, q="body:(query 1)", fl="author"),
>     walk="author->user",
>     gather="friend"),
>   walk="friend->user",
>   gather="friend",
>   scatter="roots, branches, leaves"
> )
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the 
> value from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves, which in this case 
> are the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the "friends" collection, this time performing the join with the 
> *friend* Tuples emitted in step 3. This collects the friends of friends.
> 5) The outer gatherNodes() expression emits the entire graph that was 
> collected. This is controlled by the "scatter" parameter. In the example the 
> *root* nodes are the authors, the *branches* are the authors' friends, and 
> the *leaves* are the friends of friends.
> This traversal is fully distributed and cross collection.
> Like all streaming expressions, the gatherNodes expression can be combined 
> with other streaming expressions. For example, the following expression uses 
> a hashJoin to intersect the networks of friends rooted at different queries:
> {code}
> hashInnerJoin(
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryA)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   gatherNodes(friends,
>     gatherNodes(friends,
>       search(articles, q="body:(queryB)", fl="author"),
>       walk="author->user",
>       gather="friend"),
>     walk="friend->user",
>     gather="friend",
>     scatter="branches, leaves"),
>   on="friend"
> )
> {code}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the authors' friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at different queries:

{code}
hashInnerJoin(
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryA)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryB)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  on="friend"
)
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the authors' friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.

Like all streaming expressions, the gatherNodes expression can be combined with 
other streaming expressions. For example, the following expression uses a 
hashJoin to intersect the networks of friends rooted at different queries:

{code}
hashInnerJoin(
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryA)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  gatherNodes(friends,
    gatherNodes(friends,
      search(articles, q="body:(queryB)", fl="author"),
      walk="author->user",
      gather="friend"),
    walk="friend->user",
    gather="friend",
    scatter="branches, leaves"),
  on="friend"
)
{code}




  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
*root* nodes are the authors, the *branches* are the authors' friends, and the 
*leaves* are the friends of friends.

This traversal is fully distributed and cross collection.


  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
root nodes are the *authors*, the *branches* are the authors' friends, and the 
leaves are the friends of friends.

This traversal is fully distributed and cross collection.


  



> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(
>   friends,
>   gatherNodes(
>     friends,
>     search(articles, 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
root nodes are the *authors*, the *branches* are the authors' friends, and the 
leaves are the friends of friends.

This traversal is fully distributed and cross collection.


  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
root nodes are the *authors*, the *branches* are the authors' friends, and the 
leaves are the friends of friends.

This traversal is fully distributed and cross collection.


  



> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(
>   friends,
>   gatherNodes(
>     friends,
>     search(articles, 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
root nodes are the *authors*, the *branches* are the authors' friends, and the 
leaves are the friends of friends.

This traversal is fully distributed and cross collection.


  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
root nodes are the *authors*, the *branches* are the authors' friends, and the 
leaves are the friends of friends.

This traversal is fully distributed and cross collection.


  



> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(
>   friends,
>   gatherNodes(
>     friends,
>     search(articles, 

[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    walk="author->user",
    gather="friend"),
  walk="friend->user",
  gather="friend",
  scatter="roots, branches, leaves"
)
{code}


The expression above is evaluated as follows:

1) The inner search() expression is evaluated on the *articles* collection, 
emitting a Stream of Tuples with the author field populated.
2) The inner gatherNodes() expression reads the Tuples from the search() stream 
and traverses to the *friends* collection by performing a distributed join 
between the articles.author and friends.user fields. It gathers the value from 
the *friend* field during the join.
3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
default the gatherNodes function emits only the leaves, which in this case are 
the *friend* tuples.
4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
again in the "friends" collection, this time performing the join with the 
*friend* Tuples emitted in step 3. This collects the friends of friends.
5) The outer gatherNodes() expression emits the entire graph that was 
collected. This is controlled by the "scatter" parameter. In the example the 
root nodes are the *authors*, the *branches* are the authors' friends, and the 
leaves are the friends of friends.

This traversal is fully distributed and cross collection.


  


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fl="author"),
    from="author",
    walk="user->friend"),
  from="friend",
  walk="user->friend",
  scatter="branches, leaves"
)
{code}



> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(
>   friends,
>   gatherNodes(
>     friends,
>     search(articles, q="body:(query 1)", fl="author"),
>     walk="author->user",
>     gather="friend"),
>   walk="friend->user",
>   gather="friend",
>   scatter="roots, branches, leaves"
> )
> {code}
> The expression above is evaluated as follows:
> 1) The inner search() expression is evaluated on the *articles* collection, 
> emitting a Stream of Tuples with the author field populated.
> 2) The inner gatherNodes() expression reads the Tuples from the search() 
> stream and traverses to the *friends* collection by performing a distributed 
> join between the articles.author and friends.user fields. It gathers the 
> value from the *friend* field during the join.
> 3) The inner gatherNodes() expression then emits the *friend* Tuples. By 
> default the gatherNodes function emits only the leaves, which in this case 
> are the *friend* tuples.
> 4) The outer gatherNodes() expression reads the *friend* Tuples and traverses 
> again in the "friends" collection, this time performing the join with the 
> *friend* Tuples emitted in step 3. This collects the friend of 

[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2016-04-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232476#comment-15232476
 ] 

Gérald Quaire commented on SOLR-6246:
-

Hello,

I hit this issue in my project: I need to reload the Solr core after modifying 
the configuration of the suggester via SolrJ. My suggester uses an 
AnalyzingInfixSuggester as the lookup algorithm, and every reload command 
raises the "LockObtainFailedException". 
To avoid this problem, I have overridden the SuggestComponent and the 
SolrSuggester classes in order to introduce a static map that stores the 
suggesters already created for the current core and the current component. My 
SolrSuggester now implements the Closeable interface so that the close method 
of the lookup object gets called. 
So when the core is reloading, the SuggestComponent first gets the suggesters 
previously created by this core and closes them all. Only then does it create 
the new suggester instances. Here is an excerpt of the code in the 
SuggestComponent:

protected static Map<String, Map<String, SolrSuggester>> coreSuggesters =
    new ConcurrentHashMap<>();
...
@Override
public void inform(SolrCore core) {
  if (initParams != null) {
    LOG.info("Initializing SuggestComponent");

    // Close the suggesters left over from the previous incarnation of this
    // core/component pair so their writers release the index lock.
    coreSuggesters.computeIfPresent(core.getName() + this.getName(), (key, map) -> {
      if (map != null) {
        for (SolrSuggester suggest : map.values()) {
          try {
            suggest.close();
          } catch (IOException e) {
            LOG.warn("Could not close the suggester.", e);
          }
        }
        map.clear();
      }
      return null; // returning null removes the entry
    });

    // Initialize the new suggesters here
    ...
    coreSuggesters.putIfAbsent(core.getName() + this.getName(), suggesters);
    core.addCloseHook(new CloseHook() {
      @Override
      public void preClose(SolrCore core) {
        // Same cleanup when the core itself is closed; internalName holds
        // the component name captured for use inside the hook.
        coreSuggesters.computeIfPresent(core.getName() + internalName, (key, map) -> {
          if (map != null) {
            for (SolrSuggester suggest : map.values()) {
              try {
                suggest.close();
              } catch (IOException e) {
                LOG.warn("Could not close the suggester.", e);
              }
            }
            map.clear();
          }
          return null;
        });
      }

      @Override
      public void postClose(SolrCore core) {
        // nothing to do after close
      }
    });
  }
} // end of the inform method

It was painful to do this overriding because the SuggestComponent and 
SolrSuggester classes are not written to be extended. 
This code has fixed my issue for now. I don't know whether it is a clean 
solution (I don't think so), but it seems to work. I hope this trick will be 
helpful. 

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246.patch
>
>
> LUCENE-5477 added near-real-time suggester building to 
> AnalyzingInfixSuggester. One of the changes that went in was that a writer is 
> now persisted to support real-time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When it tries to create a new writer on the same Directory, a 
> lock cannot be obtained and Solr fails to reload the core.
> Also, when AnalyzingInfixLookupFactory throws a RuntimeException, we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?
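
The write-lock failure described above can be illustrated with plain Lucene, 
independent of Solr. A minimal sketch, assuming a placeholder index path; the 
second writer stands in for the suggester created by the reloaded core:

{code}
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class WriterLockDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder path standing in for the suggester's index directory.
    Directory dir = FSDirectory.open(Paths.get("/tmp/suggester-index"));

    // The original core's suggester holds the write lock via its writer.
    IndexWriter first =
        new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

    // The reloaded core's new suggester tries to open a second writer on the
    // same Directory: this line throws
    // org.apache.lucene.store.LockObtainFailedException.
    IndexWriter second =
        new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
  }
}
{code}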



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(
  friends,
  gatherNodes(
    friends,
    search(articles, q="body:(query 1)", fi="author"),
    from="author",
    walk="user->friend"),
  from="friend",
  walk="user->friend",
  scatter="branches, leaves"
)
{code}


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(friends,
            gatherNodes(friends,
                        search(articles, q="body:(query 1)", fi="author"),
                        from="author",
                        walk="user->friend"),
            from="friend",
            walk="user->friend",
            scatter="branches, leaves")
{code}



> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(
>   friends,
>   gatherNodes(
>     friends,
>     search(articles, q="body:(query 1)", fi="author"),
>     from="author",
>     walk="user->friend"),
>   from="friend",
>   walk="user->friend",
>   scatter="branches, leaves"
> )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers as 
(SOLR-) but is much more generalized and can be used for a wide range of 
use cases.

Sample syntax:

{code}
gatherNodes(friends,
            gatherNodes(friends,
                        search(articles, q="body:(query 1)", fi="author"),
                        from="author",
                        walk="user->friend"),
            from="friend",
            walk="user->friend",
            scatter="branches, leaves")
{code}


  was:
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers to 
perform the dis

Sample syntax:

{code}
gatherNodes(friends,
            gatherNodes(friends,
                        search(articles, q="body:(query 1)", fi="author"),
                        from="author",
                        walk="user->friend"),
            from="friend",
            walk="user->friend",
            scatter="branches, leaves")
{code}



> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers as 
> (SOLR-) but is much more generalized and can be used for a wide range of 
> use cases.
> Sample syntax:
> {code}
> gatherNodes(friends,
>             gatherNodes(friends,
>                         search(articles, q="body:(query 1)", fi="author"),
>                         from="author",
>                         walk="user->friend"),
>             from="friend",
>             walk="user->friend",
>             scatter="branches, leaves")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: 
The gatherNodes Streaming Expression is a flexible general purpose breadth 
first graph traversal. It uses the same parallel join under the covers to 
perform the dis

Sample syntax:

{code}
gatherNodes(friends,
            gatherNodes(friends,
                        search(articles, q="body:(query 1)", fi="author"),
                        from="author",
                        walk="user->friend"),
            from="friend",
            walk="user->friend",
            scatter="branches, leaves")
{code}


  was: As SOLR- is close to wrapping up, we can use the same parallel join 
approach to create a VerticesStream. The VerticesStream can be wrapped by other 
Streams to perform operations over the stream and will also provide basic 
vertex iteration capabilities for a Tinkerpop/Gremlin implementation.


> Add gatherNodes Streaming Expression to breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers to 
> perform the dis
> Sample syntax:
> {code}
> gatherNodes(friends,
>             gatherNodes(friends,
>                         search(articles, q="body:(query 1)", fi="author"),
>                         from="author",
>                         walk="user->friend"),
>             from="friend",
>             walk="user->friend",
>             scatter="branches, leaves")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to support breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Summary: Add gatherNodes Streaming Expression to support breadth first 
traversals  (was: Add gatherNodes Streaming Expression to breadth first 
traversals)

> Add gatherNodes Streaming Expression to support breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> The gatherNodes Streaming Expression is a flexible general purpose breadth 
> first graph traversal. It uses the same parallel join under the covers to 
> perform the dis
> Sample syntax:
> {code}
> gatherNodes(friends,
>             gatherNodes(friends,
>                         search(articles, q="body:(query 1)", fi="author"),
>                         from="author",
>                         walk="user->friend"),
>             from="friend",
>             walk="user->friend",
>             scatter="branches, leaves")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add gatherNodes Streaming Expression to breadth first traversals

2016-04-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Summary: Add gatherNodes Streaming Expression to breadth first traversals  
(was: Add VerticesStream to support vertex iteration)

> Add gatherNodes Streaming Expression to breadth first traversals
> 
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> As SOLR- is close to wrapping up, we can use the same parallel join 
> approach to create a VerticesStream. The VerticesStream can be wrapped by 
> other Streams to perform operations over the stream and will also provide 
> basic vertex iteration capabilities for a Tinkerpop/Gremlin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7729) ConcurrentUpdateSolrClient ignoring the collection parameter in some methods

2016-04-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232462#comment-15232462
 ] 

Mark Miller commented on SOLR-7729:
---

The problem is that the client is not smart about this at all. If you start it 
with a collection URL and then also pass a collection, the behavior is no good. 
This is a common way to init SolrJ clients, so that API was not very well 
thought out to begin with, IMO.

> ConcurrentUpdateSolrClient ignoring the collection parameter in some methods
> 
>
> Key: SOLR-7729
> URL: https://issues.apache.org/jira/browse/SOLR-7729
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.1
>Reporter: Jorge Luis Betancourt Gonzalez
>Assignee: Mark Miller
>  Labels: client, solrj
> Attachments: SOLR-7729-ConcurrentUpdateSolrClient-collection.patch, 
> SOLR-7729.patch
>
>
> Some of the methods in {{ConcurrentUpdateSolrClient}} accept an additional 
> {{collection}} parameter; some of these methods are {{add(String collection, 
> SolrInputDocument doc)}} and {{request(SolrRequest, String collection)}}. 
> This collection parameter is ignored in these cases but works for others, 
> like {{commit(String collection)}}.
> [~elyograg] noted that:
> {quote} 
> Looking into how an update request actually gets added to the background
> queue in ConcurrentUpdateSolrClient, it appears that the "collection"
> information is ignored before the request is added to the queue.
> {quote}
> From the source, when a commit is issued or {{UpdateParams.WAIT_SEARCHER}} 
> is set in the request params, the collection parameter is used; otherwise 
> the request {{UpdateRequest req}} is queued without any regard for the 
> collection.
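
For illustration, a minimal hypothetical sketch of the asymmetry described 
above; the URL, queue settings, collection name, and document are placeholders:

{code}
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CollectionParamDemo {
  public static void main(String[] args) throws Exception {
    // Client pointed at the Solr root URL rather than a single collection
    // (placeholder URL; queue size and thread count are arbitrary).
    ConcurrentUpdateSolrClient client =
        new ConcurrentUpdateSolrClient("http://localhost:8983/solr", 10, 2);

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");

    // Per this issue, the collection argument is dropped when the update is
    // queued, so the document does not reliably reach collection1...
    client.add("collection1", doc);

    // ...while here the collection argument is honored.
    client.commit("collection1");

    client.close();
  }
}
{code}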



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7174) Upgrade randomizedtesting to 2.3.4

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232450#comment-15232450
 ] 

ASF subversion and git services commented on LUCENE-7174:
-

Commit ebb2127cca54e49ed5c7462d11ee8dda6125e287 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ebb2127 ]

LUCENE-7174: IntelliJ config: switch JUnit library to include all jars under 
lucene/test-framework/lib/, rather than using the exact jar names, which is 
brittle, and causes trouble when people forget to update when jars are upgraded 
(like on this issue)


> Upgrade randomizedtesting to 2.3.4
> --
>
> Key: LUCENE-7174
> URL: https://issues.apache.org/jira/browse/LUCENE-7174
> Project: Lucene - Core
>  Issue Type: Test
>  Components: general/test
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: master, 6.1, 6.x
>
>
> The new version has better output of static leak detector, so you are able to 
> figure out which field caused the InaccessibleObjectException or 
> AccessControlException.
> https://github.com/randomizedtesting/randomizedtesting/releases/tag/release%2F2.3.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7174) Upgrade randomizedtesting to 2.3.4

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232451#comment-15232451
 ] 

ASF subversion and git services commented on LUCENE-7174:
-

Commit ddc02603a7fc62695ff0985b56c82e45c1fc5049 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ddc0260 ]

LUCENE-7174: IntelliJ config: remove trailing slashes on dir names to make 
IntelliJ happy


> Upgrade randomizedtesting to 2.3.4
> --
>
> Key: LUCENE-7174
> URL: https://issues.apache.org/jira/browse/LUCENE-7174
> Project: Lucene - Core
>  Issue Type: Test
>  Components: general/test
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: master, 6.1, 6.x
>
>
> The new version has better output of static leak detector, so you are able to 
> figure out which field caused the InaccessibleObjectException or 
> AccessControlException.
> https://github.com/randomizedtesting/randomizedtesting/releases/tag/release%2F2.3.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7174) Upgrade randomizedtesting to 2.3.4

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232441#comment-15232441
 ] 

ASF subversion and git services commented on LUCENE-7174:
-

Commit 0cf6c551191432f9573122d3838dc58b59776ff1 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0cf6c55 ]

LUCENE-7174: IntelliJ config: switch JUnit library to include all jars under 
lucene/test-framework/lib/, rather than using the exact jar names, which is 
brittle, and causes trouble when people forget to update when jars are upgraded 
(like on this issue)


> Upgrade randomizedtesting to 2.3.4
> --
>
> Key: LUCENE-7174
> URL: https://issues.apache.org/jira/browse/LUCENE-7174
> Project: Lucene - Core
>  Issue Type: Test
>  Components: general/test
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: master, 6.1, 6.x
>
>
> The new version has better output of static leak detector, so you are able to 
> figure out which field caused the InaccessibleObjectException or 
> AccessControlException.
> https://github.com/randomizedtesting/randomizedtesting/releases/tag/release%2F2.3.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7174) Upgrade randomizedtesting to 2.3.4

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232442#comment-15232442
 ] 

ASF subversion and git services commented on LUCENE-7174:
-

Commit 65bfc19f98b83212ced82a7efed66ec7d706050a in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=65bfc19 ]

LUCENE-7174: IntelliJ config: remove trailing slashes on dir names to make 
IntelliJ happy


> Upgrade randomizedtesting to 2.3.4
> --
>
> Key: LUCENE-7174
> URL: https://issues.apache.org/jira/browse/LUCENE-7174
> Project: Lucene - Core
>  Issue Type: Test
>  Components: general/test
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: master, 6.1, 6.x
>
>
> The new version has better output of static leak detector, so you are able to 
> figure out which field caused the InaccessibleObjectException or 
> AccessControlException.
> https://github.com/randomizedtesting/randomizedtesting/releases/tag/release%2F2.3.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7729) ConcurrentUpdateSolrClient ignoring the collection parameter in some methods

2016-04-08 Thread Nicolas Gavalda (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232406#comment-15232406
 ] 

Nicolas Gavalda commented on SOLR-7729:
---

IMHO the point of this API is to have a single SolrClient instance which can be 
used to query multiple collections. It would be bothersome to have to declare a 
new SolrClient for each new collection to query.

I had already updated the patch to trunk in the pull request I submitted two 
weeks ago; I hope the update didn't give you too much extra work :)

Once the fix is committed to master, could it be merged into 6.x and maybe into 
5.x?

> ConcurrentUpdateSolrClient ignoring the collection parameter in some methods
> 
>
> Key: SOLR-7729
> URL: https://issues.apache.org/jira/browse/SOLR-7729
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.1
>Reporter: Jorge Luis Betancourt Gonzalez
>Assignee: Mark Miller
>  Labels: client, solrj
> Attachments: SOLR-7729-ConcurrentUpdateSolrClient-collection.patch, 
> SOLR-7729.patch
>
>
> Some of the methods in {{ConcurrentUpdateSolrClient}} accept an additional 
> {{collection}} parameter; some of these methods are {{add(String collection, 
> SolrInputDocument doc)}} and {{request(SolrRequest, String collection)}}. 
> This collection parameter is ignored in these cases but works for others, 
> like {{commit(String collection)}}.
> [~elyograg] noted that:
> {quote} 
> Looking into how an update request actually gets added to the background
> queue in ConcurrentUpdateSolrClient, it appears that the "collection"
> information is ignored before the request is added to the queue.
> {quote}
> From the source, when a commit is issued or {{UpdateParams.WAIT_SEARCHER}} 
> is set in the request params, the collection parameter is used; otherwise 
> the request {{UpdateRequest req}} is queued without any regard for the 
> collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+112-patched) - Build # 371 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/371/
Java: 32bit/jdk-9-ea+112-patched -client -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString

Error Message:
expected:<...at=0.772208221547936[6], lon=0.135607475210...> but 
was:<...at=0.772208221547936[7], lon=0.135607475210...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...at=0.772208221547936[6], 
lon=0.135607475210...> but was:<...at=0.772208221547936[7], 
lon=0.135607475210...>
at 
__randomizedtesting.SeedInfo.seed([73F731F467497C50:5A61EE0245BBE4DC]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString(TestGeo3DPoint.java:802)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 8998 lines...]
   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testShapeQueryToString -Dtests.seed=73F731F467497C50 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en-SX 
-Dtests.timezone=Indian/Christmas -Dtests.asserts=true 

[jira] [Updated] (SOLR-7729) ConcurrentUpdateSolrClient ignoring the collection parameter in some methods

2016-04-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7729:
--
Attachment: SOLR-7729.patch

Here is a patch that updates this to trunk.

This is all kind of awkward because the user can easily already have the 
collection in the base URL the client uses, but what can you do. This at least 
fixes the bug.

> ConcurrentUpdateSolrClient ignoring the collection parameter in some methods
> 
>
> Key: SOLR-7729
> URL: https://issues.apache.org/jira/browse/SOLR-7729
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.1
>Reporter: Jorge Luis Betancourt Gonzalez
>Assignee: Mark Miller
>  Labels: client, solrj
> Attachments: SOLR-7729-ConcurrentUpdateSolrClient-collection.patch, 
> SOLR-7729.patch
>
>
> Some of the methods in {{ConcurrentUpdateSolrClient}} accept an additional 
> {{collection}} parameter; some of these methods are {{add(String collection, 
> SolrInputDocument doc)}} and {{request(SolrRequest, String collection)}}. 
> This collection parameter is ignored in these cases but works for others, 
> like {{commit(String collection)}}.
> [~elyograg] noted that:
> {quote} 
> Looking into how an update request actually gets added to the background
> queue in ConcurrentUpdateSolrClient, it appears that the "collection"
> information is ignored before the request is added to the queue.
> {quote}
> From the source, when a commit is issued or {{UpdateParams.WAIT_SEARCHER}} 
> is set in the request params, the collection parameter is used; otherwise 
> the request {{UpdateRequest req}} is queued without any regard for the 
> collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8955) ReplicationHandler should throttle across all requests instead of for each client

2016-04-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232298#comment-15232298
 ] 

Mark Miller commented on SOLR-8955:
---

+1

> ReplicationHandler should throttle across all requests instead of for each 
> client
> -
>
> Key: SOLR-8955
> URL: https://issues.apache.org/jira/browse/SOLR-8955
> Project: Solr
>  Issue Type: Improvement
>  Components: replication (java), SolrCloud
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-medium, newdev
> Fix For: master, 6.1
>
>
> SOLR-6485 added the ability to throttle the speed of replication, but the 
> implementation rate limits each request. So if, e.g., maxWriteMBPerSec is 1 
> and 5 slaves request full replication, then the effective transfer rate from 
> the master is 5 MB/second, which is not what is often desired.
> I propose to make the rate limit global (across all replication requests) 
> instead.
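
One way to make the limit global is to share a single rate limiter across all 
replication requests instead of creating one per request. A minimal sketch 
using Lucene's RateLimiter.SimpleRateLimiter; the class and helper below are 
illustrative, not Solr's actual implementation:

{code}
import org.apache.lucene.store.RateLimiter;

public class GlobalThrottleSketch {
  // One shared limiter for the whole handler, not one per request.
  private static final RateLimiter LIMITER =
      new RateLimiter.SimpleRateLimiter(1.0); // maxWriteMBPerSec = 1

  // Hypothetical helper invoked by every file-transfer response writer.
  // With 5 slaves pulling at once, they collectively stay near 1 MB/sec
  // instead of 1 MB/sec each.
  public static void throttle(int bytesAboutToBeWritten) {
    LIMITER.pause(bytesAboutToBeWritten);
  }
}
{code}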



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7195) GeoPolygon construction sometimes inexplicably chooses concave polygons when order of points is clockwise

2016-04-08 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-7195:

Attachment: LUCENE-7195.patch

I turned this into a test, which unfortunately does *not* fail.  Attached.


> GeoPolygon construction sometimes inexplicably chooses concave polygons when 
> order of points is clockwise
> -
>
> Key: LUCENE-7195
> URL: https://issues.apache.org/jira/browse/LUCENE-7195
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7195.patch
>
>
> The following input generates the following polygon, which is backwards from 
> the correct sense:
> {code}
> MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
> radiusMeters=44832.90297079173 gons=10
>   polyLats=[51.20438285996, 50.89947531437482, 
> 50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
> 51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
> 51.20438285996]
>   polyLons=[0.8747711978759765, 0.6509219832137298, 0.35960265165247807, 
> 0.10290284834752167, -0.18841648321373008, -0.41226569787597667, 
> -0.18960465285650027, 0.10285893781346236, 0.35964656218653757, 
> 0.6521101528565002, 0.8747711978759765]
>  --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
> GeoCompositeMembershipShape: {[GeoConcavePolygon:
> {planetmodel=PlanetModel.WGS84, points=
> [[lat=0.899021779599662, lon=0.011381469253029434],
>  [lat=0.9005818372758149, lon=0.006277016653633617],
>  [lat=0.9005818372758149, lon=0.0017952271299490152],
>  [lat=0.899021779599662, lon=-0.003309225469446801],
>  [lat=0.8936850723587506, lon=-0.007195393820967987],
>  [lat=0.8883634317734164, lon=-0.0032884879971082164],
>  [lat=0.8867906661272461, lon=0.0017959935133446592],
>  [lat=0.8867906661272461, lon=0.006276250270237971],
>  [lat=0.8883634317734164, lon=0.011360731780690846],
>  [lat=0.8936850723587506, lon=0.015267637604550617]], internalEdges={}, 
> holes=[]}]}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr:master: Add Points format to o.a.l.codec package description

2016-04-08 Thread Michael McCandless
Woops, thanks Steve!

Mike McCandless

http://blog.mikemccandless.com

On Mon, Apr 4, 2016 at 4:53 PM,  wrote:

> Repository: lucene-solr
> Updated Branches:
>   refs/heads/master 9bef6c000 -> deefaf1ad
>
>
> Add Points format to o.a.l.codec package description
>
>
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/deefaf1a
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/deefaf1a
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/deefaf1a
>
> Branch: refs/heads/master
> Commit: deefaf1ad9672b034f263a9fd492f9abeecf5061
> Parents: 9bef6c0
> Author: Steve Rowe 
> Authored: Mon Apr 4 16:52:51 2016 -0400
> Committer: Steve Rowe 
> Committed: Mon Apr 4 16:53:12 2016 -0400
>
> --
>  lucene/core/src/java/org/apache/lucene/codecs/package-info.java | 1 +
>  1 file changed, 1 insertion(+)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/deefaf1a/lucene/core/src/java/org/apache/lucene/codecs/package-info.java
> --
> diff --git
> a/lucene/core/src/java/org/apache/lucene/codecs/package-info.java
> b/lucene/core/src/java/org/apache/lucene/codecs/package-info.java
> index 28b260d..0c69886 100644
> --- a/lucene/core/src/java/org/apache/lucene/codecs/package-info.java
> +++ b/lucene/core/src/java/org/apache/lucene/codecs/package-info.java
> @@ -25,6 +25,7 @@
>   *   DocValues - see {@link
> org.apache.lucene.codecs.DocValuesFormat}
>   *   Stored fields - see {@link
> org.apache.lucene.codecs.StoredFieldsFormat}
>   *   Term vectors - see {@link
> org.apache.lucene.codecs.TermVectorsFormat}
> + *   Points - see {@link org.apache.lucene.codecs.PointsFormat}
>   *   FieldInfos - see {@link
> org.apache.lucene.codecs.FieldInfosFormat}
>   *   SegmentInfo - see {@link
> org.apache.lucene.codecs.SegmentInfoFormat}
>   *   Norms - see {@link org.apache.lucene.codecs.NormsFormat}
>
>


[jira] [Assigned] (LUCENE-7196) DataSplitter should be providing class centric doc sets in all generated indexes

2016-04-08 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili reassigned LUCENE-7196:
---

Assignee: Tommaso Teofili

> DataSplitter should be providing class centric doc sets in all generated 
> indexes
> 
>
> Key: LUCENE-7196
> URL: https://issues.apache.org/jira/browse/LUCENE-7196
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>Priority: Minor
> Fix For: 6.1
>
>
> {{DataSplitter}} currently creates 3 indexes (train/test/cv) out of an 
> _original_ index for evaluation of {{Classifiers}}; however, "class coverage" 
> in such generated indexes is not guaranteed. That means that, e.g., the 
> _training index_ could contain documents belonging to only 50% of the class 
> set, and hence classifiers may not be very effective. In order to provide 
> more consistent evaluation, the generated index should contain 
> _split-ratio * |docs in c|_ documents for each class _c_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8913) When using a shared filesystem we should store data dir and tlog dir locations in the clusterstate.

2016-04-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232284#comment-15232284
 ] 

Mark Miller commented on SOLR-8913:
---

So most of this change is just always publishing data dirs when on a shared fs, 
rather than now, where it's only done for auto add replicas (which requires a 
shared fs). Then probably adding a test. What I have been wrestling with is 
whether I also wanted to pull in the override-the-data-dirs option I currently 
need with SOLR-6237. It's part of the dance that is currently necessary for all 
the replicas to agree on a shared directory. Kind of leaning against it at the 
moment; it probably still mostly fits in SOLR-6237, but I'm still playing 
around a bit to decide.

> When using a shared filesystem we should store data dir and tlog dir 
> locations in the clusterstate.
> ---
>
> Key: SOLR-8913
> URL: https://issues.apache.org/jira/browse/SOLR-8913
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>
> Spinning this out of SOLR-6237. I'll put up an initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8913) When using a shared filesystem we should store data dir and tlog dir locations in the clusterstate.

2016-04-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8913:
-

Assignee: Mark Miller

> When using a shared filesystem we should store data dir and tlog dir 
> locations in the clusterstate.
> ---
>
> Key: SOLR-8913
> URL: https://issues.apache.org/jira/browse/SOLR-8913
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> Spinning this out of SOLR-6237. I'll put up an initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7196) DataSplitter should be providing class centric doc sets in all generated indexes

2016-04-08 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-7196:

Priority: Minor  (was: Major)

> DataSplitter should be providing class centric doc sets in all generated 
> indexes
> 
>
> Key: LUCENE-7196
> URL: https://issues.apache.org/jira/browse/LUCENE-7196
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Priority: Minor
> Fix For: 6.1
>
>
> {{DataSplitter}} currently creates 3 indexes (train/test/cv) out of an 
> _original_ index for evaluation of {{Classifiers}}; however, "class coverage" 
> in such generated indexes is not guaranteed. That means that, e.g., the 
> _training index_ could contain documents belonging to only 50% of the class 
> set, and hence classifiers may not be very effective. In order to provide 
> more consistent evaluation, the generated index should contain 
> _split-ratio * |docs in c|_ documents for each class _c_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7196) DataSplitter should be providing class centric doc sets in all generated indexes

2016-04-08 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created LUCENE-7196:
---

 Summary: DataSplitter should be providing class centric doc sets 
in all generated indexes
 Key: LUCENE-7196
 URL: https://issues.apache.org/jira/browse/LUCENE-7196
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Tommaso Teofili
 Fix For: 6.1


{{DataSplitter}} currently creates 3 indexes (train/test/cv) out of an 
_original_ index for evaluation of {{Classifiers}}; however, "class coverage" 
in such generated indexes is not guaranteed. That means that, e.g., the 
_training index_ could contain documents belonging to only 50% of the class 
set, and hence classifiers may not be very effective. In order to provide more 
consistent evaluation, the generated index should contain 
_split-ratio * |docs in c|_ documents for each class _c_.
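
In other words, the split should be stratified by class. A minimal, generic 
sketch of that idea; the class and method names are hypothetical, not the 
DataSplitter API:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class StratifiedSplitSketch {

  /** Group documents by their class label. */
  static <D> Map<String, List<D>> byClass(List<D> docs, Function<D, String> classOf) {
    Map<String, List<D>> groups = new HashMap<>();
    for (D doc : docs) {
      groups.computeIfAbsent(classOf.apply(doc), k -> new ArrayList<>()).add(doc);
    }
    return groups;
  }

  /**
   * Take splitRatio of the documents of *each* class, so every class c
   * contributes splitRatio * |docs in c| documents to the generated index.
   */
  static <D> List<D> slice(List<D> docs, Function<D, String> classOf, double splitRatio) {
    List<D> out = new ArrayList<>();
    for (List<D> classDocs : byClass(docs, classOf).values()) {
      int take = (int) Math.round(splitRatio * classDocs.size());
      out.addAll(classDocs.subList(0, take));
    }
    return out;
  }
}
{code}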



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7195) GeoPolygon construction sometimes inexplicably chooses concave polygons when order of points is clockwise

2016-04-08 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-7195:

Description: 
The following input generates the following polygon, which is backwards from 
the correct sense:

{code}
MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
radiusMeters=44832.90297079173 gons=10
  polyLats=[51.20438285996, 50.89947531437482, 
50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
51.20438285996]
  polyLons=[0.8747711978759765, 0.6509219832137298, 0.35960265165247807, 
0.10290284834752167, -0.18841648321373008, -0.41226569787597667, 
-0.18960465285650027, 0.10285893781346236, 0.35964656218653757, 
0.6521101528565002, 0.8747711978759765]
 --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
GeoCompositeMembershipShape: {[GeoConcavePolygon:
{planetmodel=PlanetModel.WGS84, points=
[[lat=0.899021779599662, lon=0.011381469253029434],
 [lat=0.9005818372758149, lon=0.006277016653633617],
 [lat=0.9005818372758149, lon=0.0017952271299490152],
 [lat=0.899021779599662, lon=-0.003309225469446801],
 [lat=0.8936850723587506, lon=-0.007195393820967987],
 [lat=0.8883634317734164, lon=-0.0032884879971082164],
 [lat=0.8867906661272461, lon=0.0017959935133446592],
 [lat=0.8867906661272461, lon=0.006276250270237971],
 [lat=0.8883634317734164, lon=0.011360731780690846],
 [lat=0.8936850723587506, lon=0.015267637604550617]], internalEdges={}, 
holes=[]}]}
{code}


  was:
The following input generates the following polygon, which is backwards from 
the correct sense:

{code}
MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
radiusMeters=44832.90297079173 gons=10
  polyLats=[51.20438285996, 50.89947531437482, 
50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
51.20438285996]
  polyLons=[0.8747711978759765, 0.6509219832137298,
0.35960265165247807, 0.10290284834752167, -0.18841648321373008,
-0.41226569787597667, -0.18960465285650027, 0.10285893781346236, 
0.35964656218653757, 0.6521101528565002, 0.8747711978759765]
 --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
GeoCompositeMembershipShape: {[GeoConcavePolygon:
{planetmodel=PlanetModel.WGS84, points=
[[lat=0.899021779599662, lon=0.011381469253029434], [lat=0.9005818372758149, 
lon=0.006277016653633617], [lat=0.9005818372758149, lon=0.0017952271299490152], 
[lat=0.899021779599662, lon=-0.003309225469446801], [lat=0.8936850723587506, 
lon=-0.007195393820967987], [lat=0.8883634317734164, 
lon=-0.0032884879971082164], [lat=0.8867906661272461, 
lon=0.0017959935133446592], [lat=0.8867906661272461, lon=0.006276250270237971], 
[lat=0.8883634317734164, lon=0.011360731780690846], [lat=0.8936850723587506, 
lon=0.015267637604550617]], internalEdges={}, holes=[]}]}
{code}



> GeoPolygon construction sometimes inexplicably chooses concave polygons when 
> order of points is clockwise
> -
>
> Key: LUCENE-7195
> URL: https://issues.apache.org/jira/browse/LUCENE-7195
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> The following input generates the following polygon, which is backwards from 
> the correct sense:
> {code}
> MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
> radiusMeters=44832.90297079173 gons=10
>   polyLats=[51.20438285996, 50.89947531437482, 
> 50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
> 51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
> 51.20438285996]
>   polyLons=[0.8747711978759765, 0.6509219832137298, 0.35960265165247807, 
> 0.10290284834752167, -0.18841648321373008, -0.41226569787597667, 
> -0.18960465285650027, 0.10285893781346236, 0.35964656218653757, 
> 0.6521101528565002, 0.8747711978759765]
>  --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
> GeoCompositeMembershipShape: {[GeoConcavePolygon:
> {planetmodel=PlanetModel.WGS84, points=
> [[lat=0.899021779599662, lon=0.011381469253029434],
>  [lat=0.9005818372758149, lon=0.006277016653633617],
>  [lat=0.9005818372758149, lon=0.0017952271299490152],
>  [lat=0.899021779599662, lon=-0.003309225469446801],
>  [lat=0.8936850723587506, lon=-0.007195393820967987],
>  [lat=0.8883634317734164, lon=-0.0032884879971082164],
>  [lat=0.8867906661272461, lon=0.0017959935133446592],
>  [lat=0.8867906661272461, lon=0.006276250270237971],
>  [lat=0.8883634317734164, lon=0.011360731780690846],
>  [lat=0.8936850723587506, lon=0.015267637604550617]], internalEdges={}, 
> 

[jira] [Created] (LUCENE-7195) GeoPolygon construction sometimes inexplicably chooses concave polygons when order of points is clockwise

2016-04-08 Thread Karl Wright (JIRA)
Karl Wright created LUCENE-7195:
---

 Summary: GeoPolygon construction sometimes inexplicably chooses 
concave polygons when order of points is clockwise
 Key: LUCENE-7195
 URL: https://issues.apache.org/jira/browse/LUCENE-7195
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial3d
Affects Versions: master
Reporter: Karl Wright
Assignee: Karl Wright


The following input generates the following polygon, which is backwards from 
the correct sense:

{code}
MAKE POLY: centerLat=51.20438285996 centerLon=0.231252742
radiusMeters=44832.90297079173 gons=10
  polyLats=[51.20438285996, 50.89947531437482, 
50.8093624806861,50.8093624806861, 50.89947531437482, 51.20438285996, 
51.51015366140113, 51.59953838204167, 51.59953838204167, 51.51015366140113, 
51.20438285996]
  polyLons=[0.8747711978759765, 0.6509219832137298,
0.35960265165247807, 0.10290284834752167, -0.18841648321373008,
-0.41226569787597667, -0.18960465285650027, 0.10285893781346236, 
0.35964656218653757, 0.6521101528565002, 0.8747711978759765]
 --> QUERY: PointInGeo3DShapeQuery: field=point: Shape:
GeoCompositeMembershipShape: {[GeoConcavePolygon:
{planetmodel=PlanetModel.WGS84, points=
[[lat=0.899021779599662, lon=0.011381469253029434], [lat=0.9005818372758149, 
lon=0.006277016653633617], [lat=0.9005818372758149, lon=0.0017952271299490152], 
[lat=0.899021779599662, lon=-0.003309225469446801], [lat=0.8936850723587506, 
lon=-0.007195393820967987], [lat=0.8883634317734164, 
lon=-0.0032884879971082164], [lat=0.8867906661272461, 
lon=0.0017959935133446592], [lat=0.8867906661272461, lon=0.006276250270237971], 
[lat=0.8883634317734164, lon=0.011360731780690846], [lat=0.8936850723587506, 
lon=0.015267637604550617]], internalEdges={}, holes=[]}]}
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+112-patched) - Build # 16458 - Failure!

2016-04-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16458/
Java: 32bit/jdk-9-ea+112-patched -server -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString

Error Message:
expected:<...at=0.772208221547936[6], lon=0.135607475210...> but 
was:<...at=0.772208221547936[7], lon=0.135607475210...>

Stack Trace:
org.junit.ComparisonFailure: expected:<...at=0.772208221547936[6], 
lon=0.135607475210...> but was:<...at=0.772208221547936[7], 
lon=0.135607475210...>
at 
__randomizedtesting.SeedInfo.seed([AE4180E95038C87D:87D75F1F72CA50F1]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testShapeQueryToString(TestGeo3DPoint.java:802)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)




Build Log:
[...truncated 8992 lines...]
   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testShapeQueryToString -Dtests.seed=AE4180E95038C87D 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=smn 
-Dtests.timezone=America/Juneau -Dtests.asserts=true 

[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-04-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232236#comment-15232236
 ] 

Steve Rowe commented on LUCENE-6993:


Hi Mike, sorry I haven't had the bandwidth to engage on this issue and on JFlex 
recently.  Thanks for the work you've done here and in creating JFlex issues 
for some of the release-blocking issues there.

Since Lucene 6.0 will be released shortly, and there is usually a gap of at 
least a month between minor releases, I think it makes sense to delay the 
decision about waiting on a JFlex release for a couple of weeks.  I hope to be 
able to work on JFlex release blockers this weekend and next week.  If after a 
couple of weeks no JFlex release is imminent, I'd say move forward here.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.x
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-04-08 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232225#comment-15232225
 ] 

Mike Drob commented on LUCENE-6993:
---

[~steve_rowe] - I pinged the jflex list about getting the release going, but it 
looks like there are still a few outstanding issues to be resolved on that end. 
Do you think it is still worth waiting on the release there, or should we move 
forward here until jflex catches up and re-engage then?

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.x
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7187) Block join queries' weight impl should implement extractTerms(...)

2016-04-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232218#comment-15232218
 ] 

David Smiley commented on LUCENE-7187:
--

+1

> Block join queries' weight impl should implement extractTerms(...)
> --
>
> Key: LUCENE-7187
> URL: https://issues.apache.org/jira/browse/LUCENE-7187
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Martijn van Groningen
>Priority: Minor
> Attachments: LUCENE_7187.patch
>
>
> In the case where distributed document frequencies need to be computed for 
> block join queries, the child query is ignored.
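
For context, a minimal sketch of the kind of change being proposed - assuming a 
hypothetical {{childWeight}} field inside the block join query's Weight 
implementation; the actual patch may differ:

{code:java}
// Inside the block join query's Weight implementation: delegate term
// extraction to the wrapped child Weight, so distributed document
// frequency computation sees the child terms instead of ignoring them.
@Override
public void extractTerms(Set<Term> terms) {
  childWeight.extractTerms(terms);
}
{code}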



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8933) SolrDispatchFilter::consumeInput logs "Stream Closed" IOException

2016-04-08 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8933:

Attachment: SOLR-8933.patch

bq. The ContentStream tests are using remote streams (actually a file:// URL 
for testing). So on Windows, tests should fail with a resource leak there.

The cases in {{ContentStreamTest}} are still closing everything, so I don't 
understand what the relation is there. I haven't tested on Windows though, so 
maybe there is additional nuance I'm missing.

I found the part where we process the remote stream parameter to set the 
content stream... Maybe it makes sense to have ContentStream implement 
Closeable; then we would know to close the URL and File streams but not the 
Servlet, Byte, or String streams. They're all built from the Request in 
{{SolrRequestParsers::parse}}, at which point we can make that determination.

The next question, then, is who is responsible for closing the ContentStream? 
It's easy to make the SolrRequest responsible for the ContentStreams it 
creates, but I'm not sure whether there are other cases. And there are too 
many warnings in the project for me to reliably tell what I'm missing.

Here's a proposed patch heading down this path, likely incomplete.
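
As a rough illustration of that direction - hypothetical names and a 
much-reduced interface, not the actual patch:

{code:java}
import java.io.Closeable;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: ContentStream extends Closeable with a no-op default, and only
// the stream types that own an underlying resource override close().
interface ContentStream extends Closeable {
  InputStream getStream() throws IOException;

  // Servlet, Byte, and String streams have nothing of their own to close.
  @Override
  default void close() throws IOException {}
}

class FileStream implements ContentStream {
  private final File file;
  private InputStream stream; // opened lazily by getStream()

  FileStream(File file) { this.file = file; }

  @Override
  public InputStream getStream() throws IOException {
    stream = new FileInputStream(file);
    return stream;
  }

  @Override
  public void close() throws IOException {
    // File (and URL) streams own their resource, so they close it.
    if (stream != null) stream.close();
  }
}
{code}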

> SolrDispatchFilter::consumeInput logs "Stream Closed" IOException
> -
>
> Key: SOLR-8933
> URL: https://issues.apache.org/jira/browse/SOLR-8933
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch, 
> SOLR-8933.patch, SOLR-8933.patch
>
>
> After SOLR-8453 we started seeing some IOExceptions coming out of 
> SolrDispatchFilter with "Stream Closed" messages.
> It looks like we are indeed closing the request stream in several places when 
> we really need to let the web container handle its life cycle. I've got a 
> preliminary patch ready and am working on testing it to make sure there are 
> no regressions.
> A very strange piece of this is that I have been entirely unable to reproduce 
> it in a unit test, but have seen it on cluster deployments quite consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7785) Segments API returning java.lang.IllegalStateException

2016-04-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-7785.
--
Resolution: Duplicate

> Segments API returning java.lang.IllegalStateException
> --
>
> Key: SOLR-7785
> URL: https://issues.apache.org/jira/browse/SOLR-7785
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1
>Reporter: Upayavira
>Priority: Minor
>
> A call to this URL: 
> http://localhost:8983/solr/images/admin/segments?_=1436792214599&wt=json
> periodically returns the following exception:
> ERROR - 2015-07-13 12:57:39.962; [   images] 
> org.apache.solr.common.SolrException; null:java.lang.IllegalStateException: 
> file: MMapDirectory@/.some-path/images/data/index 
> lockFactory=org.apache.lucene.store.NativeFSLockFactory@76180de2 appears both 
> in delegate and in cache: cache=[_pst.fdx, _pst.fdt, pending_segments_n82, 
> _pst_Lucene50_0.dvd, _pst.si, _pst_Lucene50_0.doc, _pst_Lucene50_0.tim, 
> _pst.fnm, _pst_Lucene50_0.dvm, _pst_Lucene50_0.tip],delegate=[_9no.fdt, 
> _9no.fdx, _9no.fnm, _9no.si, _9no_Lucene50_0.doc, _9no_Lucene50_0.dvd, 
> _9no_Lucene50_0.dvm, _9no_Lucene50_0.tim, _9no_Lucene50_0.tip, _akr.cfe, 
> _akr.cfs, _akr.si, _bkw.fdt, _bkw.fdx, _bkw.fnm, _bkw.si, 
> _bkw_Lucene50_0.doc, _bkw_Lucene50_0.dvd, _bkw_Lucene50_0.dvm, 
> _bkw_Lucene50_0.tim, _bkw_Lucene50_0.tip, _che.cfe, _che.cfs, _che.si, 
> _dg5.fdt, _dg5.fdx, _dg5.fnm, _dg5.si, _dg5_Lucene50_0.doc, 
> _dg5_Lucene50_0.dvd, _dg5_Lucene50_0.dvm, _dg5_Lucene50_0.tim, 
> _dg5_Lucene50_0.tip, _ebj.cfe, _ebj.cfs, _ebj.si, _f8m.cfe, _f8m.cfs, 
> _f8m.si, _gap.cfe, _gap.cfs, _gap.si, _h5j.cfe, _h5j.cfs, _h5j.si, _iao.cfe, 
> _iao.cfs, _iao.si, _j62.cfe, _j62.cfs, _j62.si, _jlm.cfe, _jlm.cfs, _jlm.si, 
> _kha.fdt, _kha.fdx, _kha_Lucene50_0.doc, _kha_Lucene50_0.tim, 
> _kha_Lucene50_0.tip, _led.cfe, _led.cfs, _led.si, _md3.cfe, _md3.cfs, 
> _md3.si, _n8h.cfe, _n8h.cfs, _n8h.si, _nec.cfe, _nec.cfs, _nec.si, _njm.cfe, 
> _njm.cfs, _njm.si, _o77.cfe, _o77.cfs, _o77.si, _od2.cfe, _od2.cfs, _od2.si, 
> _ot5.cfe, _ot5.cfs, _ot5.si, _ox1.cfe, _ox1.cfs, _ox1.si, _oyz.cfe, _oyz.cfs, 
> _oyz.si, _p4u.cfe, _p4u.cfs, _p4u.si, _pa4.cfe, _pa4.cfs, _pa4.si, _peu.cfe, 
> _peu.cfs, _peu.si, _pj0.cfe, _pj0.cfs, _pj0.si, _pm1.cfe, _pm1.cfs, _pm1.si, 
> _poj.cfe, _poj.cfs, _poj.si, _pqh.cfe, _pqh.cfs, _pqh.si, _psf.cfe, _psf.cfs, 
> _psf.si, _psj.fdt, _psj.fdx, _psj.fnm, _psj.si, _psj_Lucene50_0.doc, 
> _psj_Lucene50_0.dvd, _psj_Lucene50_0.dvm, _psj_Lucene50_0.tim, 
> _psj_Lucene50_0.tip, _psk.fdt, _psk.fdx, _psk.fnm, _psk.si, 
> _psk_Lucene50_0.doc, _psk_Lucene50_0.dvd, _psk_Lucene50_0.dvm, 
> _psk_Lucene50_0.tim, _psk_Lucene50_0.tip, _psp.cfe, _psp.cfs, _psp.si, 
> _psq.fdt, _psq.fdx, _psq.fnm, _psq.si, _psq_Lucene50_0.doc, 
> _psq_Lucene50_0.dvd, _psq_Lucene50_0.dvm, _psq_Lucene50_0.tim, 
> _psq_Lucene50_0.tip, _psr.fdt, _psr.fdx, _psr.fnm, _psr.si, 
> _psr_Lucene50_0.doc, _psr_Lucene50_0.dvd, _psr_Lucene50_0.dvm, 
> _psr_Lucene50_0.tim, _psr_Lucene50_0.tip, _pss.fdt, _pss.fdx, _pss.fnm, 
> _pss.si, _pss_Lucene50_0.doc, _pss_Lucene50_0.dvd, _pss_Lucene50_0.dvm, 
> _pss_Lucene50_0.tim, _pss_Lucene50_0.tip, pending_segments_n82, segments_n81, 
> write.lock]
> at 
> org.apache.lucene.store.NRTCachingDirectory.listAll(NRTCachingDirectory.java:103)
> at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:641)
> at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:612)
> at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:442)
> at 
> org.apache.solr.handler.admin.SegmentsInfoRequestHandler.getSegmentsInfo(SegmentsInfoRequestHandler.java:60)
> at 
> org.apache.solr.handler.admin.SegmentsInfoRequestHandler.handleRequestBody(SegmentsInfoRequestHandler.java:52)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> 

[jira] [Resolved] (LUCENE-7192) Geo3d polygon creation should not get upset about co-linear points

2016-04-08 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-7192.
-
   Resolution: Fixed
Fix Version/s: 6.x
   master

> Geo3d polygon creation should not get upset about co-linear points
> --
>
> Key: LUCENE-7192
> URL: https://issues.apache.org/jira/browse/LUCENE-7192
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
>
> Currently, if you create a polygon with co-linear adjacent points, polygon 
> creation fails (you get an IllegalArgumentException).  We should make this 
> more robust.
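
For illustration, a small example of the previously-failing case, assuming the 
standard Geo3d factory API (coordinates are latitude/longitude in radians):

{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.lucene.spatial3d.geom.GeoPoint;
import org.apache.lucene.spatial3d.geom.GeoPolygon;
import org.apache.lucene.spatial3d.geom.GeoPolygonFactory;
import org.apache.lucene.spatial3d.geom.PlanetModel;

public class CoplanarPolygonExample {
  public static void main(String[] args) {
    // A triangle with one extra vertex on the equator edge, so three
    // adjacent points are co-linear (co-planar in Geo3d terms).
    List<GeoPoint> points = Arrays.asList(
        new GeoPoint(PlanetModel.SPHERE, 0.0, 0.0),
        new GeoPoint(PlanetModel.SPHERE, 0.0, 0.25), // on the edge to the next point
        new GeoPoint(PlanetModel.SPHERE, 0.0, 0.5),
        new GeoPoint(PlanetModel.SPHERE, 0.5, 0.25));
    // Before this change, construction threw IllegalArgumentException.
    GeoPolygon polygon = GeoPolygonFactory.makeGeoPolygon(PlanetModel.SPHERE, points);
    System.out.println(polygon);
  }
}
{code}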



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7148) Support boolean subset matching

2016-04-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232175#comment-15232175
 ] 

David Smiley commented on LUCENE-7148:
--

Yeah, I suggest closing.  I guess this is the kind of thing better done higher 
up the stack (Solr/ES/...), since nothing is really missing from Lucene; it's 
more a matter of which existing Lucene parts you stitch together for the 
various possible implementations.

> Support boolean subset matching
> ---
>
> Key: LUCENE-7148
> URL: https://issues.apache.org/jira/browse/LUCENE-7148
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/search
>Affects Versions: 5.x
>Reporter: Otmar Caduff
>  Labels: newbie
>
> In Lucene, I know of the possibility of Occur.SHOULD, Occur.MUST and the 
> “minimum should match” setting on the boolean query.
> Now, when querying, I want to
> - (1)  match the documents which either contain all the terms of the query 
> (Occur.MUST for all terms would do that) or,
> - (2)  if all terms for a given field of a document are a subset of the query 
> terms, that document should match as well.
> Example:
> Document d has field f with terms A, B, C
> Query with the following terms should match that document:
> A
> B
> A B
> A B C
> A B C D
> Query with the following terms should not match:
> D
> A B D
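
A plain-Java restatement of the semantics requested above (set logic only, not 
a Lucene query), just to pin down the two conditions:

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SubsetMatch {
  // A document matches if (1) it contains every query term, or
  // (2) all of its field's terms are a subset of the query terms.
  static boolean matches(Set<String> docTerms, Set<String> queryTerms) {
    return docTerms.containsAll(queryTerms) || queryTerms.containsAll(docTerms);
  }

  public static void main(String[] args) {
    Set<String> doc = new HashSet<>(Arrays.asList("A", "B", "C"));
    // true via (1): the document contains the single query term A
    System.out.println(matches(doc, new HashSet<>(Arrays.asList("A"))));
    // true via (2): doc terms {A,B,C} are a subset of query {A,B,C,D}
    System.out.println(matches(doc,
        new HashSet<>(Arrays.asList("A", "B", "C", "D"))));
    // false: neither condition holds for query {A,B,D}
    System.out.println(matches(doc,
        new HashSet<>(Arrays.asList("A", "B", "D"))));
  }
}
{code}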



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7192) Geo3d polygon creation should not get upset about co-linear points

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232160#comment-15232160
 ] 

ASF subversion and git services commented on LUCENE-7192:
-

Commit 3304f524d70ea007c94f3d3413e248fc8790faa9 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3304f52 ]

LUCENE-7192: Permit adjacent points in a polygon to be coplanar.


> Geo3d polygon creation should not get upset about co-linear points
> --
>
> Key: LUCENE-7192
> URL: https://issues.apache.org/jira/browse/LUCENE-7192
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Currently, if you create a polygon with co-linear adjacent points, polygon 
> creation fails (you get an IllegalArgumentException).  We should make this 
> more robust.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7192) Geo3d polygon creation should not get upset about co-linear points

2016-04-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232157#comment-15232157
 ] 

ASF subversion and git services commented on LUCENE-7192:
-

Commit 771680cfd0e474996db5de86a7a0808df84a1ebf in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=771680c ]

LUCENE-7192: Permit adjacent points in a polygon to be coplanar.


> Geo3d polygon creation should not get upset about co-linear points
> --
>
> Key: LUCENE-7192
> URL: https://issues.apache.org/jira/browse/LUCENE-7192
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Currently, if you create a polygon with co-linear adjacent points, polygon 
> creation fails (you get an IllegalArgumentException).  We should make this 
> more robust.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7186) Add numerically stable morton encoding to GeoEncodingUtils

2016-04-08 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-7186:
---
Attachment: LUCENE-7186.patch

Updated patch:

* adds {{mortonEncode}} and {{mortonEncodeCeil}}
* updates stability testing to be consistent with other encoding tests
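
For readers unfamiliar with the term, a minimal sketch of plain (non-stabilized) 
morton interleaving - background only, not the patch's code:

{code:java}
public class MortonSketch {
  // Morton (z-order) encoding interleaves the bits of two 32-bit values
  // into one 64-bit code, so points close in 2-D tend to sort close in 1-D.
  static long mortonEncode(int x, int y) {
    return spreadBits(x) | (spreadBits(y) << 1);
  }

  // Spread the 32 bits of v so there is a zero bit between each pair.
  static long spreadBits(int v) {
    long x = v & 0xFFFFFFFFL;
    x = (x | (x << 16)) & 0x0000FFFF0000FFFFL;
    x = (x | (x << 8))  & 0x00FF00FF00FF00FFL;
    x = (x | (x << 4))  & 0x0F0F0F0F0F0F0F0FL;
    x = (x | (x << 2))  & 0x3333333333333333L;
    x = (x | (x << 1))  & 0x5555555555555555L;
    return x;
  }
}
{code}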

> Add numerically stable morton encoding to GeoEncodingUtils
> --
>
> Key: LUCENE-7186
> URL: https://issues.apache.org/jira/browse/LUCENE-7186
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-7186.patch, LUCENE-7186.patch
>
>
> This is the follow on to LUCENE-7184.  It adds a numerically stable morton 
> encoding method to {{o.a.l.geo.GeoEncodingUtils}} that can be reused by 
> {{GeoPointField}} and a new {{GeoPointField}} based on {{Point}} encoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8933) SolrDispatchFilter::consumeInput logs "Stream Closed" IOException

2016-04-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232084#comment-15232084
 ] 

Mark Miller commented on SOLR-8933:
---

bq. You don't close ContentStream's InputStream anymore.

That sounds like perhaps ContentStream should be Closeable and should 
understand more about the lifecycle of the InputStream it's using, so that 
only the streams it owns are closed.

It really seems strange to count on this Jetty impl detail and close streams we 
don't own.

> SolrDispatchFilter::consumeInput logs "Stream Closed" IOException
> -
>
> Key: SOLR-8933
> URL: https://issues.apache.org/jira/browse/SOLR-8933
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch, 
> SOLR-8933.patch
>
>
> After SOLR-8453 we started seeing some IOExceptions coming out of 
> SolrDispatchFilter with "Stream Closed" messages.
> It looks like we are indeed closing the request stream in several places when 
> we really need to let the web container handle its life cycle. I've got a 
> preliminary patch ready and am working on testing it to make sure there are 
> no regressions.
> A very strange piece of this is that I have been entirely unable to reproduce 
> it in a unit test, but have seen it on cluster deployments quite consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8933) SolrDispatchFilter::consumeInput logs "Stream Closed" IOException

2016-04-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232076#comment-15232076
 ] 

Mark Miller commented on SOLR-8933:
---

bq. To me it is unnatural to not close a stream after you have consumed it.

I don't like closing it at all. I think in Java it is natural - you close the 
streams you create, not the open streams that are passed to you.
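
As a generic illustration of that convention (nothing Solr-specific):

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamOwnership {
  // Reads everything it is handed, but does NOT close the stream:
  // the caller that opened it owns it.
  static void consume(InputStream in) throws IOException {
    byte[] buf = new byte[8192];
    while (in.read(buf) != -1) {
      // drain
    }
  }

  public static void main(String[] args) throws IOException {
    // The owner opens and closes the stream via try-with-resources.
    try (InputStream in = new FileInputStream("request.body")) {
      consume(in);
    }
  }
}
{code}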

> SolrDispatchFilter::consumeInput logs "Stream Closed" IOException
> -
>
> Key: SOLR-8933
> URL: https://issues.apache.org/jira/browse/SOLR-8933
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Mike Drob
>Assignee: Mark Miller
> Attachments: SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch, 
> SOLR-8933.patch
>
>
> After SOLR-8453 we started seeing some IOExceptions coming out of 
> SolrDispatchFilter with "Stream Closed" messages.
> It looks like we are indeed closing the request stream in several places when 
> we really need to let the web container handle its life cycle. I've got a 
> preliminary patch ready and am working on testing it to make sure there are 
> no regressions.
> A very strange piece of this is that I have been entirely unable to reproduce 
> it in a unit test, but have seen it on cluster deployments quite consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


