See <https://builds.apache.org/job/Geode-spark-connector/79/changes>

Changes:

[gzhou] GEODE-1894: there's a race that AckReader thread is reading for ack

------------------------------------------
[...truncated 1884 lines...]
16/09/24 15:56:33 INFO HttpFileServer: HTTP File server directory is /tmp/spark-818af6ab-9026-44de-a5ea-103aa3a0b9ed/httpd-4eb1d544-0563-49c9-85b6-38809df75dc3
16/09/24 15:56:33 INFO HttpServer: Starting HTTP Server
16/09/24 15:56:33 INFO Utils: Successfully started service 'HTTP file server' on port 53100.
16/09/24 15:56:33 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/24 15:56:38 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/09/24 15:56:43 INFO Utils: Successfully started service 'SparkUI' on port 4041.
16/09/24 15:56:43 INFO SparkUI: Started SparkUI at http://localhost:4041
16/09/24 15:56:43 INFO Executor: Starting executor ID <driver> on host localhost
16/09/24 15:56:43 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:47343/user/HeartbeatReceiver
16/09/24 15:56:43 INFO NettyBlockTransferService: Server created on 36030
16/09/24 15:56:43 INFO BlockManagerMaster: Trying to register BlockManager
16/09/24 15:56:43 INFO BlockManagerMasterActor: Registering block manager localhost:36030 with 2.8 GB RAM, BlockManagerId(<driver>, localhost, 36030)
16/09/24 15:56:43 INFO BlockManagerMaster: Registered BlockManager
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
[info] RetrieveRegionIntegrationTest:
......

=== GeodeRunner: stop locator
...
Successfully stop Geode locator at port 26558.
=== GeodeRunner: starting locator on port 21281
=== GeodeRunner: waiting for locator on port 21281
....=== GeodeRunner: done waiting for locator on port 21281
=== GeodeRunner: starting server1 with clientPort 21702
=== GeodeRunner: starting server2 with clientPort 21557
=== GeodeRunner: starting server3 with clientPort 20123
=== GeodeRunner: starting server4 with clientPort 22028
...
........................................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[21281] as locator is currently online.
Process ID: 9610
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=22484 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

.....................
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[21557] as server2 is currently online.
Process ID: 9932
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[21702] as server1 is currently online.
Process ID: 9972
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3 on hemera.apache.org[20123] as server3 is currently online.
Process ID: 10028
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3/server3.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4 on hemera.apache.org[22028] as server4 is currently online.
Process ID: 9906
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4/server4.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(21702, 21557, 20123, 22028).length servers have been started
Deploying:geode-functions_2.10-0.5.0.jar
16/09/24 15:57:10 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest.beforeAll(RetrieveRegionIntegrationTest.scala:51)
[info]   ...
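The abort above happens because this suite's beforeAll constructs a new SparkContext while the one created by RDDJoinRegionIntegrationTest.beforeAll (visible in the trace) is still alive. A conventional remedy is to have every suite reuse one lazily created, process-wide context instead of constructing its own. The sketch below illustrates that pattern only; it is not the connector's actual harness, and StubSparkContext is a hypothetical stand-in for the real SparkContext so the example stays self-contained:

```python
import threading

_shared_ctx = None
_shared_lock = threading.Lock()


class StubSparkContext:
    """Hypothetical stand-in for SparkContext: allows at most one live
    instance, mirroring the SPARK-2243 one-context-per-JVM check."""
    _active = None

    def __init__(self, app_name):
        if StubSparkContext._active is not None:
            raise RuntimeError(
                "Only one SparkContext may be running in this JVM (see SPARK-2243)")
        StubSparkContext._active = self
        self.app_name = app_name

    def stop(self):
        # Release the slot so a new context could be constructed afterwards.
        if StubSparkContext._active is self:
            StubSparkContext._active = None


def get_or_create_shared_context(app_name="connector-it"):
    """What each suite's beforeAll/setUpBeforeClass would call instead of
    constructing its own context: create once, then hand back the same one."""
    global _shared_ctx
    with _shared_lock:
        if _shared_ctx is None:
            _shared_ctx = StubSparkContext(app_name)
        return _shared_ctx
```

Under this scheme the Java and Scala suites would all receive the same object, so no beforeAll ever trips the one-context assertion.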
[info] BasicIntegrationTest:
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
=== GeodeRunner: stop server 3.
=== GeodeRunner: stop server 4.
............



=== GeodeRunner: stop locator
....
Successfully stop Geode locator at port 21281.
=== GeodeRunner: starting locator on port 20562
=== GeodeRunner: waiting for locator on port 20562
....=== GeodeRunner: done waiting for locator on port 20562
=== GeodeRunner: starting server1 with clientPort 26128
=== GeodeRunner: starting server2 with clientPort 26984
...
....................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[20562] as locator is currently online.
Process ID: 11019
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=27578 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

......
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[26984] as server2 is currently online.
Process ID: 11222
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[20562] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[26128] as server1 is currently online.
Process ID: 11246
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[20562] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(26128, 26984).length servers have been started
Deploying:geode-functions_2.10-0.5.0.jar
16/09/24 15:57:36 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.BasicIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.BasicIntegrationTest.beforeAll(BasicIntegrationTest.scala:58)
[info]   ...
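As the exception text itself suggests, the one-context check can also be relaxed rather than fixed by letting the driver host multiple contexts. Shown as an illustrative spark-defaults.conf fragment (the property name comes straight from the error message above); note this masks the suite-isolation problem rather than solving it:

```
# spark-defaults.conf — relax the single-SparkContext check (SPARK-2243)
spark.driver.allowMultipleContexts  true
```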
[info] ScalaTest
[info] Run completed in 1 minute, 57 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 1, aborted 3
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] *** 3 SUITES ABORTED ***
[error] Error: Total 3, Failed 0, Errors 3, Passed 0
[error] Error during tests:
[error]        ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest
[error]        ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest
[error]        ittest.org.apache.geode.spark.connector.BasicIntegrationTest
[error] (geode-spark-connector/it:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 126 s, completed Sep 24, 2016 3:57:36 PM
Build step 'Execute shell' marked build as failure
Recording test results
Skipped archiving because build is not successful
