Github user elbamos commented on the pull request:

    https://github.com/apache/incubator-zeppelin/pull/208#issuecomment-173444542
  
    Sounds right, except the existing PR should fail on Spark 1.6 anyway. The 
version currently on my repo marked 0.5.6 is up to date with the 0.5.6 release.
    
    > On Jan 20, 2016, at 9:13 PM, sourav-mazumder <notificati...@github.com> 
wrote:
    > 
    > Hi Moon,
    > 
    > From the exception it looks like it is trying to test Spark in standalone
    > mode and is not able to register with the master process.
    > 
    > Ideally the standalone cluster (with its master and slave) should be
    > created beforehand. I tested it that way and it always worked.
    > 
    > I'm assuming this output is from CI/automated testing. Most probably a
    > script that is supposed to start the Spark standalone cluster beforehand
    > is missing.
    > 
    > Hope this helps.
    > 
    > Regards,
    > Sourav
    > 
    > On Wed, Jan 20, 2016 at 5:33 PM, Lee moon soo <notificati...@github.com>
    > wrote:
    > 
    > > I've made some progress on testing with CI,
    > > at branch
    > > https://github.com/Leemoonsoo/incubator-zeppelin/tree/rinterpreter_jan
    > > which passes one testing profile (spark-1.6) on CI, but fails on the
    > > other profiles (spark-1.5, spark-1.4 ...)
    > > https://travis-ci.org/Leemoonsoo/incubator-zeppelin/builds/103512174
    > >
    > > Is anyone familiar with the following exception?
    > >
    > > 16/01/20 02:52:10 INFO SparkContext: Running Spark version 1.5.2
    > > 16/01/20 02:52:10 INFO SecurityManager: Changing view acls to: travis
    > > 16/01/20 02:52:10 INFO SecurityManager: Changing modify acls to: travis
    > > 16/01/20 02:52:10 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(travis); users 
with modify permissions: Set(travis)
    > > 16/01/20 02:52:11 INFO Slf4jLogger: Slf4jLogger started
    > > 16/01/20 02:52:11 INFO Remoting: Starting remoting
    > > 16/01/20 02:52:11 INFO Remoting: Remoting started; listening on 
addresses :[akka.tcp://sparkDriver@10.240.0.163:47557]
    > > 16/01/20 02:52:11 INFO Utils: Successfully started service 
'sparkDriver' on port 47557.
    > > 16/01/20 02:52:11 INFO SparkEnv: Registering MapOutputTracker
    > > 16/01/20 02:52:11 INFO SparkEnv: Registering BlockManagerMaster
    > > 16/01/20 02:52:11 INFO DiskBlockManager: Created local directory at 
/tmp/blockmgr-d5f6db46-c354-457f-ac04-514cb750f5f7
    > > 16/01/20 02:52:11 INFO MemoryStore: MemoryStore started with capacity 
530.3 MB
    > > 16/01/20 02:52:11 INFO HttpFileServer: HTTP File server directory is 
/tmp/spark-70746e88-64b6-4e5f-a89a-0acc469f220c/httpd-ee337ade-6d23-4fcb-ac42-1e649e11c626
    > > 16/01/20 02:52:11 INFO HttpServer: Starting HTTP Server
    > > 16/01/20 02:52:11 INFO Utils: Successfully started service 'HTTP file 
server' on port 42899.
    > > 16/01/20 02:52:11 INFO SparkEnv: Registering OutputCommitCoordinator
    > > 16/01/20 02:52:11 INFO Utils: Successfully started service 'SparkUI' on 
port 4040.
    > > 16/01/20 02:52:11 INFO SparkUI: Started SparkUI at 
http://10.240.0.163:4040
    > > 16/01/20 02:52:11 INFO SparkContext: Added JAR 
file:/home/travis/build/Leemoonsoo/incubator-zeppelin/interpreter/spark/zeppelin-spark-0.6.0-incubating-SNAPSHOT.jar
 at http://10.240.0.163:42899/jars/zeppelin-spark-0.6.0-incubating-SNAPSHOT.jar 
with timestamp 1453258331807
    > > 16/01/20 02:52:11 INFO FairSchedulableBuilder: Created default pool 
default, schedulingMode: FIFO, minShare: 0, weight: 1
    > > 16/01/20 02:52:11 WARN MetricsSystem: Using default name DAGScheduler 
for source because spark.app.id is not set.
    > > 16/01/20 02:52:11 INFO AppClient$ClientEndpoint: Connecting to master 
spark://testing-gce-b9b06e11-646b-46d3-9860-831cc900b4f7.c.eco-emissary-99515.internal:7071...
    > > 16/01/20 02:52:31 ERROR SparkUncaughtExceptionHandler: Uncaught 
exception in thread Thread[appclient-registration-retry-thread,5,main]
    > > java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@12241de8 rejected from 
java.util.concurrent.ThreadPoolExecutor@12773a6[Running, pool size = 1, active 
threads = 0, queued tasks = 0, completed tasks = 1]
    > > at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    > > at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    > > at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    > > at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
    > > at 
org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:96)
    > > at 
org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:95)
    > > at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    > > at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    > > at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    > > at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    > > at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    > > at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    > > at 
org.apache.spark.deploy.client.AppClient$ClientEndpoint.tryRegisterAllMasters(AppClient.scala:95)
    > > at 
org.apache.spark.deploy.client.AppClient$ClientEndpoint.org$apache$spark$deploy$client$AppClient$ClientEndpoint$$registerWithMaster(AppClient.scala:121)
    > > at 
org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:132)
    > > at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
    > > at 
org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:124)
    > > at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    > > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    > > at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    > > at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    > > at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    > > at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    > > at java.lang.Thread.run(Thread.java:745)
    > >
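
    For reference, below is a minimal sketch of the kind of CI setup Sourav
    describes: a small standalone smoke test that assumes a master and at least
    one worker were started beforehand (e.g. with sbin/start-master.sh and
    sbin/start-slave.sh). The object name, the SPARK_MASTER environment
    variable, and the spark://localhost:7077 URL are illustrative assumptions,
    not what the Zeppelin build actually uses; if nothing is listening on that
    master URL, the registration retries eventually fail with the
    RejectedExecutionException shown in the log above.

        import org.apache.spark.{SparkConf, SparkContext}

        object StandaloneSmokeTest {
          def main(args: Array[String]): Unit = {
            // Assumes a standalone master and worker are already running;
            // the master URL below is an illustrative default, not the CI's.
            val conf = new SparkConf()
              .setAppName("zeppelin-ci-smoke-test")
              .setMaster(sys.env.getOrElse("SPARK_MASTER", "spark://localhost:7077"))

            val sc = new SparkContext(conf)
            try {
              // Trivial job to confirm executors registered and tasks can run.
              println("sum = " + sc.parallelize(1 to 100).reduce(_ + _))
            } finally {
              sc.stop()
            }
          }
        }

    Alternatively, if the CI environment cannot start a standalone cluster,
    pointing SPARK_MASTER at local[*] sidesteps master registration entirely.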


