Hi all,

I am new to Spark and seem to have hit a common newbie obstacle.

I have a pretty simple setup and job, but I am unable to get past this error 
when executing it:

"TaskSchedulerImpl: Initial job has not accepted any resources; check your 
cluster UI to ensure that workers are registered and have sufficient memory”

I have so far gained a basic understanding of worker/executor/driver memory, 
but I have run out of ideas about what to try next; maybe someone has a clue.


My setup:

Three-node standalone cluster with C* and Spark on each node, and the DataStax 
C*/Spark connector JAR placed on each node.

On the master I have the slaves configured in conf/slaves and I am using 
sbin/start-all.sh to start the whole cluster.

On each node I have this in conf/spark-defaults.conf:

spark.master                    spark://devpeng-db-cassandra-1:7077
spark.eventLog.enabled          true
spark.serializer                org.apache.spark.serializer.KryoSerializer
spark.executor.extraClassPath   /opt/spark-cassandra-connector-assembly-1.2.0-alpha1.jar

and this in conf/spark-env.sh:

SPARK_WORKER_MEMORY=6g
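
For what it is worth, my understanding so far is that, besides SPARK_WORKER_MEMORY, 
each application requests its own executor memory and cores. A minimal sketch of how 
I think that would look on the driver side (the property names are the standalone-mode 
settings from the docs; the values are just placeholders I have not actually tried):

import org.apache.spark.SparkConf

// Sketch only: per-application resource requests in standalone mode.
// "spark.executor.memory" must fit within SPARK_WORKER_MEMORY on each worker;
// "spark.cores.max" caps the total cores the application may claim.
val conf = new SparkConf(true)
  .set("spark.cassandra.connection.host", "devpeng-db-cassandra-1.xxxxxxxx")
  .set("spark.executor.memory", "2g") // placeholder value
  .set("spark.cores.max", "6")        // placeholder value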



My app looks like this:

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object TestApp extends App {
  val conf = new SparkConf(true)
    .set("spark.cassandra.connection.host", "devpeng-db-cassandra-1.xxxxxxxx")
  val sc = new SparkContext("spark://devpeng-db-cassandra-1:7077", "testApp", conf)
  val rdd = sc.cassandraTable("test", "kv")
  println("Count: " + rdd.count)
  println(rdd.first)
}
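
As an aside, I believe the connector jar could also be shipped from the driver via 
setJars instead of pre-placing it on every node and pointing 
spark.executor.extraClassPath at it. A sketch of what I mean, assuming the same jar 
path as above (not something I have verified yet):

import org.apache.spark.SparkConf

// Sketch only: let the driver distribute the connector jar to the executors
// instead of relying on spark.executor.extraClassPath on each node.
val confWithJar = new SparkConf(true)
  .set("spark.cassandra.connection.host", "devpeng-db-cassandra-1.xxxxxxxx")
  .setJars(Seq("/opt/spark-cassandra-connector-assembly-1.2.0-alpha1.jar"))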

Any idea of what to check next would help me at this point, I think.

Jan

Log of the application start:

[info] Loading project definition from 
/Users/jan/projects/gkh/jump/workspace/gkh-spark-example/project
[info] Set current project to csconnect (in build 
file:/Users/jan/projects/gkh/jump/workspace/gkh-spark-example/)
[info] Compiling 1 Scala source to 
/Users/jan/projects/gkh/jump/workspace/gkh-spark-example/target/scala-2.10/classes...
[info] Running jump.TestApp 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/14 10:30:11 INFO SecurityManager: Changing view acls to: jan
15/02/14 10:30:11 INFO SecurityManager: Changing modify acls to: jan
15/02/14 10:30:11 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(jan); users with 
modify permissions: Set(jan)
15/02/14 10:30:11 INFO Slf4jLogger: Slf4jLogger started
15/02/14 10:30:11 INFO Remoting: Starting remoting
15/02/14 10:30:12 INFO Remoting: Remoting started; listening on addresses 
:[akka.tcp://sparkDriver@xxxxxx:58197]
15/02/14 10:30:12 INFO Utils: Successfully started service 'sparkDriver' on 
port 58197.
15/02/14 10:30:12 INFO SparkEnv: Registering MapOutputTracker
15/02/14 10:30:12 INFO SparkEnv: Registering BlockManagerMaster
15/02/14 10:30:12 INFO DiskBlockManager: Created local directory at 
/var/folders/vr/w3whx92d0356g5nj1p6s59gr0000gn/T/spark-local-20150214103012-5b53
15/02/14 10:30:12 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
2015-02-14 10:30:12.304 java[24999:3b07] Unable to load realm info from 
SCDynamicStore
15/02/14 10:30:12 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
15/02/14 10:30:12 INFO HttpFileServer: HTTP File server directory is 
/var/folders/vr/w3whx92d0356g5nj1p6s59gr0000gn/T/spark-48459a22-c1ff-42d5-8b8e-cc89fe84933d
15/02/14 10:30:12 INFO HttpServer: Starting HTTP Server
15/02/14 10:30:12 INFO Utils: Successfully started service 'HTTP file server' 
on port 58198.
15/02/14 10:30:12 INFO Utils: Successfully started service 'SparkUI' on port 
4040.
15/02/14 10:30:12 INFO SparkUI: Started SparkUI at http://xxxxxx:4040
15/02/14 10:30:12 INFO AppClient$ClientActor: Connecting to master 
spark://devpeng-db-cassandra-1:7077...
15/02/14 10:30:13 INFO SparkDeploySchedulerBackend: Connected to Spark cluster 
with app ID app-20150214103013-0001
15/02/14 10:30:13 INFO AppClient$ClientActor: Executor added: 
app-20150214103013-0001/0 on 
worker-20150214102534-devpeng-db-cassandra-2.devpengxxxx 
(devpeng-db-cassandra-2.devpeng.xxxxx:57563) with 8 cores
15/02/14 10:30:13 INFO SparkDeploySchedulerBackend: Granted executor ID 
app-20150214103013-0001/0 on hostPort devpeng-db-cassandra-2.devpeng.xxxx:57563 
with 8 cores, 512.0 MB RAM
15/02/14 10:30:13 INFO AppClient$ClientActor: Executor added: 
app-20150214103013-0001/1 on 
worker-20150214102534-devpeng-db-cassandra-3.devpeng.xxxx-38773 
(devpeng-db-cassandra-3.devpeng.xxxxxx:38773) with 8 cores
15/02/14 10:30:13 INFO SparkDeploySchedulerBackend: Granted executor ID 
app-20150214103013-0001/1 on hostPort 
devpeng-db-cassandra-3.devpeng.xxxxxe:38773 with 8 cores, 512.0 MB RAM
15/02/14 10:30:13 INFO AppClient$ClientActor: Executor updated: 
app-20150214103013-0001/0 is now LOADING
15/02/14 10:30:13 INFO AppClient$ClientActor: Executor updated: 
app-20150214103013-0001/1 is now LOADING
15/02/14 10:30:13 INFO AppClient$ClientActor: Executor updated: 
app-20150214103013-0001/0 is now RUNNING
15/02/14 10:30:13 INFO AppClient$ClientActor: Executor updated: 
app-20150214103013-0001/1 is now RUNNING
15/02/14 10:30:13 INFO NettyBlockTransferService: Server created on 58200
15/02/14 10:30:13 INFO BlockManagerMaster: Trying to register BlockManager
15/02/14 10:30:13 INFO BlockManagerMasterActor: Registering block manager 
192.168.2.103:58200 with 530.3 MB RAM, BlockManagerId(<driver>, xxxx, 58200)
15/02/14 10:30:13 INFO BlockManagerMaster: Registered BlockManager
15/02/14 10:30:13 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/02/14 10:30:14 INFO Cluster: New Cassandra host 
devpeng-db-cassandra-1.devpeng.gkh-setu.de/xxxx:9042 added
15/02/14 10:30:14 INFO Cluster: New Cassandra host /xxx:9042 added
15/02/14 10:30:14 INFO Cluster: New Cassandra host xxxx:9042 added
15/02/14 10:30:14 INFO CassandraConnector: Connected to Cassandra cluster: 
GKHDevPeng
15/02/14 10:30:14 INFO LocalNodeFirstLoadBalancingPolicy: Adding host xxx (DC1)
15/02/14 10:30:14 INFO LocalNodeFirstLoadBalancingPolicy: Adding host xxx (DC1)
15/02/14 10:30:14 INFO LocalNodeFirstLoadBalancingPolicy: Adding host xxxx (DC1)
15/02/14 10:30:14 INFO LocalNodeFirstLoadBalancingPolicy: Adding host xxxxx 
(DC1)
15/02/14 10:30:14 INFO LocalNodeFirstLoadBalancingPolicy: Adding host xxxxx 
(DC1)
15/02/14 10:30:14 INFO LocalNodeFirstLoadBalancingPolicy: Adding host xxxxx 
(DC1)
15/02/14 10:30:15 INFO CassandraConnector: Disconnected from Cassandra cluster: 
GKHDevPeng
15/02/14 10:30:16 INFO SparkContext: Starting job: count at TestApp.scala:23
15/02/14 10:30:16 INFO DAGScheduler: Got job 0 (count at TestApp.scala:23) with 
3 output partitions (allowLocal=false)
15/02/14 10:30:16 INFO DAGScheduler: Final stage: Stage 0(count at 
TestApp.scala:23)
15/02/14 10:30:16 INFO DAGScheduler: Parents of final stage: List()
15/02/14 10:30:16 INFO DAGScheduler: Missing parents: List()
15/02/14 10:30:16 INFO DAGScheduler: Submitting Stage 0 (CassandraRDD[0] at RDD 
at CassandraRDD.scala:49), which has no missing parents
15/02/14 10:30:16 INFO MemoryStore: ensureFreeSpace(4472) called with curMem=0, 
maxMem=556038881
15/02/14 10:30:16 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 4.4 KB, free 530.3 MB)
15/02/14 10:30:16 INFO MemoryStore: ensureFreeSpace(3082) called with 
curMem=4472, maxMem=556038881
15/02/14 10:30:16 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in 
memory (estimated size 3.0 KB, free 530.3 MB)
15/02/14 10:30:16 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
xxxxx  (size: 3.0 KB, free: 530.3 MB)
15/02/14 10:30:16 INFO BlockManagerMaster: Updated info of block 
broadcast_0_piece0
15/02/14 10:30:16 INFO SparkContext: Created broadcast 0 from broadcast at 
DAGScheduler.scala:838
15/02/14 10:30:16 INFO DAGScheduler: Submitting 3 missing tasks from Stage 0 
(CassandraRDD[0] at RDD at CassandraRDD.scala:49)
15/02/14 10:30:16 INFO TaskSchedulerImpl: Adding task set 0.0 with 3 tasks
15/02/14 10:30:31 WARN TaskSchedulerImpl: Initial job has not accepted any 
resources; check your cluster UI to ensure that workers are registered and have 
sufficient memory