Hey Stuti,

Did you start your standalone Master and Workers? You can do this with
sbin/start-all.sh (see
http://spark.apache.org/docs/latest/spark-standalone.html). Once the
cluster is up, I would also recommend launching your application from the
command line through bin/spark-submit rather than from the IDE. I am not
sure we officially support launching Spark applications from an IDE,
because spark-submit takes care of details such as setting up the class
path and JVM memory that an IDE-launched driver will not.
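
For example, something like the following (the class name, jar path and
memory setting here are only placeholders for your own build, so adjust
them to your setup):

  # On the cluster machine: start the standalone Master and Workers
  ./sbin/start-all.sh

  # From your desktop: package the application and submit it
  ./bin/spark-submit \
    --class SimpleApp \
    --master spark://<IP>:7077 \
    --executor-memory 512M \
    /path/to/your-application.jar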

-Andrew

2014-12-03 22:05 GMT-08:00 Stuti Awasthi <stutiawas...@hcl.com>:

>  Hi All,
>
> I have a standalone Spark(1.1) cluster on one machine and I have installed
> scala Eclipse IDE (scala 2.10) on my desktop. I am trying to execute a
> spark code to execute over my standalone cluster but getting errors.
>
> Please guide me to resolve this.
>
>
>
> Code:
>
>   import org.apache.spark.{SparkConf, SparkContext}
>
>   // Should be some file on your system
>   val logFile = "<File Path present on desktop>"
>
>   val conf = new SparkConf()
>     .setAppName("Simple Application")
>     .setMaster("spark://<IP>:<PORT>")
>     .setSparkHome("/home/stuti/Spark/spark-1.1.0-bin-hadoop1")
>   val sc = new SparkContext(conf)
>
>   println(sc.master)   // prints the correct master
>
>   val logData = sc.textFile(logFile, 2).cache()
>   println(logData.count)   // throws error
>
>
> Error:
>
> Using Spark's default log4j profile:
> org/apache/spark/log4j-defaults.properties
>
> 14/12/04 11:05:38 INFO SecurityManager: Changing view acls to:
> stutiawasthi,
>
> 14/12/04 11:05:38 INFO SecurityManager: Changing modify acls to:
> stutiawasthi,
>
> 14/12/04 11:05:38 INFO SecurityManager: SecurityManager: authentication
> disabled; ui acls disabled; users with view permissions: Set(stutiawasthi,
> ); users with modify permissions: Set(stutiawasthi, )
>
> 14/12/04 11:05:39 INFO Slf4jLogger: Slf4jLogger started
>
> 14/12/04 11:05:39 INFO Remoting: Starting remoting
>
> 14/12/04 11:05:40 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://sparkDriver@<HOSTNAME_DESKTOP>:62308]
>
> 14/12/04 11:05:40 INFO Remoting: Remoting now listens on addresses:
> [akka.tcp://sparkDriver@<HOSTNAME_DESKTOP>:62308]
>
> 14/12/04 11:05:40 INFO Utils: Successfully started service 'sparkDriver'
> on port 62308.
>
> 14/12/04 11:05:40 INFO SparkEnv: Registering MapOutputTracker
>
> 14/12/04 11:05:40 INFO SparkEnv: Registering BlockManagerMaster
>
> 14/12/04 11:05:40 INFO DiskBlockManager: Created local directory at
> C:\Users\STUTIA~1\AppData\Local\Temp\spark-local-20141204110540-ad60
>
> 14/12/04 11:05:40 INFO Utils: Successfully started service 'Connection
> manager for block manager' on port 62311.
>
> 14/12/04 11:05:40 INFO ConnectionManager: Bound socket to port 62311 with
> id = ConnectionManagerId(<HOSTNAME_DESKTOP>,62311)
>
> 14/12/04 11:05:41 INFO MemoryStore: MemoryStore started with capacity
> 133.6 MB
>
> 14/12/04 11:05:41 INFO BlockManagerMaster: Trying to register BlockManager
>
> 14/12/04 11:05:41 INFO BlockManagerMasterActor: Registering block manager
> <HOSTNAME_DESKTOP>:62311 with 133.6 MB RAM
>
> 14/12/04 11:05:41 INFO BlockManagerMaster: Registered BlockManager
>
> 14/12/04 11:05:41 INFO HttpFileServer: HTTP File server directory is
> C:\Users\STUTIA~1\AppData\Local\Temp\spark-b65e69f4-69b9-4bb2-b41f-67165909e4c7
>
> 14/12/04 11:05:41 INFO HttpServer: Starting HTTP Server
>
> 14/12/04 11:05:41 INFO Utils: Successfully started service 'HTTP file
> server' on port 62312.
>
> 14/12/04 11:05:42 INFO Utils: Successfully started service 'SparkUI' on
> port 4040.
>
> 14/12/04 11:05:42 INFO SparkUI: Started SparkUI at
> http://<HOSTNAME_DESKTOP>:4040
>
> 14/12/04 11:05:43 INFO AppClient$ClientActor: Connecting to master
> spark://10.112.67.80:7077...
>
> 14/12/04 11:05:43 INFO SparkDeploySchedulerBackend: SchedulerBackend is
> ready for scheduling beginning after reached minRegisteredResourcesRatio:
> 0.0
>
> spark://10.112.67.80:7077
>
> 14/12/04 11:05:44 WARN SizeEstimator: Failed to check whether
> UseCompressedOops is set; assuming yes
>
> 14/12/04 11:05:45 INFO MemoryStore: ensureFreeSpace(31447) called with
> curMem=0, maxMem=140142182
>
> 14/12/04 11:05:45 INFO MemoryStore: Block broadcast_0 stored as values in
> memory (estimated size 30.7 KB, free 133.6 MB)
>
> 14/12/04 11:05:45 INFO MemoryStore: ensureFreeSpace(3631) called with
> curMem=31447, maxMem=140142182
>
> 14/12/04 11:05:45 INFO MemoryStore: Block broadcast_0_piece0 stored as
> bytes in memory (estimated size 3.5 KB, free 133.6 MB)
>
> 14/12/04 11:05:45 INFO BlockManagerInfo: Added broadcast_0_piece0 in
> memory on <HOSTNAME_DESKTOP>:62311 (size: 3.5 KB, free: 133.6 MB)
>
> 14/12/04 11:05:45 INFO BlockManagerMaster: Updated info of block
> broadcast_0_piece0
>
> 14/12/04 11:05:45 WARN NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
>
> 14/12/04 11:05:45 WARN LoadSnappy: Snappy native library not loaded
>
> 14/12/04 11:05:46 INFO FileInputFormat: Total input paths to process : 1
>
> 14/12/04 11:05:46 INFO SparkContext: Starting job: count at Test.scala:15
>
> 14/12/04 11:05:46 INFO DAGScheduler: Got job 0 (count at Test.scala:15)
> with 2 output partitions (allowLocal=false)
>
> 14/12/04 11:05:46 INFO DAGScheduler: Final stage: Stage 0(count at
> Test.scala:15)
>
> 14/12/04 11:05:46 INFO DAGScheduler: Parents of final stage: List()
>
> 14/12/04 11:05:46 INFO DAGScheduler: Missing parents: List()
>
> 14/12/04 11:05:46 INFO DAGScheduler: Submitting Stage 0
> (D:/Workspace/Spark/Test/README MappedRDD[1] at textFile at Test.scala:14),
> which has no missing parents
>
> 14/12/04 11:05:46 INFO MemoryStore: ensureFreeSpace(2408) called with
> curMem=35078, maxMem=140142182
>
> 14/12/04 11:05:46 INFO MemoryStore: Block broadcast_1 stored as values in
> memory (estimated size 2.4 KB, free 133.6 MB)
>
> 14/12/04 11:05:46 INFO MemoryStore: ensureFreeSpace(1541) called with
> curMem=37486, maxMem=140142182
>
> 14/12/04 11:05:46 INFO MemoryStore: Block broadcast_1_piece0 stored as
> bytes in memory (estimated size 1541.0 B, free 133.6 MB)
>
> 14/12/04 11:05:46 INFO BlockManagerInfo: Added broadcast_1_piece0 in
> memory on <HOSTNAME_DESKTOP>:62311 (size: 1541.0 B, free: 133.6 MB)
>
> 14/12/04 11:05:46 INFO BlockManagerMaster: Updated info of block
> broadcast_1_piece0
>
> 14/12/04 11:05:46 INFO DAGScheduler: Submitting 2 missing tasks from Stage
> 0 (D:/Workspace/Spark/Test/README MappedRDD[1] at textFile at Test.scala:14)
>
> 14/12/04 11:05:46 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
>
> 14/12/04 11:06:01 WARN TaskSchedulerImpl: Initial job has not accepted any
> resources; check your cluster UI to ensure that workers are registered and
> have sufficient memory
>
> 14/12/04 11:06:03 INFO AppClient$ClientActor: Connecting to master
> spark://10.112.67.80:7077...
>
> 14/12/04 11:06:16 WARN TaskSchedulerImpl: Initial job has not accepted any
> resources; check your cluster UI to ensure that workers are registered and
> have sufficient memory
>
> 14/12/04 11:06:23 INFO AppClient$ClientActor: Connecting to master
> spark://10.112.67.80:7077...
>
> 14/12/04 11:06:31 WARN TaskSchedulerImpl: Initial job has not accepted any
> resources; check your cluster UI to ensure that workers are registered and
> have sufficient memory
>
>
>
> Thanks
>
> Stuti Awasthi
>
