Hi Ashish,

If you are using Spark Standalone rather than Spark on YARN, you don't need 
the Spark history server. More on the web interfaces is provided in the link 
below. Since you are using standalone mode, you should be able to access the 
web UIs for the master and workers at the ports Ayan provided in an earlier 
email.

Master: http://<masterip>:8080 
Worker: http://<workerIp>:8081
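
To check whether the workers have registered with the master without opening a 
browser, the standalone master's web UI also serves its status as JSON at 
http://<masterip>:8080/json. A minimal sketch of parsing that response (the 
endpoint path and the "workers"/"state" field names are assumptions based on 
the standalone master's JSON view; verify against your version):

```python
import json
from urllib.request import urlopen

def worker_summary(status):
    """Count total and ALIVE workers in the master's JSON status dict."""
    workers = status.get("workers", [])
    alive = sum(1 for w in workers if w.get("state") == "ALIVE")
    return len(workers), alive

# Against a live cluster (placeholder address -- substitute your master's IP):
# status = json.load(urlopen("http://<masterip>:8080/json"))
# total, alive = worker_summary(status)
# print("%d/%d workers alive" % (alive, total))
```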

https://spark.apache.org/docs/latest/monitoring.html
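
Note that the web UI ports above are only for monitoring; to attach a shell 
from your laptop you point it at the master's RPC endpoint, which defaults to 
port 7077 in standalone mode. A sketch with placeholder addresses (substitute 
your master's IP; the master web UI displays the exact spark:// URL at the top):

```shell
# <masterip> is a placeholder -- the master UI shows the exact spark:// URL.
pyspark --master spark://<masterip>:7077
# or the Scala shell:
spark-shell --master spark://<masterip>:7077
# or SparkR (shipped with Spark 1.4):
sparkR --master spark://<masterip>:7077
```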

If you are using Spark on YARN, the Spark history server runs on port 18080 by 
default on the host where it is deployed.
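
For reference, if you do move to Spark on YARN later, the history server only 
shows applications whose event logs it can read; a typical spark-defaults.conf 
sketch (the HDFS path and hostname are placeholders -- use whatever log 
directory your cluster is configured with):

```
spark.eventLog.enabled            true
spark.eventLog.dir                hdfs:///user/spark/applicationHistory
spark.yarn.historyServer.address  <historyserverhost>:18080
```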

Guru Medasani
gdm...@gmail.com



> On Jul 8, 2015, at 12:01 AM, Ashish Dutt <ashish.du...@gmail.com> wrote:
> 
> Hello Guru,
> Thank you for your quick response. 
> This is what I get when I try executing spark-shell <master ip>:<port number>:
> 
> C:\spark-1.4.0\bin>spark-shell <master IP>:18088
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> Using Spark's default log4j profile: 
> org/apache/spark/log4j-defaults.properties
> 15/07/08 11:28:35 INFO SecurityManager: Changing view acls to: Ashish Dutt
> 15/07/08 11:28:35 INFO SecurityManager: Changing modify acls to: Ashish Dutt
> 15/07/08 11:28:35 INFO SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set
> (Ashish Dutt); users with modify permissions: Set(Ashish Dutt)
> 15/07/08 11:28:35 INFO HttpServer: Starting HTTP Server
> 15/07/08 11:28:35 INFO Utils: Successfully started service 'HTTP class 
> server' on port 52767.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
>       /_/
> 
> Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
> Type in expressions to have them evaluated.
> Type :help for more information.
> 15/07/08 11:28:39 INFO SparkContext: Running Spark version 1.4.0
> 15/07/08 11:28:39 INFO SecurityManager: Changing view acls to: Ashish Dutt
> 15/07/08 11:28:39 INFO SecurityManager: Changing modify acls to: Ashish Dutt
> 15/07/08 11:28:39 INFO SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set
> (Ashish Dutt); users with modify permissions: Set(Ashish Dutt)
> 15/07/08 11:28:40 INFO Slf4jLogger: Slf4jLogger started
> 15/07/08 11:28:40 INFO Remoting: Starting remoting
> 15/07/08 11:28:40 INFO Remoting: Remoting started; listening on addresses 
> :[akka.tcp://sparkDriver@10.228.208.74:52780]
> 15/07/08 11:28:40 INFO Utils: Successfully started service 'sparkDriver' on 
> port 52780.
> 15/07/08 11:28:40 INFO SparkEnv: Registering MapOutputTracker
> 15/07/08 11:28:40 INFO SparkEnv: Registering BlockManagerMaster
> 15/07/08 11:28:40 INFO DiskBlockManager: Created local directory at 
> C:\Users\Ashish Dutt\AppData\Local\Temp\spark-80c4f1fe-37de-4aef-9063-cae29c488382\blockmgr-a967422b-05e8-4fc1-b60b-facc7dbd4414
> 15/07/08 11:28:40 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
> 15/07/08 11:28:40 INFO HttpFileServer: HTTP File server directory is 
> C:\Users\Ashish Dutt\AppData\Local\Temp\spark-80c4f1fe-37de-4aef-9063-cae29c488382\httpd-928f4485-ea08-4749-a478-59708db0fefa
> 15/07/08 11:28:40 INFO HttpServer: Starting HTTP Server
> 15/07/08 11:28:40 INFO Utils: Successfully started service 'HTTP file server' 
> on port 52781.
> 15/07/08 11:28:40 INFO SparkEnv: Registering OutputCommitCoordinator
> 15/07/08 11:28:40 INFO Utils: Successfully started service 'SparkUI' on port 
> 4040.
> 15/07/08 11:28:40 INFO SparkUI: Started SparkUI at http://10.228.208.74:4040
> 15/07/08 11:28:40 INFO Executor: Starting executor ID driver on host localhost
> 15/07/08 11:28:41 INFO Executor: Using REPL class URI: http://10.228.208.74:52767
> 15/07/08 11:28:41 INFO Utils: Successfully started service 
> 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52800.
> 
> 15/07/08 11:28:41 INFO NettyBlockTransferService: Server created on 52800
> 15/07/08 11:28:41 INFO BlockManagerMaster: Trying to register BlockManager
> 15/07/08 11:28:41 INFO BlockManagerMasterEndpoint: Registering block manager 
> localhost:52800 with 265.4 MB RAM, BlockManagerId(driver, localhost, 52800)
> 15/07/08 11:28:41 INFO BlockManagerMaster: Registered BlockManager
> 
> 15/07/08 11:28:41 INFO SparkILoop: Created spark context..
> Spark context available as sc.
> 15/07/08 11:28:41 INFO HiveContext: Initializing execution hive, version 
> 0.13.1
> 15/07/08 11:28:42 INFO HiveMetaStore: 0: Opening raw store with implemenation 
> class:org.apache.hadoop.hive.metastore.ObjectStore
> 15/07/08 11:28:42 INFO ObjectStore: ObjectStore, initialize called
> 15/07/08 11:28:42 INFO Persistence: Property datanucleus.cache.level2 unknown 
> - will be ignored
> 15/07/08 11:28:42 INFO Persistence: Property 
> hive.metastore.integral.jdo.pushdown unknown - will be ignored
> 15/07/08 11:28:42 WARN Connection: BoneCP specified but not present in 
> CLASSPATH (or one of dependencies)
> 15/07/08 11:28:42 WARN Connection: BoneCP specified but not present in 
> CLASSPATH (or one of dependencies)
> 15/07/08 11:28:52 INFO ObjectStore: Setting MetaStore object pin classes with 
> hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 15/07/08 11:28:52 INFO MetaStoreDirectSql: MySQL check failed, assuming we 
> are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
> 15/07/08 11:28:53 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
> "embedded-only" so does not have its own datastore table.
> 15/07/08 11:28:53 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" 
> so does not have its own datastore table.
> 15/07/08 11:29:00 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
> "embedded-only" so does not have its own datastore table.
> 15/07/08 11:29:00 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" 
> so does not have its own datastore table.
> 15/07/08 11:29:02 INFO ObjectStore: Initialized ObjectStore
> 15/07/08 11:29:02 WARN ObjectStore: Version information not found in 
> metastore. hive.metastore.schema.verification is not enabled so recording 
> the schema version 0.13.1aa
> 15/07/08 11:29:03 INFO HiveMetaStore: Added admin role in metastore
> 15/07/08 11:29:03 INFO HiveMetaStore: Added public role in metastore
> 15/07/08 11:29:03 INFO HiveMetaStore: No user is added in admin role, since 
> config is empty
> 15/07/08 11:29:03 INFO SessionState: No Tez session required at this point. 
> hive.execution.engine=mr.
> 15/07/08 11:29:03 INFO SparkILoop: Created sql context (with Hive support)..
> SQL context available as sqlContext.
> 
> scala>
> 
> And then when I checked the CDH UI, I found Spark was configured as a History 
> Server. I am not using YARN; it is a standalone cluster with 4 nodes. I want 
> to connect my laptop to this cluster using SparkR or PySpark.
> Please suggest what I should do.
> 
> 
> Thank you, 
> Ashish
> 
> Sincerely,
> Ashish Dutt
> PhD Candidate
> Department of Information Systems
> University of Malaya, Lembah Pantai,
> 50603 Kuala Lumpur, Malaysia
> 
> On Wed, Jul 8, 2015 at 12:51 PM, Guru Medasani <gdm...@gmail.com> wrote:
> Hi Ashish,
> 
> Are you running Spark-on-YARN on the cluster with an instance of Spark 
> History server?
> 
> Also, if you are using Cloudera Manager with Spark on YARN, the Spark on YARN 
> service has a link to the history server web UI.
> 
> Can you paste the command and the output you are seeing in the thread?
> 
> Guru Medasani
> gdm...@gmail.com
> 
> 
> 
> > On Jul 7, 2015, at 10:42 PM, Ashish Dutt <ashish.du...@gmail.com> wrote:
> >
> > Hi,
> > I have CDH 5.4 installed on a Linux server. It has one cluster in which Spark 
> > is deployed as a history server.
> > I am trying to connect my laptop to the Spark history server.
> > When I run spark-shell <master ip>:<port number> I get the following output.
> > How can I verify that the worker is connected to the master?
> >
> > Thanks,
> > Ashish
> >
> 
> 
