Just testing with Spark 1.3: it looks like it sets the proxy correctly to
the YARN RM host (0101), whereas in your 1.4 logs below PROXY_HOSTS points
at qtausc-pphd0167.hadoop.local while the tracking URL still goes through
qtausc-pphd0101.hadoop.local. A quick curl check is sketched after the log.

15/06/03 10:34:19 INFO yarn.ApplicationMaster: Registered signal handlers
for [TERM, HUP, INT]
15/06/03 10:34:20 INFO yarn.ApplicationMaster: ApplicationAttemptId:
appattempt_1432690361766_0596_000001
15/06/03 10:34:20 INFO spark.SecurityManager: Changing view acls to: nw
15/06/03 10:34:20 INFO spark.SecurityManager: Changing modify acls to: nw
15/06/03 10:34:20 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(nw); users with modify permissions: Set(nw)
15/06/03 10:34:20 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/06/03 10:34:21 INFO Remoting: Starting remoting
15/06/03 10:34:21 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkYarnAM@qtausc-pphd0137.hadoop.local:43972]
15/06/03 10:34:21 INFO util.Utils: Successfully started service
'sparkYarnAM' on port 43972.
15/06/03 10:34:21 INFO yarn.ApplicationMaster: Waiting for Spark driver to
be reachable.
15/06/03 10:34:21 INFO yarn.ApplicationMaster: Driver now available:
edge-node-77.skynet.hadoop:36387
15/06/03 10:34:21 INFO yarn.ApplicationMaster: Listen to driver:
akka.tcp://sparkDriver@edge-node-77.skynet.hadoop:36387/user/YarnScheduler
15/06/03 10:34:21 INFO yarn.ApplicationMaster: Add WebUI Filter.
AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS
-> qtausc-pphd0101.hadoop.local, PROXY_URI_BASES ->
http://qtausc-pphd0101.hadoop.local:8088/proxy/application_1432690361766_0596),/proxy/application_1432690361766_0596)
15/06/03 10:34:21 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/06/03 10:34:21 INFO yarn.YarnAllocator: Will request 2 executor
containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/06/03 10:34:21 INFO yarn.YarnAllocator: Container request (host: Any,
capability: <memory:1408, vCores:1, disks:0.0>)
15/06/03 10:34:21 INFO yarn.YarnAllocator: Container request (host: Any,
capability: <memory:1408, vCores:1, disks:0.0>)
15/06/03 10:34:21 INFO yarn.ApplicationMaster: Started progress reporter
thread - sleep time : 5000
15/06/03 10:34:21 INFO impl.AMRMClientImpl: Received new token for :
qtausc-pphd0151.hadoop.local:50941
15/06/03 10:34:21 INFO yarn.YarnAllocator: Launching container
container_1432690361766_0596_01_000002 for on host
qtausc-pphd0151.hadoop.local
15/06/03 10:34:21 INFO yarn.YarnAllocator: Launching ExecutorRunnable.
driverUrl: 
akka.tcp://sparkDriver@edge-node-77.skynet.hadoop:36387/user/CoarseGrainedScheduler,
 executorHostname: qtausc-pphd0151.hadoop.local
15/06/03 10:34:21 INFO yarn.YarnAllocator: Received 1 containers from YARN,
launching executors on 1 of them.
15/06/03 10:34:21 INFO yarn.ExecutorRunnable: Starting Executor Container
15/06/03 10:34:21 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-nodemanagers-proxies : 500
15/06/03 10:34:21 INFO yarn.ExecutorRunnable: Setting up
ContainerLaunchContext
15/06/03 10:34:21 INFO yarn.ExecutorRunnable: Preparing Local resources
15/06/03 10:34:21 INFO yarn.ExecutorRunnable: Prepared Local resources
Map(__spark__.jar -> resource { scheme: "maprfs" port: -1 file:
"/user/nw/.sparkStaging/application_1432690361766_0596/spark-assembly-1.3.1-hadoop2.5.1-mapr-1501.jar"
} size: 130013450 timestamp: 1433291656330 type: FILE visibility: PRIVATE)
15/06/03 10:34:21 INFO yarn.ExecutorRunnable: Setting up executor with
environment: Map(CLASSPATH ->
{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>/opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
SPARK_LOG_URL_STDERR ->
http://qtausc-pphd0151.hadoop.local:8042/node/containerlogs/container_1432690361766_0596_01_000002/nw/stderr?start=0,
SPARK_DIST_CLASSPATH ->
/opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1432690361766_0596,
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 130013450, SPARK_USER -> nw,
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE, SPARK_YARN_MODE -> true,
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1433291656330, SPARK_LOG_URL_STDOUT
->
http://qtausc-pphd0151.hadoop.local:8042/node/containerlogs/container_1432690361766_0596_01_000002/nw/stdout?start=0,
SPARK_YARN_CACHE_FILES ->
maprfs:/user/nw/.sparkStaging/application_1432690361766_0596/spark-assembly-1.3.1-hadoop2.5.1-mapr-1501.jar#__spark__.jar)
15/06/03 10:34:21 INFO yarn.ExecutorRunnable: Setting up executor with
commands: List({{JAVA_HOME}}/bin/java, -server,
-XX:OnOutOfMemoryError='kill %p', -Xms1024m, -Xmx1024m,
-Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.driver.port=36387',
-Dspark.yarn.app.container.log.dir=<LOG_DIR>,
org.apache.spark.executor.CoarseGrainedExecutorBackend, --driver-url,
akka.tcp://sparkDriver@edge-node-77.skynet.hadoop:36387/user/CoarseGrainedScheduler,
--executor-id, 1, --hostname, qtausc-pphd0151.hadoop.local, --cores, 1,
--app-id, application_1432690361766_0596, --user-class-path,
file:$PWD/__app__.jar, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
15/06/03 10:34:21 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: qtausc-pphd0151.hadoop.local:50941
15/06/03 10:34:26 INFO impl.AMRMClientImpl: Received new token for :
qtausc-pphd0177.hadoop.local:40237
15/06/03 10:34:26 INFO yarn.YarnAllocator: Launching container
container_1432690361766_0596_01_000003 for on host
qtausc-pphd0177.hadoop.local
15/06/03 10:34:26 INFO yarn.YarnAllocator: Launching ExecutorRunnable.
driverUrl: 
akka.tcp://sparkDriver@edge-node-77.skynet.hadoop:36387/user/CoarseGrainedScheduler,
 executorHostname: qtausc-pphd0177.hadoop.local
15/06/03 10:34:26 INFO yarn.ExecutorRunnable: Starting Executor Container
15/06/03 10:34:26 INFO yarn.YarnAllocator: Received 1 containers from YARN,
launching executors on 1 of them.
15/06/03 10:34:26 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-nodemanagers-proxies : 500
15/06/03 10:34:26 INFO yarn.ExecutorRunnable: Setting up
ContainerLaunchContext
15/06/03 10:34:26 INFO yarn.ExecutorRunnable: Preparing Local resources
15/06/03 10:34:26 INFO yarn.ExecutorRunnable: Prepared Local resources
Map(__spark__.jar -> resource { scheme: "maprfs" port: -1 file:
"/user/nw/.sparkStaging/application_1432690361766_0596/spark-assembly-1.3.1-hadoop2.5.1-mapr-1501.jar"
} size: 130013450 timestamp: 1433291656330 type: FILE visibility: PRIVATE)
15/06/03 10:34:26 INFO yarn.ExecutorRunnable: Setting up executor with
environment: Map(CLASSPATH ->
{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>/opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
SPARK_LOG_URL_STDERR ->
http://qtausc-pphd0177.hadoop.local:8042/node/containerlogs/container_1432690361766_0596_01_000003/nw/stderr?start=0,
SPARK_DIST_CLASSPATH ->
/opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1432690361766_0596,
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 130013450, SPARK_USER -> nw,
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE, SPARK_YARN_MODE -> true,
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1433291656330, SPARK_LOG_URL_STDOUT
->
http://qtausc-pphd0177.hadoop.local:8042/node/containerlogs/container_1432690361766_0596_01_000003/nw/stdout?start=0,
SPARK_YARN_CACHE_FILES ->
maprfs:/user/nw/.sparkStaging/application_1432690361766_0596/spark-assembly-1.3.1-hadoop2.5.1-mapr-1501.jar#__spark__.jar)
15/06/03 10:34:26 INFO yarn.ExecutorRunnable: Setting up executor with
commands: List({{JAVA_HOME}}/bin/java, -server,
-XX:OnOutOfMemoryError='kill %p', -Xms1024m, -Xmx1024m,
-Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.driver.port=36387',
-Dspark.yarn.app.container.log.dir=<LOG_DIR>,
org.apache.spark.executor.CoarseGrainedExecutorBackend, --driver-url,
akka.tcp://sparkDriver@edge-node-77.skynet.hadoop:36387/user/CoarseGrainedScheduler,
--executor-id, 2, --hostname, qtausc-pphd0177.hadoop.local, --cores, 1,
--app-id, application_1432690361766_0596, --user-class-path,
file:$PWD/__app__.jar, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
15/06/03 10:34:26 INFO impl.ContainerManagementProtocolProxy: Opening proxy
: qtausc-pphd0177.hadoop.local:40237
15/06/03 10:34:31 INFO impl.AMRMClientImpl: Received new token for :
qtausc-pphd0132.hadoop.local:44108
15/06/03 10:34:31 INFO yarn.YarnAllocator: Received 1 containers from YARN,
launching executors on 0 of them.
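
If it helps narrow things down, here is a rough check I'd run from the edge
node (hosts, ports and application ids are taken verbatim from the logs; the
AmIpFilter should 302-redirect direct hits back to the proxy, so a 500 or
connection refused from the proxy URL itself would point at the proxy/filter
wiring rather than at the UI):

# 1.3 app (0596): PROXY_HOSTS was the RM host, qtausc-pphd0101
curl -sI http://qtausc-pphd0101.hadoop.local:8088/proxy/application_1432690361766_0596/ | head -n 1
# 1.4 app (0593): same tracking URL, but PROXY_HOSTS was qtausc-pphd0167
curl -sI http://qtausc-pphd0101.hadoop.local:8088/proxy/application_1432690361766_0593/ | head -n 1
# Direct hit on the driver UI (yarn-client mode, so the UI lives on the driver)
curl -sI http://172.31.10.14:4040/ | head -n 1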

On Wed, Jun 3, 2015 at 10:29 AM, Night Wolf <nightwolf...@gmail.com> wrote:

> Hi all,
>
> Trying out Spark 1.4 on MapR Hadoop 2.5.1 running in yarn-client mode.
> The application master UI doesn't seem to work anymore: I get a 500 /
> connection refused, even when I hit the IP/port of the Spark UI directly.
> The logs don't show much.
>
> I built Spark with Java 6, Hive, and Scala 2.10 and 2.11. I've tried with
> and without -Phadoop-provided (a sanity check on what the assembly bundles
> is sketched after the build command).
>
> *Build command;*
>
> ./make-distribution.sh --name mapr4.0.2_yarn_j6_2.10 --tgz -Pyarn -Pmapr4
> -Phadoop-2.4 -Phive -Phadoop-provided
> -Dhadoop.version=2.5.1-mapr-1501 -Dyarn.version=2.5.1-mapr-1501 -DskipTests
> -e -X
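>
> As a rough sanity check on the resulting assembly (path assumed relative to
> the dist directory; just a guess that -Phadoop-provided might strip
> servlet/filter classes the UI needs):
>
> # list whether the assembly still bundles the YARN AM proxy filter and
> # javax.servlet classes
> jar tf lib/spark-assembly-1.4.0-SNAPSHOT-hadoop2.5.1-mapr-1501.jar \
>   | grep -Ei 'AmIpFilter|javax/servlet' | head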
>
> *Logs from spark shell;*
>
> 15/06/03 00:10:56 INFO server.AbstractConnector: Started
> SelectChannelConnector@0.0.0.0:4040
> 15/06/03 00:10:56 INFO util.Utils: Successfully started service 'SparkUI'
> on port 4040.
> 15/06/03 00:10:56 INFO ui.SparkUI: Started SparkUI at
> http://172.31.10.14:4040
> 15/06/03 00:10:57 INFO yarn.Client: Requesting a new application from
> cluster with 71 NodeManagers
> 15/06/03 00:10:57 INFO yarn.Client: Verifying our application has not
> requested more than the maximum memory capability of the cluster (112640 MB
> per container)
> 15/06/03 00:10:57 INFO yarn.Client: Will allocate AM container, with 896
> MB memory including 384 MB overhead
> 15/06/03 00:10:57 INFO yarn.Client: Setting up container launch context
> for our AM
> 15/06/03 00:10:57 INFO yarn.Client: Preparing resources for our AM
> container
> 15/06/03 00:10:57 INFO yarn.Client: Uploading resource
> file:///apps/spark/spark-1.4.0-SNAPSHOT-bin-mapr4.0.2_yarn_j6_2.11/lib/spark-assembly-1.4.0-SNAPSHOT-hadoop2.5.1-mapr-1501.jar
> ->
> maprfs:/user/nw/.sparkStaging/application_1432690361766_0593/spark-assembly-1.4.0-SNAPSHOT-hadoop2.5.1-mapr-1501.jar
> 15/06/03 00:10:58 INFO yarn.Client: Uploading resource
> file:/tmp/spark-5e42f904-ff83-4c93-bd35-4c3e20226a8a/__hadoop_conf__983379693214711.zip
> ->
> maprfs:/user/nw/.sparkStaging/application_1432690361766_0593/__hadoop_conf__983379693214711.zip
> 15/06/03 00:10:58 INFO yarn.Client: Setting up the launch environment for
> our AM container
> 15/06/03 00:10:58 INFO spark.SecurityManager: Changing view acls to: nw
> 15/06/03 00:10:58 INFO spark.SecurityManager: Changing modify acls to: nw
> 15/06/03 00:10:58 INFO spark.SecurityManager: SecurityManager:
> authentication disabled; ui acls disabled; users with view permissions:
> Set(nw); users with modify permissions: Set(nw)
> 15/06/03 00:10:58 INFO yarn.Client: Submitting application 593 to
> ResourceManager
> 15/06/03 00:10:58 INFO security.ExternalTokenManagerFactory: Initialized
> external token manager class -
> com.mapr.hadoop.yarn.security.MapRTicketManager
> 15/06/03 00:10:58 INFO impl.YarnClientImpl: Submitted application
> application_1432690361766_0593
> 15/06/03 00:10:59 INFO yarn.Client: Application report for
> application_1432690361766_0593 (state: ACCEPTED)
> 15/06/03 00:10:59 INFO yarn.Client:
>  client token: N/A
>  diagnostics: N/A
>  ApplicationMaster host: N/A
>  ApplicationMaster RPC port: -1
>  queue: default
>  start time: 1433290258143
>  final status: UNDEFINED
>  tracking URL:
> http://qtausc-pphd0101.hadoop.local:8088/proxy/application_1432690361766_0593/
>  user: nw
> 15/06/03 00:11:00 INFO yarn.Client: Application report for
> application_1432690361766_0593 (state: ACCEPTED)
> 15/06/03 00:11:01 INFO yarn.Client: Application report for
> application_1432690361766_0593 (state: ACCEPTED)
> 15/06/03 00:11:02 INFO yarn.Client: Application report for
> application_1432690361766_0593 (state: ACCEPTED)
> 15/06/03 00:11:03 INFO yarn.Client: Application report for
> application_1432690361766_0593 (state: ACCEPTED)
> 15/06/03 00:11:03 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
> ApplicationMaster registered as
> AkkaRpcEndpointRef(Actor[akka.tcp://sparkYarnAM@192.168.81.167:36542/user/YarnAM#1631897818])
> 15/06/03 00:11:03 INFO cluster.YarnClientSchedulerBackend: Add WebUI
> Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,
> Map(PROXY_HOSTS -> qtausc-pphd0167.hadoop.local, PROXY_URI_BASES ->
> http://qtausc-pphd0167.hadoop.local:8088/proxy/application_1432690361766_0593),
> /proxy/application_1432690361766_0593
> 15/06/03 00:11:03 INFO ui.JettyUtils: Adding filter:
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 15/06/03 00:11:04 INFO yarn.Client: Application report for
> application_1432690361766_0593 (state: RUNNING)
> 15/06/03 00:11:04 INFO yarn.Client:
>  client token: N/A
>  diagnostics: N/A
>  ApplicationMaster host: 192.168.81.167
>  ApplicationMaster RPC port: 0
>  queue: default
>  start time: 1433290258143
>  final status: UNDEFINED
>  tracking URL:
> http://qtausc-pphd0101.hadoop.local:8088/proxy/application_1432690361766_0593/
>  user: nw
> 15/06/03 00:11:04 INFO cluster.YarnClientSchedulerBackend: Application
> application_1432690361766_0593 has started running.
> 15/06/03 00:11:04 INFO util.Utils: Successfully started service
> 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45668.
> 15/06/03 00:11:04 INFO netty.NettyBlockTransferService: Server created on
> 45668
> 15/06/03 00:11:04 INFO storage.BlockManagerMaster: Trying to register
> BlockManager
> 15/06/03 00:11:04 INFO storage.BlockManagerMasterEndpoint: Registering
> block manager 172.31.10.14:45668 with 265.4 MB RAM,
> BlockManagerId(driver, 172.31.10.14, 45668)
> 15/06/03 00:11:04 INFO storage.BlockManagerMaster: Registered BlockManager
>
>
> *Logs from AM logs page in YARN;*
> 15/06/03 10:11:01 INFO yarn.ApplicationMaster: Registered signal handlers
> for [TERM, HUP, INT]
> 15/06/03 10:11:02 INFO yarn.ApplicationMaster: ApplicationAttemptId:
> appattempt_1432690361766_0593_000001
> 15/06/03 10:11:02 INFO spark.SecurityManager: Changing view acls to: nw
> 15/06/03 10:11:02 INFO spark.SecurityManager: Changing modify acls to: nw
> 15/06/03 10:11:02 INFO spark.SecurityManager: SecurityManager:
> authentication disabled; ui acls disabled; users with view permissions:
> Set(nw); users with modify permissions: Set(nw)
> 15/06/03 10:11:03 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 15/06/03 10:11:03 INFO Remoting: Starting remoting
> 15/06/03 10:11:03 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://sparkYarnAM@192.168.81.167:36542]
> 15/06/03 10:11:03 INFO util.Utils: Successfully started service
> 'sparkYarnAM' on port 36542.
> 15/06/03 10:11:03 INFO yarn.ApplicationMaster: Waiting for Spark driver to
> be reachable.
> 15/06/03 10:11:03 INFO yarn.ApplicationMaster: Driver now available:
> 172.31.10.14:59954
> 15/06/03 10:11:03 INFO yarn.ApplicationMaster$AMEndpoint: Add WebUI
> Filter.
> AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS
> -> qtausc-pphd0167.hadoop.local, PROXY_URI_BASES ->
> http://qtausc-pphd0167.hadoop.local:8088/proxy/application_1432690361766_0593
> ),/proxy/application_1432690361766_0593)
> 15/06/03 10:11:03 INFO yarn.YarnRMClient: Registering the ApplicationMaster
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Will request 2 executor
> containers, each with 1 cores and 1408 MB memory including 384 MB overhead
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Container request (host: Any,
> capability: <memory:1408, vCores:1, disks:0.0>)
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Container request (host: Any,
> capability: <memory:1408, vCores:1, disks:0.0>)
> 15/06/03 10:11:04 INFO yarn.ApplicationMaster: Started progress reporter
> thread with (heartbeat : 3000, initial allocation : 200) intervals
> 15/06/03 10:11:04 INFO impl.AMRMClientImpl: Received new token for :
> qtausc-pphd0146.hadoop.local:55935
> 15/06/03 10:11:04 INFO impl.AMRMClientImpl: Received new token for :
> qtausc-pphd0155.hadoop.local:45589
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Launching container
> container_1432690361766_0593_01_000002 for on host
> qtausc-pphd0146.hadoop.local
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Launching ExecutorRunnable.
> driverUrl: akka.tcp://sparkDriver@172.31.10.14:59954/user/CoarseGrainedScheduler,
>  executorHostname: qtausc-pphd0146.hadoop.local
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Launching container
> container_1432690361766_0593_01_000003 for on host
> qtausc-pphd0155.hadoop.local
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Starting Executor Container
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Launching ExecutorRunnable.
> driverUrl: akka.tcp://sparkDriver@172.31.10.14:59954/user/CoarseGrainedScheduler,
>  executorHostname: qtausc-pphd0155.hadoop.local
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Starting Executor Container
> 15/06/03 10:11:04 INFO yarn.YarnAllocator: Received 2 containers from
> YARN, launching executors on 2 of them.
> 15/06/03 10:11:04 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 15/06/03 10:11:04 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Setting up
> ContainerLaunchContext
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Setting up
> ContainerLaunchContext
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Preparing Local resources
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Preparing Local resources
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Prepared Local resources
> Map(__spark__.jar -> resource { scheme: "maprfs" port: -1 file:
> "/user/nw/.sparkStaging/application_1432690361766_0593/spark-assembly-1.4.0-SNAPSHOT-hadoop2.5.1-mapr-1501.jar"
> } size: 124419029 timestamp: 1433290257972 type: FILE visibility: PRIVATE)
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Prepared Local resources
> Map(__spark__.jar -> resource { scheme: "maprfs" port: -1 file:
> "/user/nw/.sparkStaging/application_1432690361766_0593/spark-assembly-1.4.0-SNAPSHOT-hadoop2.5.1-mapr-1501.jar"
> } size: 124419029 timestamp: 1433290257972 type: FILE visibility: PRIVATE)
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Setting up executor with
> environment: Map(CLASSPATH ->
> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>/opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
> SPARK_LOG_URL_STDERR ->
> http://qtausc-pphd0155.hadoop.local:8042/node/containerlogs/container_1432690361766_0593_01_000003/nw/stderr?start=0,
> SPARK_DIST_CLASSPATH ->
> /opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
> SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1432690361766_0593,
> SPARK_YARN_CACHE_FILES_FILE_SIZES -> 124419029, SPARK_USER -> nw,
> SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE, SPARK_YARN_MODE -> true,
> SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1433290257972, SPARK_LOG_URL_STDOUT
> ->
> http://qtausc-pphd0155.hadoop.local:8042/node/containerlogs/container_1432690361766_0593_01_000003/nw/stdout?start=0,
> SPARK_YARN_CACHE_FILES ->
> maprfs:/user/nw/.sparkStaging/application_1432690361766_0593/spark-assembly-1.4.0-SNAPSHOT-hadoop2.5.1-mapr-1501.jar#__spark__.jar)
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Setting up executor with
> environment: Map(CLASSPATH ->
> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>/opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
> SPARK_LOG_URL_STDERR ->
> http://qtausc-pphd0146.hadoop.local:8042/node/containerlogs/container_1432690361766_0593_01_000002/nw/stderr?start=0,
> SPARK_DIST_CLASSPATH ->
> /opt/mapr/lib/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/*:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/common/lib/*:/opt/mapr/hive/hive-current/lib/*,
> SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1432690361766_0593,
> SPARK_YARN_CACHE_FILES_FILE_SIZES -> 124419029, SPARK_USER -> nw,
> SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE, SPARK_YARN_MODE -> true,
> SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1433290257972, SPARK_LOG_URL_STDOUT
> ->
> http://qtausc-pphd0146.hadoop.local:8042/node/containerlogs/container_1432690361766_0593_01_000002/nw/stdout?start=0,
> SPARK_YARN_CACHE_FILES ->
> maprfs:/user/nw/.sparkStaging/application_1432690361766_0593/spark-assembly-1.4.0-SNAPSHOT-hadoop2.5.1-mapr-1501.jar#__spark__.jar)
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Setting up executor with
> commands: List({{JAVA_HOME}}/bin/java, -server,
> -XX:OnOutOfMemoryError='kill %p', -Xms1024m, -Xmx1024m,
> -Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.driver.port=59954',
> -Dspark.yarn.app.container.log.dir=<LOG_DIR>,
> org.apache.spark.executor.CoarseGrainedExecutorBackend, --driver-url,
> akka.tcp://sparkDriver@172.31.10.14:59954/user/CoarseGrainedScheduler,
> --executor-id, 2, --hostname, qtausc-pphd0155.hadoop.local, --cores, 1,
> --app-id, application_1432690361766_0593, --user-class-path,
> file:$PWD/__app__.jar, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
> 15/06/03 10:11:04 INFO yarn.ExecutorRunnable: Setting up executor with
> commands: List({{JAVA_HOME}}/bin/java, -server,
> -XX:OnOutOfMemoryError='kill %p', -Xms1024m, -Xmx1024m,
> -Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.driver.port=59954',
> -Dspark.yarn.app.container.log.dir=<LOG_DIR>,
> org.apache.spark.executor.CoarseGrainedExecutorBackend, --driver-url,
> akka.tcp://sparkDriver@172.31.10.14:59954/user/CoarseGrainedScheduler,
> --executor-id, 1, --hostname, qtausc-pphd0146.hadoop.local, --cores, 1,
> --app-id, application_1432690361766_0593, --user-class-path,
> file:$PWD/__app__.jar, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
> 15/06/03 10:11:04 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : qtausc-pphd0155.hadoop.local:45589
> 15/06/03 10:11:04 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : qtausc-pphd0146.hadoop.local:55935
>
>
> Any ideas what the problem is?
>
> Cheers,
> ~NW
>
>
