Check once whether the YARN ResourceManager is actually up and reachable from the node you are submitting from.
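
For example, a quick sanity check from the submitting node (illustrative only; the conf path is an assumption based on the earlier run in this thread that exported /etc/hive/conf):

# does the ResourceManager answer at its real address, not the 0.0.0.0:8032 default?
yarn node -list

# make sure spark-submit sees a Hadoop conf dir that actually contains yarn-site.xml
export HADOOP_CONF_DIR=/etc/hive/conf
grep -A1 "yarn.resourcemanager" "$HADOOP_CONF_DIR/yarn-site.xml"

"Retrying connect to server: 0.0.0.0/0.0.0.0:8032" usually means the client never loaded a yarn-site.xml and fell back to the default ResourceManager address, i.e. a missing or wrong HADOOP_CONF_DIR rather than a ResourceManager that is down.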

On Dec 4, 2017 11:15 AM, "Kumar, Manoj H" <manoj.h.ku...@jpmorgan.com>
wrote:

> Thanks for your inputs. Now I am getting the error below ("Retrying connect to
> server ResourceManager"). Do you have any idea about this?
>
>
>
> 17/12/04 00:43:11 INFO SparkEnv: Registering OutputCommitCoordinator
>
> 17/12/04 00:43:12 INFO Utils: Successfully started service 'SparkUI' on
> port 4040.
>
> 17/12/04 00:43:12 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at
> http://169.84.229.53:4040
>
> 17/12/04 00:43:12 INFO SparkContext: Added JAR file:/apps/rft/rcmo/apps/
> kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/lib/kylin-job-2.1.0.jar
> at spark://169.84.229.53:36821/jars/kylin-job-2.1.0.jar with timestamp
> 1512366192196
>
> 17/12/04 00:43:12 INFO RMProxy: Connecting to ResourceManager at /
> 0.0.0.0:8032
>
> 17/12/04 00:43:13 INFO Client: Retrying connect to server:
> 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
>
> 17/12/04 00:43:14 INFO Client: Retrying connect to server:
> 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
>
> 17/12/04 00:43:15 INFO Client: Retrying connect to server:
> 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
>
>
>
> Spark Submit Command –
>
>
>
> /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-
> 2.1.0-KYLIN-2846-cdh57/spark/bin/spark-submit --class
> org.apache.kylin.common.util.SparkEntry --conf
> spark.yarn.queue=RCMO_Pool  --conf spark.history.fs.logDirectory=
> hdfs://sfpdev/tenants/rft/rcmo/kylin/spark-history --conf
> spark.master=yarn --conf spark-conf.spark.eventLog.enabled=true --conf
> spark.hadoop.yarn.timeline-service.enabled=false --conf
> spark.eventLog.dir=hdfs://sfpdev/tenants/rft/rcmo/kylin/spark-history
> --jars /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-
> 2.1.0-KYLIN-2846-cdh57/spark/jars/htrace-core-3.0.4.jar,/
> opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/htrace-
> core-3.2.0-incubating.jar,/opt/cloudera/parcels/CDH-5.9.
> 1-1.cdh5.9.1.p0.4/jars/hbase-client-1.2.0-cdh5.9.1.jar,/
> opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-
> common-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-5.9.
> 1-1.cdh5.9.1.p0.4/jars/hbase-protocol-1.2.0-cdh5.9.1.jar,/
> opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/
> metrics-core-2.2.0.jar,/opt/cloudera/parcels/CDH-5.9.1-1.
> cdh5.9.1.p0.4/jars/guava-12.0.1.jar, /apps/rft/rcmo/apps/kylin/
> kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/lib/kylin-job-2.1.0.jar
> -className org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable
> db_rft_rcmo_rfda.kylin_intermediate_drr_cube_saprk_1_
> b956efc7_d830_452d_bc4d_2886cf1fd19e -segmentId
> b956efc7-d830-452d-bc4d-2886cf1fd19e -confPath /apps/rft/rcmo/apps/kylin/
> kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/conf -output
> hdfs://sfpdev/tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_
> poc-kylin_metadata/kylin-c94a7679-47b7-4aae-9f44-
> 5d89c683b3b3/DRR_CUBE_SAPRK_1/cuboid/ -cubename DRR_CUBE_SAPRK_1
>
>
>
>
>
>
>
> Regards,
>
> Manoj
>
>
>
> *From:* prasanna lakshmi [mailto:prasannapadar...@gmail.com]
> *Sent:* Monday, December 04, 2017 10:10 AM
> *To:* user@kylin.apache.org
> *Subject:* Re: Apache kylin 2.1 on Spark
>
>
>
> Hi Manoj,
>
>
>
> This is the sample command I use with Kylin 2.1.0 and above.
>
>
>
> export HADOOP_CONF_DIR=/usr/hdp/2.4.3.0-227/hadoop/conf &&
> /usr/local/kylin/spark/bin/spark-submit --class
> org.apache.kylin.common.util.SparkEntry --conf spark.master=yarn --conf
> spark.hadoop.yarn.timeline-service.enabled=false --jars
> /usr/local/kylin/spark/jars/htrace-core-3.0.4.jar,/usr/
> hdp/2.4.3.0-227/hbase/lib/htrace-core-3.1.0-incubating.
> jar,/usr/hdp/2.4.3.0-227/hbase/lib/metrics-core-2.2.0.
> jar,/usr/hdp/2.4.3.0-227/hbase/lib/guava-12.0.1.jar,
> /usr/local/kylin/lib/kylin-job-2.2.0.jar -className
> org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable
> default.kylin_intermediate_test_cube_88bdaa8c_f414_42a4_aa5e_ef5b2d2d5c31
> -output hdfs://tbdcrajkot/kylin/kylin_metadata/kylin-69f60d7b-c2e3-
> 4e6e-b6c5-92b45b34f0a4/test_cube/cuboid/ -segmentId
> 88bdaa8c-f414-42a4-aa5e-ef5b2d2d5c31 -metaUrl kylin_metadata@hdfs
> ,path=hdfs://hadoop_cluster_name/kylin/kylin_metadata/metadata/
> 88bdaa8c-f414-42a4-aa5e-ef5b2d2d5c31 -cubename test_cube
>
>
>
> Here I am using the HDP version, and I removed all other configuration
> settings except these two:
>
> --conf spark.master=yarn
>
>  --conf spark.hadoop.yarn.timeline-service.enabled=false
>
>
>
> Take your command and try it out with these properties; maybe it will
> be helpful to you (a reduced sketch follows below).
>
>
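> A minimal sketch of that idea, reusing the paths from your own command (the
> <...> parts are placeholders, not real values; only the two --conf settings
> above are kept, plus the HADOOP_CONF_DIR export from your earlier working run):
>
> export HADOOP_CONF_DIR=/etc/hive/conf && \
> /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/spark/bin/spark-submit \
>   --class org.apache.kylin.common.util.SparkEntry \
>   --conf spark.master=yarn \
>   --conf spark.hadoop.yarn.timeline-service.enabled=false \
>   --jars <the same comma-separated htrace/hbase/metrics/guava jars as before> \
>   /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/lib/kylin-job-2.1.0.jar \
>   -className org.apache.kylin.engine.spark.SparkCubingByLayer \
>   -hiveTable <intermediate table> -segmentId <segment id> \
>   -confPath <kylin conf dir> -output <cuboid output path> -cubename <cube name>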
>
> On Mon, Dec 4, 2017 at 9:46 AM, Kumar, Manoj H <manoj.h.ku...@jpmorgan.com>
> wrote:
>
>
>
> Thanks to both of you for your advice on this. I tried it from the command
> line but it is erroring out; please advise. These class files need to be read
> from the jar file kylin-job-2.1.0.jar:
>
>
>
> SparkCubingByLayer
>
> SparkEntry
>
>
>
> /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-
> 2.1.0-KYLIN-2846-cdh57/spark/bin/spark-submit --class
> org.apache.kylin.common.util.SparkEntry  --conf
> spark.history.fs.logDirectory=hdfs://sfpdev/tenants/rft/rcmo/kylin/spark-history
> --conf spark.eventLog.dir=hdfs://sfpdev/tenants/rft/rcmo/kylin/spark-history
> --conf spark.yarn.queue=RCMO_Pool  --conf spark.master=yarn  --jars
> /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-
> 2.1.0-KYLIN-2846-cdh57/lib/kylin-job-2.1.0.jar -className
> org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable
> db_rft_rcmo_rfda.kylin_intermediate_drr_cube_saprk_2_
> b9d800a6_5e7e_46dc_847b_b2925369b574 -segmentId 
> b9d800a6-5e7e-46dc-847b-b2925369b574
> -confPath 
> /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/conf
> -output hdfs://sfpdev/tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_
> poc-kylin_metadata/kylin-65963dfd-393e-4198-9dd2-
> 51d54a6bf6c4/DRR_CUBE_SAPRK_2/cuboid/ -cubename DRR_CUBE_SAPRK_2
>
>
>
> Error: Unrecognized option: -className
>
>
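> For context: spark-submit parses options only until it reaches the application
> jar, and any option it does not recognize before that point fails with exactly
> this message. In the command above, kylin-job-2.1.0.jar appears only in --jars,
> so there is no application jar and spark-submit tries to interpret -className
> itself. A sketch of the expected shape (paths shortened to <...> placeholders):
>
> spark-submit --class org.apache.kylin.common.util.SparkEntry \
>   --conf spark.master=yarn --conf spark.yarn.queue=RCMO_Pool \
>   --jars <supporting htrace/hbase/metrics/guava jars> \
>   <kylin home>/lib/kylin-job-2.1.0.jar \
>   -className org.apache.kylin.engine.spark.SparkCubingByLayer \
>   -hiveTable <table> -segmentId <id> -confPath <conf dir> -output <path> -cubename <cube>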
>
> Regards,
>
> Manoj
>
>
>
> *From:* prasanna lakshmi [mailto:prasannapadar...@gmail.com]
> *Sent:* Saturday, December 02, 2017 10:29 PM
> *To:* user@kylin.apache.org
> *Subject:* Re: Apache kylin 2.1 on Spark
>
>
>
> Hi Manoj
>
> From my experience, try the same command at the prompt with Spark configured
> in YARN mode only, and remove all other conf settings from that command. If it
> is successful, then comment out all those properties in the kylin.properties
> file in the kylin/conf folder (a sketch follows below).
>
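> An illustrative sketch of what "comment out all those properties" could look
> like in kylin.properties (property names taken from the excerpt quoted further
> down; exactly which ones to keep is a judgment call):
>
> kylin.engine.spark-conf.spark.master=yarn
> kylin.engine.spark-conf.spark.hadoop.yarn.timeline-service.enabled=false
> #kylin.engine.spark-conf.spark.submit.deployMode=cluster
> #kylin.engine.spark-conf.spark.yarn.queue=RCMO_Pool
> #kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
> #kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history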
>
>
> On Nov 30, 2017 3:01 PM, "Kumar, Manoj H" <manoj.h.ku...@jpmorgan.com>
> wrote:
>
> Please advise on this, as I am running the cube building process using the
> Spark engine. What setting is missing here?
>
>
>
> kylin.env.hadoop-conf-dir=/etc/hive/conf
> #
> ## Estimate the RDD partition numbers
> kylin.engine.spark.rdd-partition-cut-mb=100
> #
> ## Minimal partition numbers of rdd
> kylin.engine.spark.min-partition=1
> #
> ## Max partition numbers of rdd
> kylin.engine.spark.max-partition=5000
> #
> ## Spark conf (default is in spark/conf/spark-defaults.conf)
> kylin.engine.spark-conf.spark.master=yarn
> kylin.engine.spark-conf.spark.submit.deployMode=cluster
> kylin.engine.spark-conf.spark.yarn.queue=RCMO_Pool
> kylin.engine.spark-conf.spark.executor.memory=4G
> kylin.engine.spark-conf.spark.executor.cores=2
> kylin.engine.spark-conf.spark.executor.instances=1
> kylin.engine.spark-conf.spark.eventLog.enabled=true
> kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
> kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
> kylin.engine.spark-conf.spark.hadoop.yarn.timeline-service.enabled=false
> #
> ## manually upload spark-assembly jar to HDFS and then set this property will avoid repeatedly uploading jar at runtime
> ##kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
> kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
>
>
>
>
>
>
>
> 2017-11-30 04:16:50,156 ERROR [Job 50b5d7ce-35e6-438d-94f9-0b969adfc1bb-192]
> spark.SparkExecutable:133 : error run spark job:
>
> 5207 java.io.IOException: OS command error exit with 1 -- export
> HADOOP_CONF_DIR=/etc/hive/conf && /apps/rft/rcmo/apps/kylin/
> kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/spark/bin/sp
> ark-submit --class org.apache.kylin.common.util.SparkEntry  --conf
> spark.executor.instances=1  --conf spark.yarn.queue=RCMO_Pool  --conf
> spark.history.fs.logDirectory=hdfs:///kylin/spark-history  --conf
> spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
> --conf spark.master=yarn  --conf 
> spark.hadoop.yarn.timeline-service.enabled=false
> --conf spark.executor.memory=4G  --conf spark.eventLog.enabled=true
> --conf spark.eventLog.dir=hdfs:///kylin/spark-history  --conf
> spark.executor.cores=2  --conf spark.submit.deployMode=cluster
> --files /etc/hbase/conf.cloudera.hbase/hbase-site.xml --jars
> /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-
> 2.1.0-KYLIN-2846-cdh57/spark/jars/htrace-core-3.0.4.jar,/opt/cloudera
> /parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/htrace-core-3.2.0-
> incubating.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.
> p0.4/jars/hbase-client-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-
> 5.9.1-1.cdh5.9.1.p0.4/jars/hbase-common-1.2.0-cdh5.9.1.
> jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/
> hbase-protocol-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5
> .9.1.p0.4/jars/metrics-core-2.2.0.jar,/opt/cloudera/parcels/
> CDH-5.9.1-1.cdh5.9.1.p0.4/jars/guava-12.0.1.jar,
> /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh
> 57/lib/kylin-job-2.1.0.jar -className 
> org.apache.kylin.engine.spark.SparkCubingByLayer
> -hiveTable db_rft_rcmo_rfda.kylin_intermediate_drr_cube_saprk_
> 15128ffc_67b5_476b_a942_48645346b64f -segmentId
> 15128ffc-67b5-476b-a942-48645346b64f -confPath /apps/rft/rcmo/apps/kylin/
> kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/conf -output
> hdfs://sfpdev/tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_
> kylin_metadata/kylin-50b5d7ce-35e6-438d-94f9-0b969adfc1bb/DRR_CUBE_SAPRK/cuboid/
> -cubename DRR_CUBE_SAPRK
>
> 5208 17/11/30 04:16:10 INFO client.ConfiguredRMFailoverProxyProvider:
> Failing over to rm76
>
> 5209 17/11/30 04:16:11 INFO yarn.Client: Requesting a new application from
> cluster with 19 NodeManagers
>
> 5210 17/11/30 04:16:11 INFO yarn.Client: Verifying our application has not
> requested more than the maximum memory capability of the cluster (272850 MB
> per container)
>
> 5211 17/11/30 04:16:11 INFO yarn.Client: Will allocate AM container, with
> 1408 MB memory including 384 MB overhead
>
> 5212 17/11/30 04:16:11 INFO yarn.Client: Setting up container launch
> context for our AM
>
> 5213 17/11/30 04:16:11 INFO yarn.Client: Setting up the launch environment
> for our AM container
>
> 5214 17/11/30 04:16:11 INFO yarn.Client: Preparing resources for our AM
> container
>
> 5215 17/11/30 04:16:11 INFO security.HDFSCredentialProvider: getting
> token for namenode: hdfs://sfpdev/user/a_rcmo_nd
>
> 5216 17/11/30 04:16:11 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN
> token 704970 for a_rcmo_nd on ha-hdfs:sfpdev
>
> 5217 17/11/30 04:16:14 INFO hive.metastore: Trying to connect to metastore
> with URI thrift://bdtpisr3n1.svr.us.jpmchase.net:9083
>
> 5218 17/11/30 04:16:14 INFO hive.metastore: Connected to metastore.
>
> 5219 17/11/30 04:16:15 WARN token.Token: Cannot find class for token kind
> HIVE_DELEGATION_TOKEN
>
> 5220 17/11/30 04:16:15 INFO security.HiveCredentialProvider: Get Token
> from hive metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 25
> 61 5f 72 63 6d 6f 5f 6e 64 40 4e 41 45 41 53 54 2e 41 44 2e 4a 50 4d
> 4f 52 47 41 4e 43 48 41 53 45 2e 43 4f 4d 04 68 69 76 65 00 8a 01 60 0c 36
> 56 a7 8a 01 60 30 42 da a7 8e 39 8a 22
>
> 5221 17/11/30 04:16:15 WARN yarn.Client: Neither spark.yarn.jars nor
> spark.yarn.archive is set, falling back to uploading libraries under
> SPARK_HOME.
>
> 5222 17/11/30 04:16:19 INFO yarn.Client: Uploading resource
> file:/tmp/spark-8cfdcc86-1bf2-4baf-b4c1-4f776c708490/__spark_libs__4437186145258387608.zip
> -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_
> 1509132635807_32198/__spark_libs__4437186145258387608.zip
>
> 5223 17/11/30 04:16:21 INFO yarn.Client: Uploading resource
> file:/apps/rft/rcmo/apps/kylin/kylin_namespace/apache-
> kylin-2.1.0-KYLIN-2846-cdh57/lib/kylin-job-2.1.0.jar ->
> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_
> 32198/kylin-job-2.1.0.jar
>
> 5224 17/11/30 04:16:21 INFO yarn.Client: Uploading resource
> file:/apps/rft/rcmo/apps/kylin/kylin_namespace/apache-
> kylin-2.1.0-KYLIN-2846-cdh57/spark/jars/htrace-core-3.0.4.jar ->
> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_
> 1509132635807_32198/htrace-core-3.0.4.jar
>
> 5225 17/11/30 04:16:21 INFO yarn.Client: Uploading resource
> file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/
> jars/htrace-core-3.2.0-incubating.jar -> hdfs://sfpdev/user/a_rcmo_nd/.spark
> Staging/application_1509132635807_32198/htrace-core-3.2.0-incubating.jar
>
> 5226 17/11/30 04:16:21 INFO yarn.Client: Uploading resource
> file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/
> jars/hbase-client-1.2.0-cdh5.9.1.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkS
> taging/application_1509132635807_32198/hbase-client-1.2.0-cdh5.9.1.jar
>
> 5227 17/11/30 04:16:21 INFO yarn.Client: Uploading resource
> file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/
> jars/hbase-common-1.2.0-cdh5.9.1.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkS
> taging/application_1509132635807_32198/hbase-common-1.2.0-cdh5.9.1.jar
>
> 5228 17/11/30 04:16:22 INFO yarn.Client: Uploading resource
> file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/
> jars/hbase-protocol-1.2.0-cdh5.9.1.jar -> hdfs://sfpdev/user/a_rcmo_nd/.spar
> kStaging/application_1509132635807_32198/hbase-protocol-1.2.0-cdh5.9.1.jar
>
> 5229 17/11/30 04:16:22 INFO yarn.Client: Uploading resource
> file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/metrics-core-2.2.0.jar
> -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/ap
> plication_1509132635807_32198/metrics-core-2.2.0.jar
>
>
>
> 5230 17/11/30 04:16:22 INFO yarn.Client: Uploading resource
> file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/guava-12.0.1.jar
> -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/applicat
> ion_1509132635807_32198/guava-12.0.1.jar
>
>
>
> 5231 17/11/30 04:16:22 INFO yarn.Client: Uploading resource
> file:/etc/hbase/conf.cloudera.hbase/hbase-site.xml ->
> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/
> hbase-site.xml
>
>
>
> 5232 17/11/30 04:16:22 INFO yarn.Client: Uploading resource
> file:/tmp/spark-8cfdcc86-1bf2-4baf-b4c1-4f776c708490/__spark_conf__5140031382599260375.zip
> -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_
> 1509132635807_32198/__spark_conf__.zip
>
> 5233 17/11/30 04:16:22 INFO spark.SecurityManager: Changing view acls to:
> a_rcmo_nd
>
> 5234 17/11/30 04:16:22 INFO spark.SecurityManager: Changing modify acls
> to: a_rcmo_nd
>
> 5235 17/11/30 04:16:22 INFO spark.SecurityManager: Changing view acls
> groups to:
>
> 5236 17/11/30 04:16:22 INFO spark.SecurityManager: Changing modify acls
> groups to:
>
>
>
> 5237 17/11/30 04:16:22 INFO spark.SecurityManager: SecurityManager:
> authentication disabled; ui acls disabled; users  with view permissions:
> Set(a_rcmo_nd); groups with view permissions: Set(); users  with
> modify permissions: Set(a_rcmo_nd); groups with modify permissions: Set()
>
> 5238 17/11/30 04:16:22 INFO yarn.Client: Submitting application
> application_1509132635807_32198 to ResourceManager
>
> 5239 17/11/30 04:16:22 INFO impl.YarnClientImpl: Submitted application
> application_1509132635807_32198
>
> 5240 17/11/30 04:16:23 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5241 17/11/30 04:16:23 INFO yarn.Client:
>
> 5242          client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
>
> 5243          diagnostics: N/A
>
> 5244          ApplicationMaster host: N/A
>
> 5245          ApplicationMaster RPC port: -1
>
> 5246          queue: root.RCMO_Pool
>
> 5247          start time: 1512033382431
>
> 5248          final status: UNDEFINED
>
> 5249          tracking URL: http://bdtpisr3n2.svr.us.
> jpmchase.net:8088/proxy/application_1509132635807_32198/
>
> 5250          user: a_rcmo_nd
>
> 5251 17/11/30 04:16:24 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5252 17/11/30 04:16:25 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5253 17/11/30 04:16:26 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5254 17/11/30 04:16:27 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5255 17/11/30 04:16:28 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5256 17/11/30 04:16:29 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5257 17/11/30 04:16:30 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5258 17/11/30 04:16:31 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5259 17/11/30 04:16:32 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5260 17/11/30 04:16:33 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5261 17/11/30 04:16:34 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5262 17/11/30 04:16:35 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5263 17/11/30 04:16:36 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5264 17/11/30 04:16:37 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5265 17/11/30 04:16:38 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
>
>
> 5266 17/11/30 04:16:39 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: ACCEPTED)
>
> 5276 17/11/30 04:16:49 INFO yarn.Client: Application report for
> application_1509132635807_32198 (state: FAILED)
>
> 5277 17/11/30 04:16:49 INFO yarn.Client:
>
> 5278          client token: N/A
>
> 5279          diagnostics: Application application_1509132635807_32198
> failed 2 times due to AM Container for appattempt_1509132635807_32198_000002
> exited with  exitCode: 15
>
> 5280 For more detailed output, check application tracking page:
> http://bdtpisr3n2.svr.us.jpmchase.net:8088/proxy/
> application_1509132635807_32198/Then, click on links to logs of each
> attempt.
>
> 5281 Diagnostics: Exception from container-launch.
>
> 5282 Container id: container_e113_1509132635807_32198_02_000001
>
> 5283 Exit code: 15
>
> 5284 Stack trace: ExitCodeException exitCode=15:
>
> 5285         at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
>
> 5286         at org.apache.hadoop.util.Shell.run(Shell.java:504)
>
> 5287         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
> Shell.java:786)
>
> 5288         at org.apache.hadoop.yarn.server.nodemanager.
> LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:373)
>
> 5289         at org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch.call(
> ContainerLaunch.java:302)
>
> 5290         at org.apache.hadoop.yarn.server.
> nodemanager.containermanager.launcher.ContainerLaunch.call(
> ContainerLaunch.java:82)
>
> 5291         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>
> 5292         at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
>
> 5293         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
>
> 5294         at java.lang.Thread.run(Thread.java:745)
>
> 5295
>
> 5296 Shell output: main : command provided 1
>
>
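> To dig further into exitCode 15, one option (assuming the YARN CLI is available
> to your user and log aggregation is enabled) is to pull the container logs for
> the failed attempt directly:
>
> yarn logs -applicationId application_1509132635807_32198
>
> or follow the tracking URL above and open the logs of attempt 000002.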
>
>
>
> Regards,
>
> Manoj
>
>
>
> This message is confidential and subject to terms at: http://
> www.jpmorgan.com/emaildisclaimer including on confidentiality, legal
> privilege, viruses and monitoring of electronic messages. If you are not
> the intended recipient, please delete this message and notify the sender
> immediately. Any unauthorized use is strictly prohibited.
>
