This problem arose from the error in the attachment: following advice found online, I swapped in the Spark 2.4.8 jars, but that approach does not seem to apply here; with those jars the build progressed even more slowly than before. I also tried my own jar, yet the problem in the attachment is still not resolved. It has been troubling me for several days.
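
The error pasted at the end of this mail bottoms out in a NoSuchMethodError on org.apache.hadoop.hbase.util.FSUtils.setStoragePolicy(FileSystem, Path, String). As a hedged sketch only (it assumes the `hbase` launcher script from the HDP install is on the PATH; adjust if your layout differs), the signatures that are actually present on the cluster's HBase classpath could be listed like this:

# sketch: show which setStoragePolicy overloads the cluster's HBase jars expose
javap -cp "$(hbase classpath)" org.apache.hadoop.hbase.util.FSUtils | grep -i setStoragePolicy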

On 2022-04-20 14:26:38, "Yaqian Zhang" <yaqian_zh...@126.com> wrote:

Hi:


Maybe you can check whether there is any abnormal output in the Spark task log?
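
For example, here is a hedged sketch of how the full YARN container logs for this build could be pulled (the application id is copied from the driver log pasted below; substitute the id of the job that is actually stuck):

# sketch: fetch all container logs for the Spark application launched by the Kylin step
yarn logs -applicationId application_1648895287036_0022 > spark_task.log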



On 2022-04-20, at 11:53 AM, 黄奇 <harlan86...@163.com> wrote:


During the Kylin cube build the job keeps stalling, even though cluster resources are sufficient. What could be causing this?

<1650426555(1).jpg><1650426762(1).png>

org.apache.kylin.engine.spark.exception.SparkException: OS command error exit 
with return code: 1, error message: 22/04/14 15:56:36 WARN SparkConf: The 
configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as 
of Spark 2.3 and may be removed in the future. Please use the new key 
'spark.executor.memoryOverhead' instead.
SparkEntry args:-className org.apache.kylin.storage.hbase.steps.SparkCubeHFile 
-partitions 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/rowkey_stats/part-r-00000_hfile
 -counterOutput 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/counter
 -cubename cube_demo -output 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/hfile
 -input 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/cuboid/
 -segmentId f4dfb8fa-6d83-60d7-8dd1-798c5f17673a -metaUrl 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
 -hbaseConfPath 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/hbase-conf.xml
Running org.apache.kylin.storage.hbase.steps.SparkCubeHFile -partitions 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/rowkey_stats/part-r-00000_hfile
 -counterOutput 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/counter
 -cubename cube_demo -output 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/hfile
 -input 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/cuboid/
 -segmentId f4dfb8fa-6d83-60d7-8dd1-798c5f17673a -metaUrl 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
 -hbaseConfPath 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/hbase-conf.xml
22/04/14 15:56:37 WARN SparkConf: The configuration key 
'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and 
may be removed in the future. Please use the new key 
'spark.executor.memoryOverhead' instead.
22/04/14 15:56:37 INFO SparkContext: Running Spark version 2.3.2.3.1.4.0-315
22/04/14 15:56:37 INFO SparkContext: Submitted application: Converting HFile 
for:cube_demo segment f4dfb8fa-6d83-60d7-8dd1-798c5f17673a
22/04/14 15:56:37 INFO SecurityManager: Changing view acls to: hdp
22/04/14 15:56:37 INFO SecurityManager: Changing modify acls to: hdp
22/04/14 15:56:37 INFO SecurityManager: Changing view acls groups to: 
22/04/14 15:56:37 INFO SecurityManager: Changing modify acls groups to: 
22/04/14 15:56:37 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(hdp); groups with 
view permissions: Set(); users  with modify permissions: Set(hdp); groups with 
modify permissions: Set()
22/04/14 15:56:38 INFO Utils: Successfully started service 'sparkDriver' on 
port 33470.
22/04/14 15:56:38 INFO SparkEnv: Registering MapOutputTracker
22/04/14 15:56:38 INFO SparkEnv: Registering BlockManagerMaster
22/04/14 15:56:38 INFO BlockManagerMasterEndpoint: Using 
org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/04/14 15:56:38 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/04/14 15:56:38 INFO DiskBlockManager: Created local directory at 
/tmp/blockmgr-ecb466b0-14ce-4069-b7ae-3e5395d127c7
22/04/14 15:56:38 INFO MemoryStore: MemoryStore started with capacity 912.3 MB
22/04/14 15:56:38 INFO SparkEnv: Registering OutputCommitCoordinator
22/04/14 15:56:38 INFO log: Logging initialized @2412ms
22/04/14 15:56:38 INFO Server: jetty-9.3.z-SNAPSHOT, build timestamp: 
2018-06-06T01:11:56+08:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
22/04/14 15:56:38 INFO Server: Started @2516ms
22/04/14 15:56:38 WARN Utils: Service 'SparkUI' could not bind on port 4040. 
Attempting port 4041.
22/04/14 15:56:38 INFO AbstractConnector: Started 
ServerConnector@32fdec40{HTTP/1.1,[http/1.1]}{0.0.0.0:4041}
22/04/14 15:56:38 INFO Utils: Successfully started service 'SparkUI' on port 
4041.
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@7fae4d4a{/jobs,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4ee33af7{/jobs/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6b04acb2{/jobs/job,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4a60ee36{/jobs/job/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4cfbaf4{/stages,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@58faa93b{/stages/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@5f212d84{/stages/stage,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6622a690{/stages/stage/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@30b9eadd{/stages/pool,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@497570fb{/stages/pool/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@412c995d{/storage,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@3249a1ce{/storage/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4dd94a58{/storage/rdd,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@2f4919b0{/storage/rdd/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@a8a8b75{/environment,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@75b21c3b{/environment/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@72be135f{/executors,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@155d1021{/executors/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4bd2f0dc{/executors/threadDump,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@2e647e59{/executors/threadDump/json,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@2c42b421{/static,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6e78fcf5{/,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@56febdc{/api,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@5300f14a{/jobs/job/kill,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@1f86099a{/stages/stage/kill,null,AVAILABLE,@Spark}
22/04/14 15:56:38 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at 
http://c1:4041
22/04/14 15:56:38 INFO SparkContext: Added JAR 
file:/data/workspace_tools/kylin/kylin-3.1.3/lib/kylin-job-3.1.3.jar at 
spark://c1:33470/jars/kylin-job-3.1.3.jar with timestamp 1649922998648
22/04/14 15:56:39 INFO ConfiguredRMFailoverProxyProvider: Failing over to rm2
22/04/14 15:56:39 INFO Client: Requesting a new application from cluster with 3 
NodeManagers
22/04/14 15:56:39 INFO Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.1.4.0-315/0/resource-types.xml
22/04/14 15:56:39 INFO Client: Verifying our application has not requested more 
than the maximum memory capability of the cluster (24576 MB per container)
22/04/14 15:56:39 INFO Client: Will allocate AM container, with 896 MB memory 
including 384 MB overhead
22/04/14 15:56:39 INFO Client: Setting up container launch context for our AM
22/04/14 15:56:39 INFO Client: Setting up the launch environment for our AM 
container
22/04/14 15:56:39 INFO Client: Preparing resources for our AM container
22/04/14 15:56:40 INFO Client: Use hdfs cache file as spark.yarn.archive for 
HDP, 
hdfsCacheFile:hdfs://testcluster/hdp/apps/3.1.4.0-315/spark2/spark2-hdp-yarn-archive.tar.gz
22/04/14 15:56:40 INFO Client: Source and destination file systems are the 
same. Not copying 
hdfs://testcluster/hdp/apps/3.1.4.0-315/spark2/spark2-hdp-yarn-archive.tar.gz
22/04/14 15:56:40 INFO Client: Distribute hdfs cache file as 
spark.sql.hive.metastore.jars for HDP, 
hdfsCacheFile:hdfs://testcluster/hdp/apps/3.1.4.0-315/spark2/spark2-hdp-hive-archive.tar.gz
22/04/14 15:56:40 INFO Client: Source and destination file systems are the 
same. Not copying 
hdfs://testcluster/hdp/apps/3.1.4.0-315/spark2/spark2-hdp-hive-archive.tar.gz
22/04/14 15:56:40 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-common-2.0.2.3.1.4.0-315.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-common-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-mapreduce-2.0.2.3.1.4.0-315.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-mapreduce-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-client-2.0.2.3.1.4.0-315.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-client-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-protocol-2.0.2.3.1.4.0-315.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-protocol-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-hadoop-compat-2.0.2.3.1.4.0-315.jar 
-> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-hadoop-compat-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/htrace-core-3.2.0-incubating.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/htrace-core-3.2.0-incubating.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/data/workspace_tools/kylin/kylin-3.1.3/tomcat/webapps/kylin/WEB-INF/lib/metrics-core-2.2.0.jar
 -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/metrics-core-2.2.0.jar
22/04/14 15:56:41 WARN Client: Same path resource 
file:///usr/hdp/3.1.4.0-315/hbase/lib/hbase-hadoop-compat-2.0.2.3.1.4.0-315.jar 
added multiple times to distributed cache.
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-hadoop2-compat-2.0.2.3.1.4.0-315.jar 
-> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-hadoop2-compat-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-server-2.0.2.3.1.4.0-315.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-server-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-shaded-miscellaneous-2.2.0.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-shaded-miscellaneous-2.2.0.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-metrics-api-2.0.2.3.1.4.0-315.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-metrics-api-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-metrics-2.0.2.3.1.4.0-315.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-metrics-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-shaded-protobuf-2.2.0.jar -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-shaded-protobuf-2.2.0.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/usr/hdp/3.1.4.0-315/hbase/lib/hbase-protocol-shaded-2.0.2.3.1.4.0-315.jar 
-> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/hbase-protocol-shaded-2.0.2.3.1.4.0-315.jar
22/04/14 15:56:41 INFO Client: Uploading resource 
file:/tmp/spark-076a5958-751c-4d30-ba40-f2bbc8ebc25e/__spark_conf__4629126606577555021.zip
 -> 
hdfs://testcluster/user/hdp/.sparkStaging/application_1648895287036_0022/__spark_conf__.zip
22/04/14 15:56:41 INFO SecurityManager: Changing view acls to: hdp
22/04/14 15:56:41 INFO SecurityManager: Changing modify acls to: hdp
22/04/14 15:56:41 INFO SecurityManager: Changing view acls groups to: 
22/04/14 15:56:41 INFO SecurityManager: Changing modify acls groups to: 
22/04/14 15:56:41 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(hdp); groups with 
view permissions: Set(); users  with modify permissions: Set(hdp); groups with 
modify permissions: Set()
22/04/14 15:56:41 INFO Client: Submitting application 
application_1648895287036_0022 to ResourceManager
22/04/14 15:56:42 INFO YarnClientImpl: Submitted application 
application_1648895287036_0022
22/04/14 15:56:42 INFO SchedulerExtensionServices: Starting Yarn extension 
services with app application_1648895287036_0022 and attemptId None
22/04/14 15:56:43 INFO Client: Application report for 
application_1648895287036_0022 (state: ACCEPTED)
22/04/14 15:56:43 INFO Client: 
         client token: N/A
         diagnostics: AM container is launched, waiting for AM container to 
Register with RM
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1649923001977
         final status: UNDEFINED
         tracking URL: http://c2:8088/proxy/application_1648895287036_0022/
         user: hdp
22/04/14 15:56:44 INFO Client: Application report for 
application_1648895287036_0022 (state: ACCEPTED)
22/04/14 15:56:45 INFO Client: Application report for 
application_1648895287036_0022 (state: ACCEPTED)
22/04/14 15:56:46 INFO YarnClientSchedulerBackend: Add WebUI Filter. 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> 
c0,c2, PROXY_URI_BASES -> 
http://c0:8088/proxy/application_1648895287036_0022,http://c2:8088/proxy/application_1648895287036_0022,
 RM_HA_URLS -> c0:8088,c2:8088), /proxy/application_1648895287036_0022
22/04/14 15:56:46 INFO JettyUtils: Adding filter 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /jobs, 
/jobs/json, /jobs/job, /jobs/job/json, /stages, /stages/json, /stages/stage, 
/stages/stage/json, /stages/pool, /stages/pool/json, /storage, /storage/json, 
/storage/rdd, /storage/rdd/json, /environment, /environment/json, /executors, 
/executors/json, /executors/threadDump, /executors/threadDump/json, /static, /, 
/api, /jobs/job/kill, /stages/stage/kill.
22/04/14 15:56:46 INFO Client: Application report for 
application_1648895287036_0022 (state: ACCEPTED)
22/04/14 15:56:46 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: 
ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
22/04/14 15:56:47 INFO Client: Application report for 
application_1648895287036_0022 (state: RUNNING)
22/04/14 15:56:47 INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: 192.168.70.97
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1649923001977
         final status: UNDEFINED
         tracking URL: http://c2:8088/proxy/application_1648895287036_0022/
         user: hdp
22/04/14 15:56:47 INFO YarnClientSchedulerBackend: Application 
application_1648895287036_0022 has started running.
22/04/14 15:56:47 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 46237.
22/04/14 15:56:47 INFO NettyBlockTransferService: Server created on c1:46237
22/04/14 15:56:47 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
22/04/14 15:56:47 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, c1, 46237, None)
22/04/14 15:56:47 INFO BlockManagerMasterEndpoint: Registering block manager 
c1:46237 with 912.3 MB RAM, BlockManagerId(driver, c1, 46237, None)
22/04/14 15:56:47 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, c1, 46237, None)
22/04/14 15:56:47 INFO BlockManager: external shuffle service port = 7447
22/04/14 15:56:47 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, c1, 46237, None)
22/04/14 15:56:47 INFO JettyUtils: Adding filter 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
22/04/14 15:56:47 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@d8835af{/metrics/json,null,AVAILABLE,@Spark}
22/04/14 15:56:47 INFO EventLoggingListener: Logging events to 
hdfs:/kylin/spark-history/application_1648895287036_0022
22/04/14 15:56:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.96:52884) 
with ID 1
22/04/14 15:56:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.95:40946) 
with ID 2
22/04/14 15:56:50 INFO BlockManagerMasterEndpoint: Registering block manager 
c1:39668 with 2004.6 MB RAM, BlockManagerId(1, c1, 39668, None)
22/04/14 15:56:50 INFO BlockManagerMasterEndpoint: Registering block manager 
c0:37524 with 2004.6 MB RAM, BlockManagerId(2, c0, 37524, None)
22/04/14 15:56:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.97:48160) 
with ID 3
22/04/14 15:56:50 INFO BlockManagerMasterEndpoint: Registering block manager 
c2:35369 with 2004.6 MB RAM, BlockManagerId(3, c2, 35369, None)
22/04/14 15:56:51 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.96:52888) 
with ID 4
22/04/14 15:56:51 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.95:40950) 
with ID 5
22/04/14 15:56:51 INFO BlockManagerMasterEndpoint: Registering block manager 
c1:42713 with 2004.6 MB RAM, BlockManagerId(4, c1, 42713, None)
22/04/14 15:56:51 INFO BlockManagerMasterEndpoint: Registering block manager 
c0:35342 with 2004.6 MB RAM, BlockManagerId(5, c0, 35342, None)
22/04/14 15:56:52 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.97:48180) 
with ID 9
22/04/14 15:56:52 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.97:48182) 
with ID 6
22/04/14 15:56:52 INFO BlockManagerMasterEndpoint: Registering block manager 
c2:43273 with 2004.6 MB RAM, BlockManagerId(9, c2, 43273, None)
22/04/14 15:56:52 INFO BlockManagerMasterEndpoint: Registering block manager 
c2:33316 with 2004.6 MB RAM, BlockManagerId(6, c2, 33316, None)
22/04/14 15:56:52 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.95:40964) 
with ID 8
22/04/14 15:56:52 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.96:52904) 
with ID 7
22/04/14 15:56:52 INFO BlockManagerMasterEndpoint: Registering block manager 
c1:43121 with 2004.6 MB RAM, BlockManagerId(7, c1, 43121, None)
22/04/14 15:56:52 INFO BlockManagerMasterEndpoint: Registering block manager 
c0:37205 with 2004.6 MB RAM, BlockManagerId(8, c0, 37205, None)
22/04/14 15:56:55 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.97:48216) 
with ID 12
22/04/14 15:56:55 INFO BlockManagerMasterEndpoint: Registering block manager 
c2:40221 with 2004.6 MB RAM, BlockManagerId(12, c2, 40221, None)
22/04/14 15:56:55 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.95:40990) 
with ID 11
22/04/14 15:56:55 INFO BlockManagerMasterEndpoint: Registering block manager 
c0:43680 with 2004.6 MB RAM, BlockManagerId(11, c0, 43680, None)
22/04/14 15:56:55 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.96:52928) 
with ID 13
22/04/14 15:56:55 INFO BlockManagerMasterEndpoint: Registering block manager 
c1:33063 with 2004.6 MB RAM, BlockManagerId(13, c1, 33063, None)
22/04/14 15:56:55 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.96:52930) 
with ID 10
22/04/14 15:56:55 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (192.168.70.95:40994) 
with ID 14
22/04/14 15:56:55 INFO BlockManagerMasterEndpoint: Registering block manager 
c1:40841 with 2004.6 MB RAM, BlockManagerId(10, c1, 40841, None)
22/04/14 15:56:55 INFO BlockManagerMasterEndpoint: Registering block manager 
c0:33271 with 2004.6 MB RAM, BlockManagerId(14, c0, 33271, None)
22/04/14 15:57:08 INFO YarnClientSchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 
30000(ms)
22/04/14 15:57:08 INFO AbstractHadoopJob: Ready to load KylinConfig from uri: 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
22/04/14 15:57:08 INFO KylinConfig: Creating new manager instance of class 
org.apache.kylin.cube.CubeManager
22/04/14 15:57:08 INFO CubeManager: Initializing CubeManager with config 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
22/04/14 15:57:08 INFO ResourceStore: Using metadata url 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
 for resource store
22/04/14 15:57:08 INFO HDFSResourceStore: hdfs meta path : 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
22/04/14 15:57:09 INFO KylinConfig: Creating new manager instance of class 
org.apache.kylin.cube.CubeDescManager
22/04/14 15:57:09 INFO CubeDescManager: Initializing CubeDescManager with 
config 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
22/04/14 15:57:09 INFO KylinConfig: Creating new manager instance of class 
org.apache.kylin.metadata.project.ProjectManager
22/04/14 15:57:09 INFO ProjectManager: Initializing ProjectManager with 
metadata url 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
22/04/14 15:57:09 INFO KylinConfig: Creating new manager instance of class 
org.apache.kylin.metadata.cachesync.Broadcaster
22/04/14 15:57:09 INFO KylinConfig: Creating new manager instance of class 
org.apache.kylin.metadata.model.DataModelManager
22/04/14 15:57:09 INFO KylinConfig: Creating new manager instance of class 
org.apache.kylin.metadata.TableMetadataManager
22/04/14 15:57:09 INFO MeasureTypeFactory: Checking custom measure types from 
kylin config
22/04/14 15:57:09 INFO MeasureTypeFactory: registering COUNT_DISTINCT(hllc), 
class org.apache.kylin.measure.hllc.HLLCMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering COUNT_DISTINCT(bitmap), 
class org.apache.kylin.measure.bitmap.BitmapMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering TOP_N(topn), class 
org.apache.kylin.measure.topn.TopNMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering RAW(raw), class 
org.apache.kylin.measure.raw.RawMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering 
EXTENDED_COLUMN(extendedcolumn), class 
org.apache.kylin.measure.extendedcolumn.ExtendedColumnMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering 
PERCENTILE_APPROX(percentile), class 
org.apache.kylin.measure.percentile.PercentileMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering COUNT_DISTINCT(dim_dc), 
class org.apache.kylin.measure.dim.DimCountDistinctMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering STDDEV_SUM(stddev_sum), 
class org.apache.kylin.measure.stddev.StdDevSumMeasureType$Factory
22/04/14 15:57:09 INFO MeasureTypeFactory: registering 
COUNT_DISTINCT(bitmap_map), class 
org.apache.kylin.measure.map.bitmap.BitmapMapMeasureType$Factory
22/04/14 15:57:09 INFO SparkCubeHFile: Input path: 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/cuboid/
22/04/14 15:57:09 INFO SparkCubeHFile: Output path: 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/hfile
22/04/14 15:57:09 INFO ZlibFactory: Successfully loaded & initialized 
native-zlib library
22/04/14 15:57:09 INFO CodecPool: Got brand-new decompressor [.deflate]
22/04/14 15:57:09 INFO SparkCubeHFile: There are 0 split keys, totally 1 hfiles
22/04/14 15:57:09 INFO SparkCubeHFile: Loading HBase configuration 
from:hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/hbase-conf.xml
22/04/14 15:57:10 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 355.9 KB, free 912.0 MB)
22/04/14 15:57:10 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in 
memory (estimated size 30.8 KB, free 911.9 MB)
22/04/14 15:57:10 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
c1:46237 (size: 30.8 KB, free: 912.3 MB)
22/04/14 15:57:10 INFO SparkContext: Created broadcast 0 from sequenceFile at 
SparkUtil.java:133
22/04/14 15:57:10 INFO FileOutputCommitter: File Output Committer Algorithm 
version is 2
22/04/14 15:57:10 INFO FileOutputCommitter: FileOutputCommitter skip cleanup 
_temporary folders under output directory:false, ignore cleanup failures: false
22/04/14 15:57:10 INFO SparkContext: Starting job: runJob at 
SparkHadoopWriter.scala:78
22/04/14 15:57:10 INFO FileInputFormat: Total input files to process : 5
22/04/14 15:57:10 INFO DAGScheduler: Registering RDD 1 (flatMapToPair at 
SparkCubeHFile.java:208)
22/04/14 15:57:10 INFO DAGScheduler: Got job 0 (runJob at 
SparkHadoopWriter.scala:78) with 1 output partitions
22/04/14 15:57:10 INFO DAGScheduler: Final stage: ResultStage 1 (runJob at 
SparkHadoopWriter.scala:78)
22/04/14 15:57:10 INFO DAGScheduler: Parents of final stage: 
List(ShuffleMapStage 0)
22/04/14 15:57:10 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
22/04/14 15:57:10 INFO DAGScheduler: Submitting ShuffleMapStage 0 
(MapPartitionsRDD[1] at flatMapToPair at SparkCubeHFile.java:208), which has no 
missing parents
22/04/14 15:57:10 INFO MemoryStore: Block broadcast_1 stored as values in 
memory (estimated size 27.9 KB, free 911.9 MB)
22/04/14 15:57:10 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in 
memory (estimated size 14.4 KB, free 911.9 MB)
22/04/14 15:57:10 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
c1:46237 (size: 14.4 KB, free: 912.3 MB)
22/04/14 15:57:11 INFO SparkContext: Created broadcast 1 from broadcast at 
DAGScheduler.scala:1039
22/04/14 15:57:11 INFO DAGScheduler: Submitting 5 missing tasks from 
ShuffleMapStage 0 (MapPartitionsRDD[1] at flatMapToPair at 
SparkCubeHFile.java:208) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 
4))
22/04/14 15:57:11 INFO YarnScheduler: Adding task set 0.0 with 5 tasks
22/04/14 15:57:11 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 0, 
c2, executor 12, partition 2, NODE_LOCAL, 7974 bytes)
22/04/14 15:57:11 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 1, 
c1, executor 10, partition 0, NODE_LOCAL, 7974 bytes)
22/04/14 15:57:11 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 2, 
c2, executor 3, partition 3, NODE_LOCAL, 7974 bytes)
22/04/14 15:57:11 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 3, 
c1, executor 13, partition 1, NODE_LOCAL, 7974 bytes)
22/04/14 15:57:11 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 
c1, executor 1, partition 4, NODE_LOCAL, 7978 bytes)
22/04/14 15:57:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
c1:39668 (size: 14.4 KB, free: 2004.6 MB)
22/04/14 15:57:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
c1:40841 (size: 14.4 KB, free: 2004.6 MB)
22/04/14 15:57:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
c1:33063 (size: 14.4 KB, free: 2004.6 MB)
22/04/14 15:57:12 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
c2:35369 (size: 14.4 KB, free: 2004.6 MB)
22/04/14 15:57:12 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
c2:40221 (size: 14.4 KB, free: 2004.6 MB)
22/04/14 15:57:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
c1:33063 (size: 30.8 KB, free: 2004.6 MB)
22/04/14 15:57:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
c1:40841 (size: 30.8 KB, free: 2004.6 MB)
22/04/14 15:57:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
c1:39668 (size: 30.8 KB, free: 2004.6 MB)
22/04/14 15:57:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
c2:40221 (size: 30.8 KB, free: 2004.6 MB)
22/04/14 15:57:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
c2:35369 (size: 30.8 KB, free: 2004.6 MB)
22/04/14 15:57:14 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 3) 
in 3052 ms on c1 (executor 13) (1/5)
22/04/14 15:57:14 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 1) 
in 3126 ms on c1 (executor 10) (2/5)
22/04/14 15:57:14 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) 
in 3210 ms on c1 (executor 1) (3/5)
22/04/14 15:57:14 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 0) 
in 3776 ms on c2 (executor 12) (4/5)
22/04/14 15:57:14 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 2) 
in 3786 ms on c2 (executor 3) (5/5)
22/04/14 15:57:14 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all 
completed, from pool 
22/04/14 15:57:14 INFO DAGScheduler: ShuffleMapStage 0 (flatMapToPair at 
SparkCubeHFile.java:208) finished in 3.908 s
22/04/14 15:57:14 INFO DAGScheduler: looking for newly runnable stages
22/04/14 15:57:14 INFO DAGScheduler: running: Set()
22/04/14 15:57:14 INFO DAGScheduler: waiting: Set(ResultStage 1)
22/04/14 15:57:14 INFO DAGScheduler: failed: Set()
22/04/14 15:57:14 INFO DAGScheduler: Submitting ResultStage 1 
(MapPartitionsRDD[3] at mapToPair at SparkCubeHFile.java:231), which has no 
missing parents
22/04/14 15:57:14 INFO MemoryStore: Block broadcast_2 stored as values in 
memory (estimated size 296.0 KB, free 911.6 MB)
22/04/14 15:57:14 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in 
memory (estimated size 54.4 KB, free 911.5 MB)
22/04/14 15:57:14 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
c1:46237 (size: 54.4 KB, free: 912.2 MB)
22/04/14 15:57:14 INFO SparkContext: Created broadcast 2 from broadcast at 
DAGScheduler.scala:1039
22/04/14 15:57:14 INFO DAGScheduler: Submitting 1 missing tasks from 
ResultStage 1 (MapPartitionsRDD[3] at mapToPair at SparkCubeHFile.java:231) 
(first 15 tasks are for partitions Vector(0))
22/04/14 15:57:14 INFO YarnScheduler: Adding task set 1.0 with 1 tasks
22/04/14 15:57:14 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 5, 
c2, executor 6, partition 0, NODE_LOCAL, 7660 bytes)
22/04/14 15:57:15 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
c2:33316 (size: 54.4 KB, free: 2004.5 MB)
22/04/14 15:57:16 INFO MapOutputTrackerMasterEndpoint: Asked to send map output 
locations for shuffle 0 to 192.168.70.97:48182
22/04/14 15:57:18 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 5, c2, 
executor 6): org.apache.spark.SparkException: Task failed while writing rows
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.util.FSUtils.setStoragePolicy(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Ljava/lang/String;)V
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3.configureStoragePolicy(HFileOutputFormat3.java:468)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:287)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:243)
        at 
org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more

22/04/14 15:57:18 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 6, 
c1, executor 7, partition 0, NODE_LOCAL, 7660 bytes)
22/04/14 15:57:18 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
c1:43121 (size: 54.4 KB, free: 2004.5 MB)
22/04/14 15:57:19 INFO MapOutputTrackerMasterEndpoint: Asked to send map output 
locations for shuffle 0 to 192.168.70.96:52904
22/04/14 15:57:20 INFO TaskSetManager: Lost task 0.1 in stage 1.0 (TID 6) on 
c1, executor 7: org.apache.spark.SparkException (Task failed while writing 
rows) [duplicate 1]
22/04/14 15:57:20 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 7, 
c1, executor 13, partition 0, NODE_LOCAL, 7660 bytes)
22/04/14 15:57:20 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
c1:33063 (size: 54.4 KB, free: 2004.5 MB)
22/04/14 15:57:20 INFO MapOutputTrackerMasterEndpoint: Asked to send map output 
locations for shuffle 0 to 192.168.70.96:52928
22/04/14 15:57:21 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 7) on 
c1, executor 13: org.apache.spark.SparkException (Task failed while writing 
rows) [duplicate 2]
22/04/14 15:57:21 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 8, 
c1, executor 1, partition 0, NODE_LOCAL, 7660 bytes)
22/04/14 15:57:21 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 
c1:39668 (size: 54.4 KB, free: 2004.5 MB)
22/04/14 15:57:21 INFO MapOutputTrackerMasterEndpoint: Asked to send map output 
locations for shuffle 0 to 192.168.70.96:52884
22/04/14 15:57:22 INFO TaskSetManager: Lost task 0.3 in stage 1.0 (TID 8) on 
c1, executor 1: org.apache.spark.SparkException (Task failed while writing 
rows) [duplicate 3]
22/04/14 15:57:22 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; 
aborting job
22/04/14 15:57:22 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all 
completed, from pool 
22/04/14 15:57:22 INFO YarnScheduler: Cancelling stage 1
22/04/14 15:57:22 INFO DAGScheduler: ResultStage 1 (runJob at 
SparkHadoopWriter.scala:78) failed in 7.182 s due to Job aborted due to stage 
failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 
in stage 1.0 (TID 8, c1, executor 1): org.apache.spark.SparkException: Task 
failed while writing rows
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.util.FSUtils.setStoragePolicy(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Ljava/lang/String;)V
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3.configureStoragePolicy(HFileOutputFormat3.java:468)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:287)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:243)
        at 
org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more

Driver stacktrace:
22/04/14 15:57:22 INFO DAGScheduler: Job 0 failed: runJob at 
SparkHadoopWriter.scala:78, took 11.220101 s
22/04/14 15:57:22 ERROR SparkHadoopWriter: Aborting job job_20220414155710_0003.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 
8, c1, executor 1): org.apache.spark.SparkException: Task failed while writing 
rows
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.util.FSUtils.setStoragePolicy(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Ljava/lang/String;)V
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3.configureStoragePolicy(HFileOutputFormat3.java:468)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:287)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:243)
        at 
org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more

Driver stacktrace:
        at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
        at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at scala.Option.foreach(Option.scala:257)
        at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2039)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2060)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1083)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at 
org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1081)
        at 
org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:831)
        at 
org.apache.kylin.storage.hbase.steps.SparkCubeHFile.execute(SparkCubeHFile.java:238)
        at 
org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
        at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Task failed while writing rows
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.util.FSUtils.setStoragePolicy(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Ljava/lang/String;)V
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3.configureStoragePolicy(HFileOutputFormat3.java:468)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:287)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:243)
        at 
org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more
22/04/14 15:57:22 INFO AbstractConnector: Stopped 
Spark@32fdec40{HTTP/1.1,[http/1.1]}{0.0.0.0:4041}
22/04/14 15:57:22 INFO SparkUI: Stopped Spark web UI at http://c1:4041
22/04/14 15:57:22 INFO YarnClientSchedulerBackend: Interrupting monitor thread
22/04/14 15:57:22 INFO YarnClientSchedulerBackend: Shutting down all executors
22/04/14 15:57:22 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each 
executor to shut down
22/04/14 15:57:22 INFO SchedulerExtensionServices: Stopping 
SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
22/04/14 15:57:22 INFO YarnClientSchedulerBackend: Stopped
22/04/14 15:57:22 INFO MapOutputTrackerMasterEndpoint: 
MapOutputTrackerMasterEndpoint stopped!
22/04/14 15:57:22 INFO MemoryStore: MemoryStore cleared
22/04/14 15:57:22 INFO BlockManager: BlockManager stopped
22/04/14 15:57:22 INFO BlockManagerMaster: BlockManagerMaster stopped
22/04/14 15:57:22 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: 
OutputCommitCoordinator stopped!
22/04/14 15:57:22 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.RuntimeException: error execute 
org.apache.kylin.storage.hbase.steps.SparkCubeHFile. Root cause: Job aborted.
        at 
org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:42)
        at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted.
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1083)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
        at 
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at 
org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1081)
        at 
org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:831)
        at 
org.apache.kylin.storage.hbase.steps.SparkCubeHFile.execute(SparkCubeHFile.java:238)
        at 
org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
        ... 11 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: 
Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 
1.0 (TID 8, c1, executor 1): org.apache.spark.SparkException: Task failed while 
writing rows
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.util.FSUtils.setStoragePolicy(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Ljava/lang/String;)V
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3.configureStoragePolicy(HFileOutputFormat3.java:468)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:287)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:243)
        at 
org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more

Driver stacktrace:
        at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
        at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
        at scala.Option.foreach(Option.scala:257)
        at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2039)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2060)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
        ... 21 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.util.FSUtils.setStoragePolicy(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Ljava/lang/String;)V
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3.configureStoragePolicy(HFileOutputFormat3.java:468)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:287)
        at 
org.apache.kylin.storage.hbase.steps.HFileOutputFormat3$1.write(HFileOutputFormat3.java:243)
        at 
org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.write(SparkHadoopWriter.scala:356)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:130)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
        at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
        at 
org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
        ... 8 more
22/04/14 15:57:22 INFO ShutdownHookManager: Shutdown hook called
22/04/14 15:57:22 INFO ShutdownHookManager: Deleting directory 
/tmp/spark-fffac995-72ca-493c-9f26-108ae48d8cb2
22/04/14 15:57:22 INFO ShutdownHookManager: Deleting directory 
/tmp/spark-076a5958-751c-4d30-ba40-f2bbc8ebc25e
The command is: 
export HADOOP_CONF_DIR=/usr/hdp/3.1.4.0-315/hadoop/conf && 
/usr/hdp/3.1.4.0-315/spark2/bin/spark-submit --class 
org.apache.kylin.common.util.SparkEntry --name "Convert Cuboid Data to HFile" 
--conf spark.executor.instances=40  --conf spark.yarn.queue=default  --conf 
spark.history.fs.logDirectory=hdfs:///kylin/spark-history  --conf 
spark.master=yarn  --conf spark.hadoop.yarn.timeline-service.enabled=false  
--conf spark.executor.memory=4G  --conf spark.eventLog.enabled=true  --conf 
spark.eventLog.dir=hdfs:///kylin/spark-history  --conf 
spark.yarn.executor.memoryOverhead=1024  --conf spark.driver.memory=2G  --conf 
spark.shuffle.service.enabled=true --jars 
/usr/hdp/3.1.4.0-315/hbase/lib/hbase-common-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-mapreduce-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-client-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-protocol-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-hadoop-compat-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/htrace-core-3.2.0-incubating.jar,/data/workspace_tools/kylin/kylin-3.1.3/tomcat/webapps/kylin/WEB-INF/lib/metrics-core-2.2.0.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-hadoop-compat-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-hadoop2-compat-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-server-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-shaded-miscellaneous-2.2.0.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-metrics-api-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-metrics-2.0.2.3.1.4.0-315.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-shaded-protobuf-2.2.0.jar,/usr/hdp/3.1.4.0-315/hbase/lib/hbase-protocol-shaded-2.0.2.3.1.4.0-315.jar,
 /data/workspace_tools/kylin/kylin-3.1.3/lib/kylin-job-3.1.3.jar -className 
org.apache.kylin.storage.hbase.steps.SparkCubeHFile -partitions 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/rowkey_stats/part-r-00000_hfile
 -counterOutput 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/counter
 -cubename cube_demo -output 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/hfile
 -input 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/cuboid/
 -segmentId f4dfb8fa-6d83-60d7-8dd1-798c5f17673a -metaUrl 
kylin_metadata@hdfs,path=hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/cube_demo/metadata
 -hbaseConfPath 
hdfs://testcluster/kylin/kylin_metadata/kylin-e3613781-b0f4-aeaa-2e14-e59a540e2e91/hbase-conf.xml
        at 
org.apache.kylin.engine.spark.SparkExecutable.doWork(SparkExecutable.java:405)
        at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:180)
        at 
org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:72)
        at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:180)
        at 
org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:119)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
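
Unrelated to the NoSuchMethodError itself, the driver log above also warns that spark.yarn.executor.memoryOverhead has been deprecated since Spark 2.3, and the generated spark-submit command still passes that key. As a hedged sketch only (the property names follow the kylin.engine.spark-conf.* convention used in kylin.properties, and the value is simply the one visible in the command above), the new key could be set instead:

# sketch for kylin.properties: use the post-Spark-2.3 key for executor memory overhead
kylin.engine.spark-conf.spark.executor.memoryOverhead=1024
# and drop or comment out the deprecated form:
# kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=1024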
