[jira] [Commented] (SPARK-16659) use Maven project to submit spark application via yarn-client

2016-07-22 Thread Jack Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389331#comment-15389331
 ] 

Jack Jiang commented on SPARK-16659:


I have fixed it.

> use Maven project to submit spark application via yarn-client
> -------------------------------------------------------------
>
> Key: SPARK-16659
> URL: https://issues.apache.org/jira/browse/SPARK-16659
> Project: Spark
>  Issue Type: Question
>Reporter: Jack Jiang
>  Labels: newbie
>
> I want to use Spark SQL to execute Hive SQL in my Maven project. Here is the
> main code:
>     System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.2.0-bin-master");
>     SparkConf sparkConf = new SparkConf()
>             .setAppName("test")
>             .setMaster("yarn-client");
>             // .set("hive.metastore.uris", "thrift://172.30.115.59:9083");
>     SparkContext ctx = new SparkContext(sparkConf);
>     // ctx.addJar("lib/hive-hbase-handler-0.14.0.2.2.6.0-2800.jar");
>     HiveContext sqlContext = new org.apache.spark.sql.hive.HiveContext(ctx);
>     String[] tables = sqlContext.tableNames();
>     for (String tablename : tables) {
>         System.out.println("tablename : " + tablename);
>     }
> When I run it, I get this error:
> 10:16:17,496  INFO Client:59 - 
>client token: N/A
>diagnostics: Application application_1468409747983_0280 failed 2 times due to AM Container for appattempt_1468409747983_0280_02 exited with exitCode: -1000
> For more detailed output, check the application tracking page:
> http://hadoop003.icccuat.com:8088/proxy/application_1468409747983_0280/
> Then, click on links to logs of each attempt.
> Diagnostics: File file:/C:/Users/uatxj990267/AppData/Local/Temp/spark-8874c486-893d-4ac3-a088-48e4cdb484e1/__spark_conf__9007071161920501082.zip does not exist
> java.io.FileNotFoundException: File file:/C:/Users/uatxj990267/AppData/Local/Temp/spark-8874c486-893d-4ac3-a088-48e4cdb484e1/__spark_conf__9007071161920501082.zip does not exist
>   at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:608)
>   at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:821)
>   at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:598)
>   at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414)
>   at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
>   at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
>   at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
>   at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Failing this attempt. Failing the application.
>ApplicationMaster host: N/A
>ApplicationMaster RPC port: -1
>queue: default
>start time: 1469067373412
>final status: FAILED
>tracking URL: http://hadoop003.icccuat.com:8088/cluster/app/application_1468409747983_0280
>user: uatxj990267
> 10:16:17,496 ERROR SparkContext:96 - Error initializing SparkContext.
> org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
>   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:123)
>   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
>   at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
>   at org.apache.spark.SparkContext.<init>(SparkContext.scala:523)
>   at com.huateng.test.SparkSqlDemo.main(SparkSqlDemo.java:33)
> But when I change setMaster("yarn-client") to setMaster("local[2]"), it works. What's wrong?
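
A likely cause (an assumption based on the stack trace, not confirmed in this thread): when the cluster's Hadoop configuration is not on the client's classpath, `fs.defaultFS` falls back to the local filesystem, so Spark stages `__spark_conf__*.zip` under the Windows temp directory (`file:/C:/Users/...`). The YARN NodeManagers then try to localize that `file:` path on the cluster hosts, where it does not exist, which matches the exit code -1000 above. `local[2]` works because nothing needs to be copied to the cluster. A minimal sketch of a client-side `core-site.xml` that would make the staging directory resolve to HDFS instead (the hostname is taken from the log; port 8020 is an assumed default):

```xml
<!-- core-site.xml on the submitting client's classpath (e.g. src/main/resources).
     fs.defaultFS must point at the cluster's HDFS so Spark stages
     __spark_conf__*.zip somewhere the NodeManagers can read it.
     hdfs://hadoop003.icccuat.com:8020 is an assumed NameNode address. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop003.icccuat.com:8020</value>
  </property>
</configuration>
```

In practice the cluster's full `core-site.xml` and `yarn-site.xml` (or a correctly set `HADOOP_CONF_DIR`) usually need to be visible to the client, not just this one property.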

[jira] [Commented] (SPARK-16659) use Maven project to submit spark application via yarn-client

2016-07-20 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15387270#comment-15387270
 ] 

Sean Owen commented on SPARK-16659:
---

I think there is probably an earlier error here. This is also probably better 
as a mailing list question rather than JIRA.
