[ https://issues.apache.org/jira/browse/SPARK-17722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17420999#comment-17420999 ]

Davide Benedetto commented on SPARK-17722:
------------------------------------------

Hi Partha,
I am hitting the same issue. Which YARN configurations have you set?
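For context, the YARN settings that most often cause "Initial job has not accepted any resources" are the NodeManager resource limits and the scheduler's per-container maximums. The property names below are the standard ones from yarn-site.xml; the values are only illustrative and must be adjusted to the actual node sizes:

    <!-- yarn-site.xml: illustrative values, adjust to your nodes -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value>   <!-- memory each NodeManager offers to containers -->
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>4</value>      <!-- vcores each NodeManager offers -->
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>4096</value>   <!-- largest single container the scheduler will grant -->
    </property>

If an executor request (spark.executor.memory plus overhead, or spark.executor.cores) exceeds these limits, YARN never grants the container and the job sits unaccepted.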



> YarnScheduler: Initial job has not accepted any resources
> ---------------------------------------------------------
>
>                 Key: SPARK-17722
>                 URL: https://issues.apache.org/jira/browse/SPARK-17722
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Partha Pratim Ghosh
>            Priority: Major
>
> Connected Spark in yarn mode from Eclipse (Java). On trying to run a task it
> gives the following -
> YarnScheduler: Initial job has not accepted any resources; check your cluster
> UI to ensure that workers are registered and have sufficient resources.
> The request reaches the Hadoop cluster scheduler, and from there we can see
> the job in the Spark UI. But there it says that no task has been assigned to
> it.
> The same code runs fine from spark-submit, where we need to remove the
> following lines -
> System.setProperty("java.security.krb5.conf", "C:\\xxx\\krb5.conf");
>
> org.apache.hadoop.conf.Configuration conf =
>         new org.apache.hadoop.conf.Configuration();
> conf.set("hadoop.security.authentication", "kerberos");
> UserGroupInformation.setConfiguration(conf);
> Following is the configuration - 
> import org.apache.hadoop.security.UserGroupInformation;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
> import org.apache.spark.sql.DataFrame;
> import org.apache.spark.sql.SQLContext;
>
> public class TestConnectivity {
>
>     public static void main(String[] args) {
>         System.setProperty("java.security.krb5.conf", "C:\\xxx\\krb5.conf");
>
>         // Kerberos authentication for the Hadoop cluster.
>         org.apache.hadoop.conf.Configuration conf =
>                 new org.apache.hadoop.conf.Configuration();
>         conf.set("hadoop.security.authentication", "kerberos");
>         UserGroupInformation.setConfiguration(conf);
>
>         SparkConf config = new SparkConf().setAppName("Test Spark");
>         config = config.setMaster("yarn-client");
>         config.set("spark.dynamicAllocation.enabled", "false");
>         config.set("spark.executor.memory", "2g");
>         config.set("spark.executor.instances", "1");
>         config.set("spark.executor.cores", "2");
>         //config.set("spark.driver.memory", "2g");
>         //config.set("spark.driver.cores", "1");
>         /*config.set("spark.executor.am.memory", "2g");
>         config.set("spark.executor.am.cores", "2");*/
>         config.set("spark.cores.max", "4");
>         config.set("yarn.nodemanager.resource.cpu-vcores", "4");
>         config.set("spark.yarn.queue", "root.root");
>         /*config.set("spark.deploy.defaultCores", "2");
>         config.set("spark.task.cpus", "2");*/
>         config.set("spark.yarn.jar",
>                 "file:/C:/xxx/spark-assembly_2.10-1.6.0-cdh5.7.1.jar");
>
>         JavaSparkContext sc = new JavaSparkContext(config);
>         SQLContext sqlcontext = new SQLContext(sc);
>
>         // Load the JSON file as an RDD of strings and build a DataFrame from it.
>         JavaRDD<String> logData =
>                 sc.textFile("sparkexamples/Employee.json").cache();
>         DataFrame df = sqlcontext.jsonRDD(logData);
>
>         df.show();
>         df.printSchema();
>
>         //UserGroupInformation.setConfiguration(conf);
>     }
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
