"Job has not accepted resources" is a well-known error message -- you can search the Internet. 2 common causes come to mind: 1) you already have an application connected to the master -- by default a driver will grab all resources so unless that application disconnects, nothing else is allowed to connect 2) All your workers are dead/disconnected and there are no resources for your master to allocate
As the error suggests, "check your cluster UI to ensure that workers are registered and have sufficient resources". If you can't see what's wrong, maybe send a screenshot of your UI screen. But the error has nothing to do with Hive -- this is a spark-driver-connecting-to-master issue. The NativeCodeLoader warning is ignorable.

On Fri, Oct 9, 2015 at 6:52 AM, vinayak <vinayak.si...@tcs.com> wrote:

> Java code which I am trying to invoke:
>
> import org.apache.spark.SparkContext;
> import org.apache.spark.api.java.JavaSparkContext;
> import org.apache.spark.sql.DataFrame;
> import org.apache.spark.sql.hive.HiveContext;
>
> public class SparkHiveInsertor {
>
>     public static void main(String[] args) {
>         SparkContext sctx = new SparkContext();
>         System.out.println(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> starting " + sctx.isLocal());
>         JavaSparkContext ctx = new JavaSparkContext(sctx);
>         HiveContext hiveCtx = new HiveContext(ctx.sc());
>         DataFrame df = hiveCtx.sql("show tables");
>         System.out.println(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> count is " + df.count());
>     }
> }
>
> Command to submit the job:
>
> ./spark-submit --master spark://masterIp:7077 --deploy-mode client
>     --class com.ceg.spark.hive.sparkhive.SparkHiveInsertor
>     --executor-cores 2 --executor-memory 1gb
>     /home/someuser/Desktop/30sep2015/hivespark.jar
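One more thing -- on a standalone master you can also cap what the job asks for at submit time. A sketch based on your command above (--total-executor-cores is my suggestion for sharing the cluster; I'd also write the memory as "1g", since I'm not sure every Spark version accepts "1gb"):

./spark-submit --master spark://masterIp:7077 --deploy-mode client \
    --class com.ceg.spark.hive.sparkhive.SparkHiveInsertor \
    --executor-cores 2 --executor-memory 1g \
    --total-executor-cores 2 \
    /home/someuser/Desktop/30sep2015/hivespark.jar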