Based on the Cloudera blog post "http://blog.cloudera.com/blog/2015/02/download-the-hive-on-spark-beta/", I set spark.executor.instances=12 (I have four nodes). However, when I execute a Hive SQL query, Spark always launches only 3 containers: 1 driver and 2 executors.
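For context, this is roughly how the property was set in the Hive session (a sketch of the blog-post setup; whether dynamic allocation, if enabled, overrides a fixed instance count is something I am not sure about):

```sql
-- Run inside the Hive CLI / Beeline session before the query
set hive.execution.engine=spark;
set spark.executor.instances=12;
-- Assumption: if dynamic allocation is on, it may override the fixed
-- count, so disabling it explicitly might be needed:
set spark.dynamicAllocation.enabled=false;
```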
Is this a bug?

Jack