Yes, I forgot that anything with a REPL, spark-sql and spark-shell both included,
is simply a convenience interface over spark-submit.

Thanks Saisai for pointing out.
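To make the point concrete, here is a minimal sketch (not Spark's actual source, and the error wording is assumed) of the kind of check spark-submit performs: front ends that embed a REPL or interactive shell are rejected when cluster deploy mode is requested.

```shell
# Hypothetical sketch of spark-submit's validation logic: interactive
# front ends (spark-shell's repl.Main, spark-sql's SparkSQLCLIDriver)
# cannot be launched with --deploy-mode cluster.
validate_deploy_mode() {
  main_class=$1
  deploy_mode=$2
  case "$main_class" in
    org.apache.spark.repl.Main|org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver)
      if [ "$deploy_mode" = "cluster" ]; then
        # Fails for interactive shells in cluster mode.
        echo "Error: Cluster deploy mode is not applicable to Spark shells." >&2
        return 1
      fi
      ;;
  esac
  return 0
}
```

A non-shell application class passes this check in either mode; only the interactive front ends are pinned to client deploy mode, since their console must live where the user is.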

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 30 June 2016 at 06:01, Saisai Shao <sai.sai.s...@gmail.com> wrote:

> I think you cannot use the SQL client in cluster mode; the same applies to
> spark-shell/pyspark, which have a REPL. All of these applications can only
> be started with the client deploy mode.
>
> On Thu, Jun 30, 2016 at 12:46 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Hi,
>>
>> When you use spark-shell, or for that matter spark-sql, you are starting
>> spark-submit under the bonnet. These two shells exist to make working on
>> Spark easier.
>>
>>
>> However, if you look at what $SPARK_HOME/bin/spark-sql does in the
>> script, you will notice my point:
>>
>>
>>
>> exec "${SPARK_HOME}"/bin/spark-submit --class
>> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver "$@"
>>
>> So that is basically a spark-submit JVM.
>>
>>
>>
>> Since it is using spark-submit, it accepts all the parameters that
>> spark-submit does. You can test this with your own customised shell
>> script, passing the parameters through.
>>
>>
>> ${SPARK_HOME}/bin/spark-submit \
>>                 --driver-memory xG \
>>                 --num-executors n \
>>                 --executor-memory xG \
>>                 --executor-cores m \
>>                 --master yarn \
>>                 --deploy-mode cluster \
>>                 --class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver "$@"
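The wrapper pattern described above can be reduced to a tiny illustrative helper (names here are mine, not Spark's code) that builds the spark-submit command line a spark-sql-style shell would exec: the driver class is fixed, and everything the user typed is forwarded untouched via "$@".

```shell
# Illustrative helper: print the spark-submit invocation a spark-sql-style
# wrapper would exec. The class is hard-wired; user flags pass straight
# through. SPARK_HOME defaults to /opt/spark here purely for illustration.
build_submit_cmd() {
  printf '%s ' "${SPARK_HOME:-/opt/spark}/bin/spark-submit" \
    --class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver "$@"
  printf '\n'
}
```

Calling it with, say, `--master yarn --deploy-mode client` shows those flags appended after the fixed class, which is exactly why spark-sql accepts any spark-submit parameter.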
>>
>>
>> Does your version of spark support cluster mode?
>>
>>
>> HTH
>>
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>>
>> On 30 June 2016 at 05:16, Huang Meilong <ims...@outlook.com> wrote:
>>
>>> Hello,
>>>
>>>
>>> I added the --deploy-mode flag to the spark-sql CLI like this:
>>>
>>> $ spark-sql --deploy-mode cluster --master yarn -e "select * from mx"
>>>
>>>
>>> It showed an error saying "Cluster deploy mode is not applicable to Spark
>>> SQL shell", but "spark-sql --help" lists a "--deploy-mode" option. Is
>>> this a bug?
>>>
>>
>>
>
