I generally recommend people use the HQL dialect provided by the
HiveContext when possible:
http://spark.apache.org/docs/latest/sql-programming-guide.html#getting-started
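
For example, a minimal sketch against the Spark 1.x Scala API (the table name
"src" and the query are purely illustrative, and an existing SparkContext named
sc is assumed):

  // Construct a HiveContext from an existing SparkContext
  val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)

  // Queries are expressed in HiveQL
  hiveContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
  hiveContext.sql("SELECT key, value FROM src").collect().foreach(println)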

I'll also note that this is distinct from the Hive on Spark project, which
is based on the Hive query optimizer / execution engine rather than the
Catalyst optimizer that ships with Spark.

On Thu, Jan 22, 2015 at 3:12 AM, Niranda Perera <niranda.per...@gmail.com>
wrote:

> Hi,
>
> I would like to know if there is an update on this.
>
> rgds
>
> On Mon, Jan 12, 2015 at 10:44 AM, Niranda Perera <niranda.per...@gmail.com> wrote:
>
>> Hi,
>>
>> I found that Spark SQL currently supports only a relatively small subset
>> of the SQL dialect.
>>
>> I would like to know the roadmap for the coming releases.
>>
>> And, are you focusing more on popularizing the 'Hive on Spark' SQL
>> dialect or the Spark SQL dialect?
>>
>> Rgds
>> --
>> Niranda
>>
>
>
>
> --
> Niranda
>
