From: Benjamin Kim <bbuil...@gmail.com<mailto:bbuil...@gmail.com>>
Sent: Saturday, October 8, 2016 11:26 AM
Subject: Re: Spark SQL Thriftserver with HBase
To: Mich Talebzadeh <mich.talebza...@gmail.com<mailto:mich.talebza...@gmail.com>>
>>>> wrote:
>>>>
>>>> Like any other design what is your presentation layer and end users?
>>>>
>>>> Are they SQL centric users from a Tableau background, or will they use
>>>> Spark functional programming?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On 8 October 2016 at 20:15, Benjamin Kim <bbuil...@gmail.com> wrote:
>>>>>>>>>> Mich,
>>>>>>>>>>
_________
From: Benjamin Kim <bbuil...@gmail.com<mailto:bbuil...@gmail.com>>,
"user@spark.apache.org<mailto:user@spark.apache.org>"
<user@spark.apache.org<mailto:user@spark.apache.org>>
Subject: Re: Spark SQL Thriftserver with HBase
Instead of (or additionally to) saving results somewhere, you just start a
thriftserver that exposes the Spark tables
Felix Cheung <felixcheun...@hotmail.com<mailto:felixcheun...@hotmail.com>> wrote:
I wouldn't be too surprised Spark SQL - JDBC data source - Phoenix JDBC server
- HBASE would work better.
Without naming specifics, there are at least 4 or 5 different implementations
of HBASE sources, each at varying level of development and different
requirements (HBASE release version, Kerberos support etc)
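The Spark SQL - JDBC - Phoenix path suggested above needs no custom connector code; with Spark 1.x the generic JDBC data source can point at Phoenix. A sketch only: host names, the table name and the port are assumptions, and the Phoenix client jar must already be on the Spark classpath:

```shell
# Sketch: register a Phoenix table in Spark SQL through the generic JDBC
# data source (Spark 1.x CREATE TEMPORARY TABLE ... USING syntax).
# sts-host, zk-host and EVENTS are hypothetical placeholders.
beeline -u jdbc:hive2://sts-host:10055 -e "
CREATE TEMPORARY TABLE events
USING org.apache.spark.sql.jdbc
OPTIONS (
  url 'jdbc:phoenix:zk-host:2181:/hbase',
  dbtable 'EVENTS',
  driver 'org.apache.phoenix.jdbc.PhoenixDriver'
);
SELECT COUNT(*) FROM events;"
```

This keeps HBase access behind Phoenix's SQL layer, so the Kerberos and HBase-version concerns above are handled once, on the Phoenix side.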
_________
From: Benjamin Kim <bbuil...@gmail.com<mailto:bbuil...@gmail.com>>
Sent: Saturday, October 8, 2016 11:26 AM
Subject: Re: Spark SQL Thriftserver with HBase
To: Mich Talebzadeh
<mich.talebza...@gmail.com<mailto:mich.talebza...@gmail.com>>
>>>>>>>> Either way, by using JDBC everywhere, it simplifies
>>>>>>>> and unifies the code on the JDBC industry standard.
>>>>>>>>
>>>>>>>> Does this make sense?
>>>>>>>>
>>>>>>>>
>>>>>> the same data center, they will connect to a co-located database
>>>>>> server using JDBC. Either way, by using JDBC everywhere, it simplifies
>>>>>> and unifies the code on the JDBC industry standard.
>>>>>>
>>>>>> Does this make sense?
>>>>>>
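The "JDBC everywhere" argument above boils down to: the client code and tooling stay identical and only the connection URL changes per backend. A hedged illustration with placeholder hosts and ports:

```shell
# Same client (beeline), same SQL; only the JDBC URL differs per backend.
beeline -u jdbc:hive2://sts-host:10055 -e "SELECT 1"   # Spark Thrift Server
beeline -u jdbc:hive2://hs2-host:10000 -e "SELECT 1"   # HiveServer2
# Non-Hive backends work the same way given their driver jar on the classpath:
# beeline -d org.apache.phoenix.jdbc.PhoenixDriver \
#         -u jdbc:phoenix:zk-host:2181 -e "SELECT 1"
```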
>>> LinkedIn * https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> <https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
>>> loss, damage or destruction of data or any other property which may arise
>>> from relying on this email's technical content is explicitly disclaimed.
>>> The author will in no case be liable for any monetary damages arising from
>>> such loss, damage or destruction.
> It is best to describe the use case.
>
>
>
> HTH
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn *
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
To: Benjamin Kim <bbuil...@gmail.com>
Cc: Michael Segel <msegel_had...@hotmail.com>, Jörn Franke
<jornfra...@gmail.com>, Mich Talebzadeh <mich.talebza...@gmail.com>, Felix
Cheung <felixcheun...@hotmail.com>, "user@spark.apache.org"
<user@spark.apache.org>
Subject: Re: Spark SQL Thriftserver with HBase
>>>>> server using JDBC.
>>>>> Either way, by using JDBC everywhere, it simplifies and unifies the code
>>>>> on
>>>>> the JDBC industry standard.
>>>>>
>>>>> Does this make sense?
>>>>>
>>>>> Thanks,
>>>
>>>>
>>>> Like any other design what is your presentation layer and end users?
>>>>
>>>> Are they SQL centric users from a Tableau background, or will they use
>>>> Spark functional programming?
>>>>
>>>> It is best to describe the use case.
>>>>>> Mich Talebzadeh <mich.talebza...@gmail.com
>>>>>> <mailto:mich.talebza...@gmail.com>> wrote:
>>>>>>
>>>>>> Like any other design what is your presentation layer and end users?
>>>>>>
>>>>>> Are they SQL centric users from a Tableau background, or will they use
>>>>>> Spark functional programming?
_________
From: Benjamin Kim <bbuil...@gmail.com<mailto:bbuil...@gmail.com>>
Sent: Saturday, October 8, 2016 11:26 AM
Subject: Re: Spark SQL Thriftserver with HBase
To: Mich Talebzadeh
<mich.talebza...@gmail.com<mailto:mich.talebza...@gmail.com>>
Cc: <user@spark.apache.org<mailto:user@spark.apache.org>>
When you start STS, you pass the hiveconf parameter to it:
>>>
>>> ${SPARK_HOME}/sbin/start-thriftserver.sh \
>>> --master \
>>> --hiveconf hive.server2.thrift.port=10055 \
>>>
>>> and STS bypasses the Spark optimiser and uses the Hive optimizer and
>>> execution engine
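Assembled from the fragment above, the full invocation looks roughly like the following; the master URL, host name and user are placeholders, while hive.server2.thrift.port is a standard HiveServer2/STS property:

```shell
# Start the Spark Thrift Server with HiveServer2-style options (sketch).
${SPARK_HOME}/sbin/start-thriftserver.sh \
  --master spark://master-host:7077 \
  --hiveconf hive.server2.thrift.port=10055

# Then connect with any JDBC client, e.g. beeline:
beeline -u jdbc:hive2://sts-host:10055 -n someuser
```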
>
>> you much difference, unless they
>> have recently changed the design of STS.
>>
>> HTH
>>
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>
>
>
>
On 13 September 2016 at 22:32, Benjamin Kim <bbuil...@gmail.com> wrote:
> Does anyone have any thoughts about using Spark SQL Thriftserver in Spark
> 1.6.2 instead of HiveServer2? We are considering abandoning HiveServer2 for it.
Does anyone have any thoughts about using Spark SQL Thriftserver in Spark 1.6.2
instead of HiveServer2? We are considering abandoning HiveServer2 for it. Some
advice and gotchas would be nice to know.
Thanks,
Ben
> Another alternative would be to create a Spark SQL UDF and launch the Spark
> SQL Thrift server programmatically.
>
> Mohammed
>
> -Original Message-
> From: ReeceRobinson [mailto:re...@therobinsons.gen.nz]
> Sent: Sunday, October 18, 2015 8:05 PM
> To: user@spark.apache.org
> Subject: Spark SQL Thriftserver and Hive UDF in Production
>
ine client?
>>
>> Another alternative would be to create a Spark SQL UDF and launch the
>> Spark SQL Thrift server programmatically.
>>
>> Mohammed
>>
>> -Original Message-
>> From: ReeceRobinson [mailto:re...@therobinsons.gen.nz]
>> Sent: Sunday, October 18, 2015 8:05 PM
Does anyone have some advice on the best way to deploy a Hive UDF for use
with a Spark SQL Thriftserver where the client is Tableau using Simba ODBC
Spark SQL driver.
I have seen the hive documentation that provides an example of creating the
function using a hive client ie: CREATE FUNCTION
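The registration can also be done through the Thrift Server's JDBC endpoint rather than the Hive CLI, which matters here because the Tableau/ODBC sessions need the function visible on the server. A sketch only: the jar path, class name and function name are made-up placeholders, and whether a non-temporary CREATE FUNCTION persists across sessions depends on the Spark/Hive versions in play:

```shell
# Register a Hive UDF on a running Spark SQL Thrift Server via beeline (sketch).
# /path/to/my-udfs.jar and com.example.udf.MyUpper are hypothetical names.
beeline -u jdbc:hive2://sts-host:10055 -e "
ADD JAR /path/to/my-udfs.jar;
CREATE TEMPORARY FUNCTION my_upper AS 'com.example.udf.MyUpper';
SELECT my_upper('check');"
```

Note that a TEMPORARY function is per-session; for Tableau clients it may need to be registered at server start-up, or created as a permanent function where the deployed version supports that.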
https://issues.apache.org/jira/browse/SPARK-5159
I don't think it is yet supported until the HS2 code base is updated in
Spark hive-thriftserver project.
--
Date: Fri, 1 May 2015 15:56:30 +1000
Subject: Spark SQL ThriftServer Impersonation Support
From: nightwolf...@gmail.com
Hi guys,
Trying to use the SparkSQL Thriftserver with hive metastore. It seems that
hive meta impersonation works fine (when running Hive tasks). However, when
spinning up the SparkSQL thrift server, impersonation doesn't seem to work...
What settings do I need to enable impersonation?
I've copied the
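For reference, HiveServer2-style impersonation is normally driven by settings like the following (standard Hive/Hadoop properties; whether STS honors them in a given Spark release is exactly what SPARK-5159 tracks, and the service user name 'hive' below is an assumption):

```xml
<!-- hive-site.xml: run queries as the connecting user rather than the
     server's service user (standard HiveServer2 property) -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>

<!-- core-site.xml: allow the service user (assumed here to be 'hive')
     to impersonate other users -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```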
Hello,
We're running a spark sql thriftserver that several users connect to with
beeline. One limitation we've run into is that the current working database
(set with use db) is shared across all connections. So changing the
database on one connection changes the database for all connections
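Given the shared current-database problem described above, one workaround until sessions are isolated is to skip USE entirely and fully qualify every table name; the host, database and table names here are hypothetical:

```shell
# Avoid 'USE somedb' (it changes the database for every connection);
# qualify tables explicitly so no server-global state is touched:
beeline -u jdbc:hive2://sts-host:10055 \
        -e "SELECT * FROM sales_db.orders LIMIT 10"
```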
if you could share any findings on that JIRA. Thanks!
On Mon, Nov 17, 2014 at 11:01 AM, Michael Allman mich...@videoamp.com
wrote:
Hello,
We're running a spark sql thriftserver that several users connect to with
beeline. One limitation we've run into is that the current working database
(set with use db) is shared across all connections.