Instead of (or in addition to) saving results somewhere, you just start a
thriftserver that exposes the Spark tables.
_____________
Felix Cheung <felixcheun...@hotmail.com> wrote:

I wouldn't be too surprised if Spark SQL - JDBC data source - Phoenix JDBC
server - HBASE would work better.

Without naming specifics, there are at least 4 or 5 different implementations
of HBASE sources, each at a varying level of development and with different
requirements (HBASE release version, Kerberos support, etc.).
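To make the first suggestion concrete, here is a minimal sketch of reading a
Phoenix table through Spark SQL's generic JDBC data source (assumptions, not
from the thread: a ZooKeeper quorum zk1:2181, a Phoenix table WEB_STAT, and
the Phoenix client jar on the Spark classpath):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("phoenix-over-jdbc").getOrCreate()

// zk1:2181 and WEB_STAT are hypothetical placeholders.
val df = spark.read.format("jdbc")
  .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
  .option("url", "jdbc:phoenix:zk1:2181")
  .option("dbtable", "WEB_STAT")
  .load()

// Register the table so the Spark Thrift server can serve it to JDBC clients.
df.createOrReplaceTempView("web_stat")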
_____________
From: Benjamin Kim <bbuil...@gmail.com>
Sent: Saturday, October 8, 2016 11:26 AM
Subject: Re: Spark SQL Thriftserver with HBase
To: Mich Talebzadeh <mich.talebza...@gmail.com>
>>>>>> … the same data center, they will connect to a collocated database
>>>>>> server using JDBC. Either way, by using JDBC everywhere, it simplifies
>>>>>> and unifies the code on the JDBC industry standard.
>>>>>>
>>>>>> Does this make sense?
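For context, a minimal sketch of such a JDBC client talking to the Thrift
server (assumptions, not from the thread: STS listening on rhes564:10055 as
shown later, an empty password, and a hypothetical table named sales; the
Hive JDBC driver jar must be on the classpath):

import java.sql.DriverManager

// Older hive-jdbc versions need the driver loaded explicitly.
Class.forName("org.apache.hive.jdbc.HiveDriver")

// The Spark Thrift server speaks the HiveServer2 wire protocol, so a plain
// hive2 JDBC URL is all a client needs.
val conn = DriverManager.getConnection("jdbc:hive2://rhes564:10055", "hduser", "")
val stmt = conn.createStatement()
val rs = stmt.executeQuery("SELECT COUNT(*) FROM sales") // sales is hypothetical
while (rs.next()) println(rs.getLong(1))
rs.close(); stmt.close(); conn.close()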
_____________
To: Benjamin Kim <bbuil...@gmail.com>
Cc: Michael Segel <msegel_had...@hotmail.com>, Jörn Franke
<jornfra...@gmail.com>, Mich Talebzadeh <mich.talebza...@gmail.com>, Felix
Cheung <felixcheun...@hotmail.com>, user@spark.apache.org
Subject: Re: Spark SQL Thriftserver with HBase
>>>> Like any other design, what is your presentation layer and end users?
>>>>
>>>> Are they SQL-centric users from a Tableau background, or might they use
>>>> Spark functional programming?
>>>>
>>>> It is best to describe the use case.
>>>>
>>>> HTH
>>>>
>>>> Dr Mich Talebzadeh
_____________
Actually this is what it says:

Connecting to jdbc:hive2://rhes564:10055
Connected to: Spark SQL (version 2.0.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1.spark2 by Apache Hive

So it uses Spark SQL. However, they do not seem …
_____________
Hi, all

Spark STS just uses HiveContext inside and does not use MR.

Anyway, Spark STS misses some HiveServer2 functionalities such as HA (see
https://issues.apache.org/jira/browse/SPARK-11100) and has some known
issues there.

So, you'd be better off checking all the JIRA issues related to STS for …
_____________
Hi,

AFAIK STS uses Spark SQL and not MapReduce. Is that not correct?

Best
Ayan

On Wed, Sep 14, 2016 at 8:51 AM, Mich Talebzadeh wrote:
> STS will rely on Hive execution engine. My Hive uses Spark execution
> engine so STS will pass the SQL to Hive and let it do the work and return
> the result set.
_____________
STS will rely on Hive execution engine. My Hive uses Spark execution engine
so STS will pass the SQL to Hive and let it do the work and return the
result set
which beeline
/usr/lib/spark-2.0.0-bin-hadoop2.6/bin/beeline
${SPARK_HOME}/bin/beeline -u jdbc:hive2://rhes564:10055 -n hduser -p
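(For context: the engine switch Mich refers to with "My Hive uses Spark
execution engine" is controlled by the standard hive.execution.engine
property; a session-level sketch, assuming a working Hive on Spark
installation:

set hive.execution.engine=spark;

Set in hive-site.xml instead, it applies to every HiveServer2 session.)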
_____________

Mich,

It sounds like there would be no harm in changing then. Are you saying that
using STS would still use MapReduce to run the SQL statements? What our users
are doing in our CDH 5.7.2 installation is changing the execution engine to
Spark when connected to HiveServer2 to get faster …
_____________

Hi,

Spark Thrift server (STS) still uses the Hive thrift server. If you look at
$SPARK_HOME/sbin/start-thriftserver.sh you will see (mine is Spark 2):

function usage {
  echo "Usage: ./sbin/start-thriftserver [options] [thrift server options]"
  pattern="usage"
  pattern+="\|Spark assembly has been built with Hive"
  …
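For reference, a minimal sketch of starting STS via that script on the port
used earlier in this thread (the YARN master choice is an assumption, not
from the thread):

${SPARK_HOME}/sbin/start-thriftserver.sh \
  --master yarn \
  --hiveconf hive.server2.thrift.port=10055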
_____________

Reece,

You can do the following: start the spark-shell, register the UDFs in the
shell using sqlContext, then start the Thrift Server using startWithContext
from the spark-shell: https://github.com/apache/spark/blob/master/sql/hive-
From Tableau, you should be able to use the Initial SQL option to support
this. So in Tableau add the following to the "Initial SQL":

create function myfunc AS 'myclass'
using jar 'hdfs:///path/to/jar';
HTH,
Todd
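A minimal sketch of the spark-shell sequence described above (myfunc and its
toUpperCase body are hypothetical placeholders, not from the thread):

import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

// Register a UDF on the shell's sqlContext; JDBC clients connecting to the
// Thrift server started below will then be able to call it.
sqlContext.udf.register("myfunc", (s: String) => s.toUpperCase)

// Start the Thrift server sharing this shell's context.
HiveThriftServer2.startWithContext(sqlContext)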
_____________

On Mon, Oct 19, 2015 at 11:22 AM, Deenar Toraskar …
Have you tried registering the function using the Beeline client?
Another alternative would be to create a Spark SQL UDF and launch the Spark SQL
Thrift server programmatically.
Mohammed
-----Original Message-----
From: ReeceRobinson [mailto:re...@therobinsons.gen.nz]
Sent: Sunday, October …
_____________

Thanks Andrew. What version of HS2 is the SparkSQL thrift server using?
What would be involved in updating? Is it a simple case of increasing the
dependency version in one of the project POMs?

Cheers,
~N

On Sat, May 2, 2015 at 11:38 AM, Andrew Lee <alee...@hotmail.com> wrote:
Hi N,
See: