Hi folks,
Does anyone know what is happening in this case? I tried both MySQL and
PostgreSQL, and neither of them finishes schema creation without errors. It
seems something changed from 2.2 to 2.4 that broke schema generation for the
Hive Metastore.
Hello,
I'm running the Thrift server with PostgreSQL persistence for the Hive metastore.
I'm using Postgres 9.6 and Spark 2.4.3 in this environment.
When I start the Thrift server I get lots of errors while creating the schema,
and it happens every time I reach Postgres, like:
19/06/06 15:51:59 WARN Datastore
Hi, all:
I'm using the Spark SQL thrift server under Spark 1.3.1 to run Hive SQL
queries. I started the thrift server like ./sbin/start-thriftserver.sh
--master yarn-client --num-executors 12 --executor-memory 5g --driver-memory
5g, then sent continuous Hive SQL to the thrift server
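Written out in full, a launch command like the one above would look roughly like this (a sketch; the flags and sizes are the poster's own and should be tuned per cluster, and paths assume you are in the Spark installation directory):

```shell
# Start the Spark SQL thrift server on YARN in client mode.
./sbin/start-thriftserver.sh \
  --master yarn-client \
  --num-executors 12 \
  --executor-memory 5g \
  --driver-memory 5g
```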
Hi,
I have an AWS Spark EMR cluster running with Spark 1.5.2, Hadoop 2.6, and Hive
1.0.0. I brought up the Spark SQL thriftserver on this cluster with
spark.sql.hive.metastore.version set to 1.0.
When I try to connect to this thriftserver remotely using beeline packaged
with spark-1.5.2-hadoop2.6,
Hello,
there seems to be missing support for some operations in the Spark SQL thrift
server. To be more specific: when connected to our Spark SQL instance
(1.5.1, standalone deployment) from a standard JDBC SQL client (SQuirreL SQL
and a few others) via the thrift server, SQL query processing seems to
:: (2,"world")
>> ::Nil).toDF.cache().registerTempTable("t")
>>
>> HiveThriftServer2.startWithContext(sqlContext)
>> }
>>
>> Again, I'm not really clear what your use case is, but it does sound like
>> the first link above is what you may want.
>>
>> -Todd
>>
>> On Wed, Aug 5, 2015 at 1:57 PM, Daniel Haviv <
>> daniel.ha...@veracity-group.com> wrote:
>>
>>> Hi,
>>> Is it possible to start the Spark SQL thrift server from within a
>>> streaming app so the streamed data could be queried as it goes in?
>>>
>>> Thank you.
>>> Daniel
>>>
>>
>>
>
Hi,
Is it possible to start the Spark SQL thrift server from within a streaming app
so the streamed data could be queried as it goes in?
Thank you.
Daniel
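For reference, the approach Todd points at — starting the thrift server from inside a running application so its temp tables become queryable over JDBC — looks roughly like this in Spark 1.x. This is a sketch, not a complete streaming app; the table name and data are placeholders, and in a real streaming job you would re-register the table from each micro-batch:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

object ThriftInAppExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("thrift-in-app"))
    val sqlContext = new HiveContext(sc)
    import sqlContext.implicits._

    // Register an in-memory table; a streaming app would refresh this
    // registration from each batch instead of doing it once.
    ((1, "hello") :: (2, "world") :: Nil).toDF("id", "word")
      .cache().registerTempTable("t")

    // Expose this context's temp tables over JDBC/ODBC.
    HiveThriftServer2.startWithContext(sqlContext)
  }
}
```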
Tableau + Spark SQL Thrift Server + Cassandra
Thanks Mohammed,
I was aware of Calliope, but haven't used it since the spark-cassandra-connector
project got released. I was not aware of CalliopeServer2; cool, thanks for
sharing that one.
I would appreciate it if you could let me know how you deci
From: Todd Nist [mailto:tsind...@gmail.com]
Sent: Friday, April 3, 2015 11:39 AM
To: pawan kumar
Cc: Mohammed Guller; user@spark.apache.org
Subject: Re: Tableau + Spark SQL Thrift Server + Cassandra
Hi Mohammed,
Not sure if you have tried this or not. You could try using the below API to
start the thrift server:
the tuplejump cash project,
>>>> https://github.com/tuplejump/cash.
>>>>
>>>> HTH.
>>>>
>>>> -Todd
>>>>
>>>> On Fri, Apr 3, 2015 at 11:11 AM, pawan kumar wrote:
>>>>
>>>>> Thanks mohammed.
d the
>>>> sparksSQL piece as we are migrating our data store from oracle to C* and it
>>>> would be easier to maintain all the reports rather recreating each one from
>>>> scratch.
>>>>
>>>> Thanks,
>>>> Pawan Venugopal.
>>> On Apr 3, 2015 7:59 AM, "Mohammed Guller"
>>> wrote:
>>> Hi Todd,
>>>
>>> We are using Apache C* 2.1.3, not DSE. We got Tableau to work directly
>>> with C* using the ODBC driver, but now would like to add Spark SQL to the
>>> mix. I haven’t been able to find any documentation for how to make this
>>> combination work.
>>>
>>> We are using the Spark-Cassandra-Connector in our applications, but
>>> haven’t been able to figure out how to get the Spark SQL Thrift Server to
>>> use it and connect to C*. That is the missing piece. Once we solve that
>>> piece of the puzzle then Tableau should be able to see the tables in C*.
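For what it's worth, the usual way to put the connector on the Thrift Server's classpath and point it at the C* cluster is via the launcher's own options. This is a sketch; the connector version and host below are placeholders to adapt:

```shell
# Launch the Spark SQL thrift server with the spark-cassandra-connector
# resolved from Maven and the C* contact point configured.
./sbin/start-thriftserver.sh \
  --packages com.datastax.spark:spark-cassandra-connector_2.10:1.5.0 \
  --conf spark.cassandra.connection.host=10.0.0.1
```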
Hi Pawan,
Tableau + C* is pretty straightforward
Are you attempting to leverage the
> spark-cassandra-connector for this?
>
>
>
> On Thu, Apr 2, 2015 at 10:20 PM, Mohammed Guller
> wrote:
>
>> Hi –
>>
>>
>>
>> Is anybody using Tableau to analyze data in Cassandra through the Spark
>> SQL Thrift Server?
>>
>>
>>
>> Thanks!
>>
>>
>>
>> Mohammed
>>
>>
>>
>
>
Hi -
Is anybody using Tableau to analyze data in Cassandra through the Spark SQL
Thrift Server?
Thanks!
Mohammed
Spark service,
> have you allocated enough memory and CPU when executing with spark?
>
> On Sun, Mar 22, 2015 at 3:39 AM fanooos wrote:
>
>> We have cloudera CDH 5.3 installed on one machine.
>>
>> We are trying to use spark sql thrift server to execute some an
We have Cloudera CDH 5.3 installed on one machine.
We are trying to use the Spark SQL thrift server to execute some analysis
queries against a Hive table.
Without any changes in the configuration, we run the following query on
both Hive and the Spark SQL thrift server:
*select * from tableName;*
The
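To reproduce the comparison, a typical way to run the same statement against the thrift server is through beeline, which ships with Spark. This is a sketch; host, port, and table name are placeholders:

```shell
# Connect beeline to the thrift server's default port and run the query.
./bin/beeline -u jdbc:hive2://localhost:10000 -e "select * from tableName;"
```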
I have some applications developed using PHP, and currently we have a problem
in connecting these applications to the Spark SQL thrift server. (Here is the
problem I am talking about:
<http://apache-spark-user-list.1001560.n3.nabble.com/Connection-PHP-application-to-Spark-Sql-thrift-server-td21925.html>)
Until I find a solution to this problem, there is a suggestion to make a
little Java application that connects to Spark
Can you query Hive? Let's first confirm whether it's a Spark SQL bug by testing
your PHP code against Hive.
-Original Message-
From: fanooos [mailto:dev.fano...@gmail.com]
Sent: Thursday, March 5, 2015 4:57 PM
To: user@spark.apache.org
Subject: Connection PHP application to Spark Sql thrift
We have two applications that need to connect to the Spark SQL thrift server.
The first application is developed in Java. With the Spark SQL thrift server
running, we followed the steps in this link
<https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBC>
a
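Following that link, the JDBC connection from the JVM side boils down to something like the sketch below; host, port, database, and query are placeholders, and the Hive JDBC driver jar must be on the classpath:

```scala
import java.sql.DriverManager

object ThriftJdbcExample {
  def main(args: Array[String]): Unit = {
    // HiveServer2 / Spark thrift server JDBC driver class.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    // Default thrift server port is 10000; user/password may be empty
    // depending on the server's auth configuration.
    val conn = DriverManager.getConnection(
      "jdbc:hive2://localhost:10000/default", "", "")
    val rs = conn.createStatement().executeQuery("select * from tableName")
    while (rs.next()) println(rs.getString(1))
    conn.close()
  }
}
```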
cluster mode. Seems like it
>> still exists, but the UX of the error has been cleaned up in 1.3:
>>
>> https://issues.apache.org/jira/browse/SPARK-5176
>>
>>
>>
>> -Original Message-
>> From: fanooos [mailto:dev.fano...@gmail.com]
>> Sent: Tuesday,
From: fanooos [mailto:dev.fano...@gmail.com]
> Sent: Tuesday, March 3, 2015 11:15 PM
> To: user@spark.apache.org
> Subject: Connecting a PHP/Java applications to Spark SQL Thrift Server
>
> We have installed hadoop cluster with hive and spark and the spark sql
> thrift server is up and ru
We have installed a hadoop cluster with Hive and Spark, and the Spark SQL thrift
server is up and running without any problem.
Now we have a set of applications that need to use the Spark SQL thrift server
to query some data.
Some of these applications are Java applications and the others are PHP
" while starting the spark shell.
From: Anusha Shamanur [mailto:anushas...@gmail.com]
Sent: Wednesday, March 4, 2015 5:07 AM
To: Cheng, Hao
Subject: Re: Spark SQL Thrift Server start exception :
java.lang.ClassNotFoundException:
org.datanucleus.api.jdo.JDOPersistenceManagerFactory
Hi,
I am getting the same error. There is no lib folder in m
, March 3, 2015 2:50 PM
To: user@spark.apache.org
Subject: Spark SQL Thrift Server start exception :
java.lang.ClassNotFoundException:
org.datanucleus.api.jdo.JDOPersistenceManagerFactory
I have installed a hadoop cluster (version 2.6.0), Apache Spark (version
1.2.1, pre-built for Hadoop 2.4 and later), and Hive (version 1.0.0).
When I try to start the Spark SQL thrift server I get the following
exception.
Exception in thread "main" java.lang.RuntimeException
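A common cause of this particular ClassNotFoundException in Spark 1.2-era builds was that the DataNucleus jars shipped in Spark's lib/ folder were not on the driver classpath. A sketch of the usual workaround, with jar versions taken from a typical 1.2.x distribution (check your own lib/ directory for the exact file names):

```shell
# Pass the DataNucleus jars from Spark's lib directory explicitly
# when starting the thrift server (versions vary by distribution).
./sbin/start-thriftserver.sh \
  --jars lib/datanucleus-api-jdo-3.2.6.jar,lib/datanucleus-core-3.2.10.jar,lib/datanucleus-rdbms-3.2.9.jar
```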
I would expect that killing a stage would kill the whole job. Are you not
seeing that happen?
On Mon, Dec 22, 2014 at 5:09 AM, Xiaoyu Wang wrote:
> Hello everyone!
>
> Like the title.
> I start the Spark SQL 1.2.0 thrift server. Use beeline connect to the
> server to execute SQL.
> I want to ki
Hello everyone!
Like the title says:
I started the Spark SQL 1.2.0 thrift server and use beeline to connect to the
server to execute SQL.
I want to kill one SQL job running in the thrift server without killing the
thrift server itself.
I set the property spark.ui.killEnabled=true in spark-defaults.conf
But in the UI, only
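If you control the SparkContext yourself (e.g. when embedding the server via startWithContext), one way to cancel a statement's jobs without stopping anything else is Spark's job-group API. This is a sketch under that assumption; the group name and query are placeholders, not the thrift server's internal mechanism:

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Run the statement in its own thread, tagged with a job group.
val f = Future {
  sc.setJobGroup("adhoc-sql-1", "one cancellable SQL statement")
  sqlContext.sql("select count(*) from big_table").collect()
}

// Later, from the main thread: cancel only that group's jobs;
// the SparkContext (and any embedded thrift server) keeps running.
sc.cancelJobGroup("adhoc-sql-1")
```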
> will be invoked from a middle-tier webapp. I am thinking of using the
> Hive JDBC driver.
>
>
>
> Thanks,
>
> Ken
>
>
>
> *From:* Michael Armbrust [mailto:mich...@databricks.com]
> *Sent:* Wednesday, August 20, 2014 9:38 AM
> *To:* Tam, Ken K
> *Cc:* user@spark.
@spark.apache.org
Subject: Re: Is Spark SQL Thrift Server part of the 1.0.2 release
No. It'll be part of 1.1.
On Wed, Aug 20, 2014 at 9:35 AM, Tam, Ken K
mailto:ken@verizon.com>> wrote:
Is Spark SQL Thrift Server part of the 1.0.2 release? If not, which release is
the target?
Thanks,
Ken
Is Spark SQL Thrift Server part of the 1.0.2 release? If not, which release is
the target?
Thanks,
Ken
Thanks Michael.
Is there a way to specify off_heap? I.e. Tachyon via the thrift server?
Thanks!
On Tue, Aug 5, 2014 at 11:06 AM, Michael Armbrust
wrote:
> We are working on an overhaul of the docs before the 1.1 release. In the
> meantime try: "CACHE TABLE <tableName>".
>
>
> On Tue, Aug 5, 2014 at 9:
We are working on an overhaul of the docs before the 1.1 release. In the
meantime try: "CACHE TABLE <tableName>".
On Tue, Aug 5, 2014 at 9:02 AM, John Omernik wrote:
> I have things working on my cluster with the sparksql thrift server.
> (Thank you Yin Huai at Databricks!)
>
> That said, I was curious
I have things working on my cluster with the Spark SQL thrift server. (Thank
you Yin Huai at Databricks!)
That said, I was curious how I can cache a table via my instance here? I
tried the Shark-like "create table table_cached as select * from table" and
that did not create a cached table. cacheT
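For the thrift server specifically, the caching statement Michael mentions can be issued from any JDBC client, e.g. beeline. A sketch, with host, port, and table name as placeholders:

```shell
# Cache an existing table in memory through the thrift server, then query it;
# UNCACHE TABLE releases it again.
./bin/beeline -u jdbc:hive2://localhost:10000 \
  -e "CACHE TABLE my_table; SELECT count(*) FROM my_table;"
```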