What is the query?

On Fri, May 3, 2019 at 5:28 PM KhajaAsmath Mohammed
wrote:
> Hi,
>
> I have followed the link
> https://community.teradata.com/t5/Connectivity/Teradata-JDBC-Driver-returns-the-wrong-schema-column-nullability/m-p/77824
> to connect to Teradata from Spark.
>
> I was able to print the schema if I give a table name instead of a SQL query.
> I am getting the below error if I give a query (code
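For reference, a rough sketch of the usual workaround when a plain query fails with the Spark JDBC source: wrap the query as a derived table and pass it as dbtable. The host, database, credentials and column names below are placeholders, not taken from the thread.

// Hedged sketch (Spark 2.x): reading Teradata over JDBC.
// Host, database, credentials and the sample query are placeholders.
val opts = Map(
  "url"      -> "jdbc:teradata://td-host/DATABASE=mydb",
  "driver"   -> "com.teradata.jdbc.TeraDriver",
  "user"     -> "dbc",
  "password" -> "secret"
)

// Plain table name: printSchema() works.
val byTable = spark.read.format("jdbc").options(opts)
  .option("dbtable", "mydb.my_table")
  .load()

// A SQL query usually needs to be wrapped as a derived table with an alias.
val byQuery = spark.read.format("jdbc").options(opts)
  .option("dbtable", "(SELECT col1, col2 FROM mydb.my_table) t")
  .load()

byQuery.printSchema()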
Hello guys,
I'm using Spark SQL with Hive through Thrift.
I need this because I need to create a table from a table mask.
Here is an example (a rough sketch of automating it follows below):
1. Take tables by mask, like SHOW TABLES IN db 'table__*'
2. Create a query like:
CREATE TABLE total_data AS
SELECT * FROM table__1
UNION ALL
SELECT * FROM table__2
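A rough sketch of automating step 2, assuming an existing HiveContext named hiveContext (Spark 1.x style); the database and mask names are just illustrative:

// Hedged sketch: build the UNION ALL statement from the matching table names.
val db   = "db"
val mask = "table__"

val tables = hiveContext.tableNames(db).filter(_.startsWith(mask))
val union  = tables.map(t => s"SELECT * FROM $db.$t").mkString(" UNION ALL ")

// e.g. CREATE TABLE db.total_data AS SELECT * FROM db.table__1 UNION ALL SELECT * FROM db.table__2 ...
hiveContext.sql(s"CREATE TABLE $db.total_data AS $union")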
Hi,
I would like to know the steps to connect to Spark SQL from the Spring framework
(web UI), and also how to run and deploy the web application.
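One common route, regardless of framework, is to point a plain JDBC connection at the Spark SQL Thrift server using the Hive JDBC driver. A minimal sketch, assuming the Thrift server is already running; host, port, user and table name are placeholders:

// Hedged sketch: any JVM application (a Spring web app included) can query the
// Spark SQL Thrift server through the Hive JDBC driver.
import java.sql.DriverManager

Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://thrift-host:10000/default", "user", "")
val rs = conn.createStatement().executeQuery("SELECT count(*) FROM some_table")
while (rs.next()) println(rs.getLong(1))
conn.close()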
Select 10 sample rows for columns id, ctime from each table (MySQL and Spark)
and post the output please.
HTH
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Hi,
I came across this strange behavior of Apache Spark 1.6.1:
when I was reading a MySQL table into a Spark DataFrame, a column of data type
float got mapped to double.
dataframe schema:
root
|-- id: long (nullable = true)
|-- ctime: double (nullable = true)
|-- atime: double (nullable =
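For reference, a small sketch of a workaround: the generic JDBC type mapping can surface MySQL FLOAT as a double, so cast the column back explicitly. The URL, table and column names below are placeholders.

// Hedged sketch (Spark 1.6): read the MySQL table over JDBC, then cast back to float.
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.FloatType

val df = sqlContext.read.format("jdbc")
  .option("url", "jdbc:mysql://mysql-host:3306/mydb")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "events")
  .option("user", "user")
  .option("password", "secret")
  .load()

val fixed = df.withColumn("ctime", col("ctime").cast(FloatType))
fixed.printSchema()   // ctime now reports as float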
Started the Thrift server from SPARK_HOME:

./sbin/start-thriftserver.sh --master spark://myhost:7077 --hiveconf
hive.server2.thrift.bind.host=myhost --hiveconf hive.server2.thrift.port=*

and also able to connect through beeline:

beeline> !connect jdbc:hive2://192.168.145.20:9999
Enter username for jdbc:hive2://192.168.145.20:9999: root
Enter password for jdbc:hive2://192.168.145.20:9999: impetus
beeline>

It is not giving query results on the Hive table through Spark JDBC, but it is
working with Spark HiveSQLContext. See the complete scenario explained below.
Help me understand why Spark SQL JDBC is not giving results.
Below are the version details.
Hive version: 1.2.1
Hadoop version: 2.6.0
Spark version: 1.3.1
Let me know if you need other details.
Created Hive
From: Madabhattula Rajesh Kumar <mrajaf...@gmail.com>
To: Richard Hillegas/San Francisco/IBM@IBMUS
Cc: "u...@spark.incubator.apache.org" <u...@spark.incubator.apache.org>,
"user@spark.apache.org" <user@spark.apache.org>
Date: 11/05/2015
Subject: Spark sql jdbc fails for Oracle NUMBER type columns
Hi,
Is this issue fixed in version 1.5.1?
Regards,
Rajesh
I suppose every RDBMS has a JDBC driver to connect to. I know Oracle, MySQL,
SQL Server, Teradata, and Netezza have one.
On Thu, Jul 9, 2015 at 10:09 PM, Niranda Perera niranda.per...@gmail.com
wrote:
Hi,
I'm planning to use the Spark SQL JDBC data source provider with various RDBMS
databases.
what
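For what it's worth, the read looks the same against each database; only the url/driver pair (and the vendor's driver jar on the classpath) changes. A rough sketch with placeholder hosts, credentials and tables:

// Hedged sketch (Spark 1.4+): the JDBC data source is database-agnostic.
val oracleDf = sqlContext.read.format("jdbc")
  .option("url", "jdbc:oracle:thin:@//ora-host:1521/ORCL")
  .option("driver", "oracle.jdbc.OracleDriver")
  .option("dbtable", "SCOTT.EMP")
  .option("user", "user").option("password", "secret")
  .load()

val mysqlDf = sqlContext.read.format("jdbc")
  .option("url", "jdbc:mysql://mysql-host:3306/mydb")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "emp")
  .option("user", "user").option("password", "secret")
  .load()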
Can someone help me here? Please.

On Sat, Jun 20, 2015 at 9:54 AM Sathish Kumaran Vairavelu
vsathishkuma...@gmail.com wrote:

Hi,
In the Spark SQL JDBC data source there is an option to specify the upper/lower
bound and the number of partitions. How does Spark handle data distribution if we
do not give the upper/lower bounds or the number of partitions? Will all the data
from the external data source be skewed into one executor?
In many situations, we do
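For context, a small sketch of the partitioned read (Spark 1.4+ signature; the URL, table, column and bounds are made up). If the partitioning arguments are omitted, the source is read as a single partition, i.e. through one task:

// Hedged sketch: partitioned JDBC read. Connection details are placeholders.
import java.util.Properties

val props = new Properties()
props.setProperty("user", "user")
props.setProperty("password", "secret")

// Eight parallel queries, each covering a slice of the numeric column "id"
// between lowerBound and upperBound.
val df = sqlContext.read.jdbc(
  "jdbc:postgresql://pg-host:5432/mydb",
  "orders",
  "id",       // partitionColumn (numeric)
  0L,         // lowerBound
  1000000L,   // upperBound
  8,          // numPartitions
  props)

// Omitting the partitioning arguments, e.g. sqlContext.read.jdbc(url, "orders", props),
// yields a single-partition DataFrame.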
Hello Everyone,
I pulled 2 different tables from the JDBC source and then joined them using
the cust_id *decimal* column. A simple join like the one below. This simple join
works perfectly in the database but not in Spark SQL. I am importing the 2
tables as data frames / registerTempTable and firing SQL on
Sounds like SPARK-5456 (https://issues.apache.org/jira/browse/SPARK-5456),
which is fixed in Spark 1.4.
On Sun, Jun 14, 2015 at 11:57 AM, Sathish Kumaran Vairavelu
vsathishkuma...@gmail.com wrote:
Hello Everyone,
I pulled 2 different tables from the JDBC source and then joined them
using the
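For anyone hitting the same thing, a rough sketch of the pattern involved (Spark 1.4 read syntax; the URL, table and column names are invented): two JDBC-sourced DataFrames registered as temp tables and joined on a decimal key, which is the kind of case SPARK-5456 affected before Spark 1.4.

// Hedged sketch: joining two JDBC-sourced tables on a decimal column.
val url = "jdbc:postgresql://db-host:5432/mydb"   // placeholder

val customers = sqlContext.read.format("jdbc")
  .option("url", url).option("dbtable", "customers").load()
val orders = sqlContext.read.format("jdbc")
  .option("url", url).option("dbtable", "orders").load()

customers.registerTempTable("customers")
orders.registerTempTable("orders")

sqlContext.sql(
  "SELECT c.cust_id, o.order_id FROM customers c JOIN orders o ON c.cust_id = o.cust_id"
).show()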
Thank you, it works in Spark 1.4.
On Sun, Jun 14, 2015 at 3:51 PM Michael Armbrust mich...@databricks.com
wrote:
Sounds like SPARK-5456 (https://issues.apache.org/jira/browse/SPARK-5456),
which is fixed in Spark 1.4.
On Sun, Jun 14, 2015 at 11:57 AM, Sathish Kumaran Vairavelu
I am considering DSE, which has an integrated Spark SQL
Thrift/JDBC server with Cassandra.
Mohammed
From: Deenar Toraskar [mailto:deenar.toras...@gmail.com]
Sent: Thursday, June 4, 2015 7:42 AM
To: Mohammed Guller
Cc: user@spark.apache.org
Subject: Re: Anybody using Spark SQL JDBC server with DSE
Mohammed
Have you tried registering your Cassandra tables in Hive/Spark SQL using
the data frames API? These should then be available to query via the Spark
SQL Thrift/JDBC server.
Deenar
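A rough sketch of what is being described, assuming the open-source spark-cassandra-connector is on the classpath and the Thrift server shares the same HiveContext; the keyspace and table names are made up:

// Hedged sketch: load a Cassandra table as a DataFrame and register it so it is
// queryable over the Spark SQL Thrift/JDBC server backed by this context.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
val events = hiveContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_ks", "table" -> "events"))
  .load()

events.registerTempTable("events")
// JDBC clients connected to the Thrift server can now run: SELECT * FROM events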
On 1 June 2015 at 19:33, Mohammed Guller moham...@glassbeam.com wrote:
Nobody using Spark SQL JDBC/Thrift server with DSE Cassandra?
Mohammed
From: Mohammed Guller [mailto:moham...@glassbeam.com]
Sent: Friday, May 29, 2015 11:49 AM
To: user@spark.apache.org
Subject: Anybody using Spark SQL JDBC server with DSE Cassandra?
Hi -
We have successfully integrated Spark
the Spark SQL JDBC server.
However, I have been unable to find a driver that would allow the Spark SQL
Thrift/JDBC server to connect with Cassandra. DataStax provides a closed-source
driver that comes only with the DSE version of Cassandra.
I would like to find out how many people are using the Spark
To: User
Subject: What's the advantage features of Spark SQL(JDBC)

Hi All,
Compared with direct access via JDBC, what are the advantages of Spark
SQL (JDBC) for accessing an external data source? Any tips are welcome! Thanks.
Regards, Yi
Spark SQL just takes JDBC as a new data source, the same way we support
loading data from a .csv or .json file.
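One way to see this point, a rough sketch (Spark 1.4+ read syntax; the URL, table and path are placeholders): JDBC is just one more format behind the same DataFrame API, so the result can be registered and joined with data from any other source.

// Hedged sketch: JDBC as just another data source.
val fromDb = sqlContext.read.format("jdbc")
  .option("url", "jdbc:mysql://mysql-host:3306/mydb")
  .option("dbtable", "orders")
  .load()

val fromJson = sqlContext.read.json("events.json")

fromDb.registerTempTable("orders")
fromJson.registerTempTable("events")
sqlContext.sql("SELECT o.id FROM orders o JOIN events e ON o.id = e.order_id").show()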
From: Yi Zhang [mailto:zhangy...@yahoo.com.INVALID]
Sent: Friday, May 15, 2015 2:30 PM
To: User
Subject: What's the advantage features of Spark SQL(JDBC)
Hi All,
Comparing
2:51 PM
To: Cheng, Hao; User
Subject: Re: What's the advantage features of Spark SQL(JDBC)

@Hao, As you said, there is no advantage feature for JDBC; it just provides a
unified API to support different data sources. Is that right?

On Friday, May 15, 2015 2:46 PM, Cheng, Hao hao.ch
Yes.
From: Yi Zhang [mailto:zhangy...@yahoo.com]
Sent: Friday, May 15, 2015 2:51 PM
To: Cheng, Hao; User
Subject: Re: What's the advantage features of Spark SQL(JDBC)
@Hao,
As you said, there is no advantage feature for JDBC; it just provides a unified
API to support different data sources
Hi All,
Compared with direct access via JDBC, what are the advantages of Spark
SQL (JDBC) for accessing an external data source?
Any tips are welcome! Thanks.
Regards, Yi
SQL experts on the forum can confirm on this though.

From: Cheng Lian [mailto:lian.cs@gmail.com]
Sent: Tuesday, December 9, 2014 6:42 AM
To: Anas Mosaad
Cc: Judy Nash; user@spark.apache.org
Subject: Re: Spark-SQL JDBC driver

According to the stacktrace, you were still using SQLContext rather than
HiveContext. To interact with Hive, HiveContext *must* be used.
Please refer to this page
http
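For completeness, a minimal sketch of the HiveContext usage being pointed at (Spark 1.x; the table name is illustrative):

// Hedged sketch: use HiveContext, not plain SQLContext, to reach Hive tables.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)   // sc is the existing SparkContext
hiveContext.sql("SELECT * FROM my_hive_table LIMIT 10").collect().foreach(println)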
To: user@spark.apache.org
Subject: Spark-SQL JDBC driver

Hello Everyone,
I'm brand new to Spark and was wondering if there's a JDBC driver to
access Spark SQL directly. I'm running Spark in standalone mode and don't
have Hadoop in this environment.
--
Best Regards/أطيب المنى,
Anas Mosaad
Essentially, the Spark SQL JDBC Thrift server is just a Spark port of
HiveServer2. You don't need to run Hive, but you do need a working
Metastore.

On 12/9/14 3:59 PM, Anas Mosaad wrote:

Thanks Judy, this is exactly what I'm looking for. However, and please
forgive me if it's a dumb question,
://localhost:1
Kindly advise, what am I missing? I want to read the RDD using SQL from
outside spark-shell (i.e. like any other relational database).

On Tue, Dec 9, 2014 at 11:05 AM, Cheng Lian lian.cs@gmail.com wrote:

Essentially, the Spark SQL JDBC Thrift server is just a Spark port of
HiveServer2. You don't need to run Hive, but you do need a working Metastore.
Even when I comment out those 3 lines, I still get the same error. Did
someone solve this?
...@spark.incubator.apache.org
Subject: Re: Spark SQL JDBC
When you re-ran sbt, did you clear out the packages first and ensure that the
datanucleus jars were generated within lib_managed? I remember having to do
that when I was testing out different configs.
On Thu, Sep 11, 2014 at 10:50 AM
Oh, thanks for reporting this. This should be a bug; since SPARK_HIVE was
deprecated, we shouldn't rely on it any more.
On Wed, Aug 13, 2014 at 1:23 PM, ZHENG, Xu-dong dong...@gmail.com wrote:
Just found that this is because the below lines in make_distribution.sh don't work:
if [ $SPARK_HIVE ==
Yin helped me with that, and I appreciate the on-list follow-up. A few
questions: Why is this the case? I guess, does building it with the
thriftserver add much more time/size to the final build? It seems that
unless this is documented well, people will miss it and this situation will
occur; why would we
Hive pulls in a ton of dependencies that we were afraid would break
existing spark applications. For this reason all hive submodules are
optional.
On Tue, Aug 12, 2014 at 7:43 AM, John Omernik j...@omernik.com wrote:
Yin helped me with that, and I appreciate the onlist followup. A few
Hi Cheng,
I also met some issues when I tried to start the ThriftServer based on a build
from the master branch (I could successfully run it from the branch-1.0-jdbc
branch). Below is my build command:
./make-distribution.sh --skip-java-test -Phadoop-2.4 -Phive -Pyarn
-Dyarn.version=2.4.0
Just found that this is because the below lines in make_distribution.sh don't work:
if [ $SPARK_HIVE == true ]; then
  cp $FWDIR/lib_managed/jars/datanucleus*.jar $DISTDIR/lib/
fi
There is no definition of $SPARK_HIVE in make_distribution.sh. I have to set
it explicitly.
On Wed, Aug 13, 2014 at 1:10
Hi John, the JDBC Thrift server resides in its own build profile and needs
to be enabled explicitly with ./sbt/sbt -Phive-thriftserver assembly.
On Tue, Aug 5, 2014 at 4:54 AM, John Omernik j...@omernik.com wrote:
I am using spark-1.1.0-SNAPSHOT right now and trying to get familiar with
the
I am using spark-1.1.0-SNAPSHOT right now and trying to get familiar with
the JDBC Thrift server. I have everything compiled correctly, and I can access
data in spark-shell on YARN from my Hive installation. Cached tables, etc.
all work.
When I execute ./sbin/start-thriftserver.sh,
I get the error
now. It is easy to do this; it took just a few hours and it works for our
use case.
for the clarification.
[Venkat] Are you saying - pull in the SharkServer2 code into my standalone
spark application (as a part of the standalone application process), pass
in the spark context of the standalone app to the SharkServer2 SparkContext at
startup, and voila, we get SQL/JDBC interfaces for the RDDs of the
On Wed, May 28, 2014 at 11:39 PM, Venkat Subramanian vsubr...@gmail.com wrote:

We are planning to use the latest Spark SQL on RDDs. If a third-party
application wants to connect to Spark via JDBC, does Spark SQL have
support?
(We want to avoid going through the Shark/Hive JDBC layer as we need good
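For anyone landing here later: in later Spark 1.x releases this became possible without Shark by starting the Thrift server inside your own application via HiveThriftServer2.startWithContext. A rough sketch, assuming the hive-thriftserver module is on the classpath; the table name and data are illustrative:

// Hedged sketch: expose the application's data over JDBC by embedding the Thrift server.
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

val hiveContext = new HiveContext(sc)
import hiveContext.implicits._

val people = sc.parallelize(Seq(("alice", 30), ("bob", 25))).toDF("name", "age")
people.registerTempTable("people")

// JDBC clients (e.g. beeline) can now query the "people" table through this server.
HiveThriftServer2.startWithContext(hiveContext)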