Asmath,
Why is upperBound set to 300? How many cores do you have?
Check how the data is distributed in the Teradata table:
SELECT itm_bloon_seq_no, count(*) AS cc FROM TABLE
GROUP BY itm_bloon_seq_no ORDER BY itm_bloon_seq_no DESC;
Is the column "itm_bloon_seq_no" already in the table, or did you derive it in Spark?
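For context on why the bounds matter: Spark's JDBC source splits the range [lowerBound, upperBound] of the partition column into numPartitions strides. The sketch below is a simplified illustration of that behavior, not the exact Spark internals:

```scala
// Simplified sketch of how Spark's JDBC source turns
// (lowerBound, upperBound, numPartitions) into per-task WHERE clauses.
// Note: rows outside the bounds are NOT dropped -- the first and last
// partitions are open-ended, so a skewed or underestimated range piles
// most of the table into a single task.
def wherePredicates(column: String,
                    lowerBound: Long,
                    upperBound: Long,
                    numPartitions: Int): Seq[String] = {
  val stride = (upperBound - lowerBound) / numPartitions
  (0 until numPartitions).map { i =>
    val lo = lowerBound + i * stride
    val hi = lo + stride
    if (i == 0) s"$column < $hi"
    else if (i == numPartitions - 1) s"$column >= $lo"
    else s"$column >= $lo AND $column < $hi"
  }
}
```

With upperBound = 300, each stride covers only a handful of ids; if the real ids go far beyond 300, the last open-ended partition ends up scanning almost the whole 2.5B-row table on one executor.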
Hi,
I have a Teradata table with more than 2.5 billion records, around 600 GB
of data. I am not able to pull it efficiently using Spark SQL; the job has
been running for more than 11 hours. Here is my code:
val df2 = sparkSession.read.format("jdbc")
.option("url",
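For reference, a partitioned JDBC read usually looks like the sketch below. The URL, credentials, table name, and partition count are placeholders (the original code is truncated, so these are assumptions), and the bounds should come from the column's actual MIN/MAX rather than a guess:

```scala
// Hedged sketch of a partitioned JDBC read from Teradata.
// All connection details are placeholders, not values from the thread.

// 1) Fetch the real range of the partition column first.
val bounds = sparkSession.read.format("jdbc")
  .option("url", "jdbc:teradata://<host>/DATABASE=<db>")   // placeholder
  .option("driver", "com.teradata.jdbc.TeraDriver")
  .option("dbtable",
    "(SELECT MIN(itm_bloon_seq_no) AS lo, MAX(itm_bloon_seq_no) AS hi FROM my_table) t")
  .option("user", "<user>").option("password", "<password>")
  .load()
  .first()

// 2) Read in parallel over that range.
val df2 = sparkSession.read.format("jdbc")
  .option("url", "jdbc:teradata://<host>/DATABASE=<db>")
  .option("driver", "com.teradata.jdbc.TeraDriver")
  .option("dbtable", "my_table")
  .option("partitionColumn", "itm_bloon_seq_no")  // must be numeric or date
  .option("lowerBound", bounds.getLong(0).toString) // adjust getter to SQL type
  .option("upperBound", bounds.getLong(1).toString)
  .option("numPartitions", "64")                    // roughly = cores available
  .option("fetchsize", "10000")                     // rows per JDBC round trip
  .option("user", "<user>").option("password", "<password>")
  .load()
```

Without partitionColumn/lowerBound/upperBound/numPartitions, Spark reads the whole table through a single JDBC connection, which would explain an 11-hour run on 600 GB.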
Hi, I have a stupid question:
Is it possible to use Spark on a Teradata data warehouse? I read some
articles on the internet that say yes; however, I didn't find any example
of this.
Thanks in advance.
Cheers
Gen
If your goal is to extract a small amount of data out of Teradata, then
you can use the JdbcRDD, and soon a JDBC input source based on the new
Spark SQL external data source API.
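For anyone searching the archives later: the JdbcRDD mentioned above lives in org.apache.spark.rdd.JdbcRDD. A minimal sketch of its use follows; the table, column names, URL, and bounds are placeholders, not details from this thread:

```scala
import java.sql.{DriverManager, ResultSet}
import org.apache.spark.rdd.JdbcRDD

// Sketch only: connection details and the query are placeholders.
// The query MUST contain exactly two '?' markers, which JdbcRDD binds
// to the id range it assigns to each partition.
val rdd = new JdbcRDD(
  sc,
  () => DriverManager.getConnection(
    "jdbc:teradata://<host>/DATABASE=<db>", "<user>", "<password>"),
  "SELECT id, val FROM my_table WHERE id >= ? AND id <= ?",
  lowerBound = 1L,
  upperBound = 1000000L,
  numPartitions = 10,
  mapRow = (rs: ResultSet) => (rs.getLong(1), rs.getString(2)))
```

Each of the 10 partitions opens its own connection and runs the query over its slice of [lowerBound, upperBound], so the id column should be reasonably evenly distributed.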
On Wed, Jan 7, 2015 at 7:14 AM, gen tang gen.tan...@gmail.com wrote:
Hi,
I have a stupid question:
Is it possible to use Spark on a Teradata data warehouse? I read some
articles on the internet that say yes; however, I didn't find any example
of this.
Thanks in advance.
Cheers
Gen