Why do you need 1 partition when 10 partitions are doing the job?
Thanks
Ankit
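For context on the thread subject (slow insertInto overwrite when the target table has many partitions): one common cause is Spark overwriting, or at least listing, every partition of the target instead of only the partitions the job actually writes. A minimal sketch of the dynamic partition overwrite setting (available since Spark 2.3); the database, table, and column names below are made up for illustration:

```sql
-- Only overwrite the partitions that receive new rows,
-- instead of truncating and rewriting the whole target table.
SET spark.sql.sources.partitionOverwriteMode = dynamic;

INSERT OVERWRITE TABLE target_db.events  -- hypothetical table
PARTITION (event_date)                   -- hypothetical partition column
SELECT user_id, payload, event_date
FROM staging_db.events_new;
```

With the default `static` mode, `INSERT OVERWRITE` on a partitioned table can drop and rewrite far more partitions than the job touches, which is a frequent source of the slowness described in the subject line.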
From: vincent gromakowski
Date: Thursday, 25. April 2019 at 09:12
To: Juho Autio
Cc: user
Subject: Re: [Spark SQL]: Slow insertInto overwrite if target table has many
partitions
Which metastore are you using?
Hi Chetan,
I also agree that for this use case Parquet would not be the best option. I had a
similar use case: 50 different tables to be downloaded from MSSQL.
Source: MSSQL
Destination: Apache Kudu (since it supports change data capture use cases very well)
We used Streamset CDC module to co
Hi Jiang
We faced a similar issue, so we write the files and then use Sqoop to export the
data to MSSQL.
We achieved a great time benefit with this strategy.
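A hedged sketch of the strategy described above: have Spark write delimited files to HDFS, then export them with `sqoop export`. The host, credentials, table, and directory names are placeholders, not from the thread:

```shell
# Hypothetical connection details and paths; adjust for your cluster.
sqoop export \
  --connect "jdbc:sqlserver://mssql-host:1433;databaseName=reports" \
  --username etl_user \
  --password-file /user/etl/.mssql.pw \
  --table dbo_results \
  --export-dir /warehouse/results_tsv \
  --input-fields-terminated-by '\t' \
  --num-mappers 8
```

Note that `sqoop export` typically expects delimited text (or sequence/Avro) files, so the Spark job would usually write CSV/TSV rather than Parquet for this step.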
Sent from my iPhone
On 19. Apr 2019, at 10:47, spark receiver <spark.recei...@gmail.com> wrote:
hi Jiang,
i was facing the very same issue.
Thanks for sharing.
Sent from my iPhone
On 19. Apr 2019, at 01:35, Jason Dai <jason@gmail.com> wrote:
Hi all,
Please see below for a list of upcoming technical talks on BigDL and Analytics
Zoo (https://github.com/intel-analytics/analytics-zoo/) in the coming weeks:
* Engineers