Thanks for the interesting ideas! It looks like having Spark write directly to a
relational database is not as straightforward as I expected.

> On Apr 19, 2019, at 06:58, Khare, Ankit <ankit.kh...@eon.com> wrote:
> 
> Hi Jiang
> 
> We faced a similar issue, so we write the data to files and then use Sqoop to
> export them to MSSQL.
> 
> We saw a substantial time benefit with this strategy.
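> 
> For anyone curious, a rough sketch of that pipeline (the paths, table name and
> connection details below are illustrative placeholders, not our real setup):
> 
>     // Stage the data on HDFS as plain CSV so Sqoop can read it.
>     df.write
>       .mode("overwrite")
>       .csv("hdfs:///staging/my_table")
> 
>     // Then push the staged files into MSSQL with Sqoop, e.g.:
>     //   sqoop export \
>     //     --connect "jdbc:sqlserver://dbhost:1433;databaseName=mydb" \
>     //     --username myuser --password-file /user/me/.sqoop_pw \
>     //     --table MY_TABLE \
>     //     --export-dir /staging/my_table \
>     //     --num-mappers 8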
> 
> On 19. Apr 2019, at 10:47, spark receiver <spark.recei...@gmail.com> wrote:
> 
>> Hi Jiang,
>> 
>> I was facing the very same issue; the solution was to write to files and use
>> an Oracle external table to do the insert.
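>> 
>> A rough sketch of that approach, in case it helps (the directory object,
>> table names and file name are illustrative, and the CSV has to land somewhere
>> the Oracle server can read, e.g. an NFS mount exposed as a DIRECTORY object):
>> 
>>     // 1) Dump the DataFrame as CSV into the staging directory.
>>     df.coalesce(1)
>>       .write
>>       .mode("overwrite")
>>       .csv("/mnt/oracle_staging/my_table")
>> 
>>     // 2) On the Oracle side, map the file as an external table and
>>     //    load it with a single direct-path insert:
>>     //
>>     //    CREATE TABLE my_table_ext (  -- same columns as my_table
>>     //      ...
>>     //    )
>>     //    ORGANIZATION EXTERNAL (
>>     //      TYPE ORACLE_LOADER
>>     //      DEFAULT DIRECTORY staging_dir
>>     //      ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
>>     //      LOCATION ('part-00000.csv')
>>     //    );
>>     //
>>     //    INSERT /*+ APPEND */ INTO my_table SELECT * FROM my_table_ext;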
>> 
>> Hope this helps.
>> 
>> Dalin
>> 
>>> On Thu, Apr 18, 2019 at 11:43 AM Jörn Franke <jornfra...@gmail.com> wrote:
>>> What is the size of the data? How much time does it need on HDFS, and how 
>>> much on Oracle? How many partitions do you have on the Oracle side?
>>> 
>>> On Apr 6, 2019, at 16:59, Lian Jiang <jiangok2...@gmail.com> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> My Spark job writes into an Oracle DB using:
>>>> 
>>>>     df.coalesce(10).write.format("jdbc")
>>>>       .option("url", url).option("driver", driver)
>>>>       .option("user", user).option("password", password)
>>>>       .option("dbtable", tableName)
>>>>       .option("batchsize", 2000)
>>>>       .mode("append").save()
>>>> 
>>>> It is much slower than writing into HDFS, even though the data to write is
>>>> small. Is this expected? Thanks for any clue.
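>>>> 
>>>> For reference, the same write with the standard DataFrameWriter JDBC tuning
>>>> knobs spelled out (batchsize, numPartitions and isolationLevel are
>>>> documented JDBC options; the values here are only illustrative):
>>>> 
>>>>     df.repartition(10)                      // parallel writer tasks
>>>>       .write.format("jdbc")
>>>>       .option("url", url).option("driver", driver)
>>>>       .option("user", user).option("password", password)
>>>>       .option("dbtable", tableName)
>>>>       .option("batchsize", 10000)           // larger JDBC batches
>>>>       .option("numPartitions", 10)          // cap concurrent connections
>>>>       .option("isolationLevel", "NONE")     // avoid transaction overhead
>>>>       .mode("append")
>>>>       .save()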
>>>> 
