RE: Spark JDBC connection - data writing success or failure cases

2016-02-20 Thread Mich Talebzadeh

RE: Spark JDBC connection - data writing success or failure cases

2016-02-19 Thread Mich Talebzadeh

Re: Spark JDBC connection - data writing success or failure cases

2016-02-19 Thread Russell Jurney
Oracle is a perfectly reasonable endpoint for publishing data processed in Spark. I've got to assume he's using it that way and not as a stand-in for HDFS? On Friday, February 19, 2016, Jörn Franke wrote: > Generally, an Oracle DB should not be used as a storage layer for Spark, due to performance reasons.
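
For context, a minimal sketch of what "publishing data processed in Spark" to Oracle looks like with the DataFrame writer, using the Spark 1.6-era API that was current when this thread was active. The HDFS path, JDBC URL, table name and credentials are placeholders, not details taken from the thread.

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

object PublishToOracle {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PublishToOracle"))
    val sqlContext = new SQLContext(sc)

    // Assume the job's results are already sitting on HDFS as Parquet.
    val result = sqlContext.read.parquet("hdfs:///data/processed/daily_summary")

    val props = new Properties()
    props.setProperty("user", "app_user")          // placeholder credentials
    props.setProperty("password", "app_password")
    props.setProperty("driver", "oracle.jdbc.OracleDriver")

    // Publish (append) the results into an existing Oracle table over JDBC.
    result.write
      .mode(SaveMode.Append)
      .jdbc("jdbc:oracle:thin:@//dbhost:1521/ORCL", "DAILY_SUMMARY", props)

    sc.stop()
  }
}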

Re: Spark JDBC connection - data writing success or failure cases

2016-02-19 Thread Jörn Franke
Generally, an Oracle DB should not be used as a storage layer for Spark, due to performance reasons. You should consider HDFS instead; this will also help you with fault tolerance. > On 19 Feb 2016, at 03:35, Divya Gehlot wrote: > Hi, I have a Spark job which connects to
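
To make the suggested layering concrete, a short sketch in which HDFS holds the authoritative, fault-tolerant copy as Parquet and Oracle is only a downstream publishing target. It reuses the sqlContext from the sketch above; the input path, transformation and output path are assumptions for illustration.

// HDFS (Parquet) as the storage layer; Oracle stays a publishing endpoint only.
val processed = sqlContext.read.json("hdfs:///landing/events")   // assumed raw input
  .filter("event_type IS NOT NULL")                              // stand-in for the real transformations

// The authoritative copy lives on HDFS; block replication gives the fault tolerance mentioned above.
processed.write
  .mode(SaveMode.Overwrite)
  .parquet("hdfs:///warehouse/events_enriched")

// Later stages re-read it in parallel rather than pulling the data back out of Oracle.
val reloaded = sqlContext.read.parquet("hdfs:///warehouse/events_enriched")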

RE: Spark JDBC connection - data writing success or failure cases

2016-02-18 Thread Mich Talebzadeh
Sorry, where is the source of the data? Are you writing to an Oracle table or reading from one? In general, JDBC messages will tell you about a connection failure halfway through, or about any other message received, say, from Oracle via JDBC. What batch size are you using for this transaction? HTH Dr Mich
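
On the batch-size and failure-detection questions, a hedged sketch of one way to surface the outcome of the JDBC write. The batchsize option belongs to Spark's JDBC data source in 2.x releases (default 1000 rows per round trip; check your version), the URL and table names are placeholders, and result is assumed to be the DataFrame being published, as in the first sketch.

import java.util.Properties

import scala.util.{Failure, Success, Try}

import org.apache.spark.sql.SaveMode

val props = new Properties()
props.setProperty("user", "app_user")            // placeholder
props.setProperty("password", "app_password")    // placeholder
props.setProperty("driver", "oracle.jdbc.OracleDriver")
props.setProperty("batchsize", "10000")          // rows per JDBC batch (Spark 2.x option)

val outcome = Try {
  result.write
    .mode(SaveMode.Append)
    .jdbc("jdbc:oracle:thin:@//dbhost:1521/ORCL", "DAILY_SUMMARY_STG", props)
}

outcome match {
  case Success(_) => println("JDBC write completed")
  case Failure(ex) =>
    // Connection drops and Oracle errors (ORA-xxxxx) surface here as the underlying SQLException.
    println(s"JDBC write failed: ${ex.getMessage}")
}

Because Spark writes each partition over its own connection and transaction, a job that fails partway can leave the target partially loaded; writing to a staging table as above and swapping or merging it into the real table only on success is one common way to keep the Oracle side consistent.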