Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-02 Thread Mich Talebzadeh
Thanks Sean. I guess I was being pedantic. In any case, if the source table does not exist, the spark.read call is going to fall over one way or another!
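For reference, a minimal sketch of why it falls over early: a JDBC load() contacts the database to resolve the schema at read time, so a missing table fails at load() itself, before any action runs. (The URL and table name below are illustrative placeholders, not from the thread.)

  import scala.util.{Try, Success, Failure}

  // load() queries the database for the schema, so a missing table
  // throws here, not at some later action.
  Try(
    spark.read
      .format("jdbc")
      .option("url", "jdbc:hive2://host:10000/default")   // placeholder URL
      .option("dbtable", "default.no_such_table")         // placeholder table
      .load()
  ) match {
    case Success(df) => println(s"read OK: ${df.columns.mkString(",")}")
    case Failure(e)  => println(s"failed at load(): ${e.getMessage}")
  }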

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-02 Thread Sean Owen
It would be quite trivial. None of that affects any of the Spark execution. It doesn't seem like it helps though - you are just swallowing the cause. Just let it fly?
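In code, "let it fly" just means not wrapping the read at all, so the original exception reaches the caller with its cause and stack trace intact. A minimal sketch, assuming the thread's connection values (jdbcUrl and friends) are defined elsewhere:

  // No Try/match: a bad URL or missing table throws here and the
  // exception propagates unmodified to whatever invoked this code.
  val hiveDF = spark.read
    .format("jdbc")
    .option("url", jdbcUrl)
    .option("dbtable", HiveSchema + "." + HiveTable)
    .option("user", HybridServerUserName)
    .option("password", HybridServerPassword)
    .load()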

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-02 Thread Mich Talebzadeh
As a side question, consider the following JDBC read: val lowerBound = 1L val upperBound = 100L val numPartitions = 10 val partitionColumn = "id" val HiveDF = Try(spark.read. format("jdbc"). option("url", jdbcUrl). option("driver", HybridServerDriverName).
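A completed version of that side question might look like the sketch below (untested; it assumes the thread's connection values are defined). Wrapping the reader in Try does not change how Spark plans or executes the partitioned read; it only captures any exception thrown while the schema is resolved.

  import scala.util.Try
  import org.apache.spark.sql.DataFrame

  val lowerBound = 1L
  val upperBound = 100L
  val numPartitions = 10
  val partitionColumn = "id"

  // Spark splits the scan into numPartitions parallel queries over
  // partitionColumn, between lowerBound and upperBound.
  val hiveDFTry: Try[DataFrame] = Try(
    spark.read
      .format("jdbc")
      .option("url", jdbcUrl)
      .option("driver", HybridServerDriverName)
      .option("dbtable", HiveSchema + "." + HiveTable)
      .option("user", HybridServerUserName)
      .option("password", HybridServerPassword)
      .option("partitionColumn", partitionColumn)
      .option("lowerBound", lowerBound)
      .option("upperBound", upperBound)
      .option("numPartitions", numPartitions.toString)
      .load()
  )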

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Mich Talebzadeh
Many thanks Russell. That worked: val HiveDF = Try(spark.read. format("jdbc"). option("url", jdbcUrl). option("dbtable", HiveSchema+"."+HiveTable). option("user", HybridServerUserName). option("password", HybridServerPassword). load()) match {

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Russell Spitzer
You can't use df as both the name of the value returned from the Try and the name of the pattern variable in Success. You also probably want the name bound in the match to line up with the value the match returns. So val df = Try(spark.read. format("jdbc"). option("url", jdbcUrl).
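Putting Russell's advice into a full sketch (assuming the thread's connection values are defined): bind a fresh lowercase name in the Success arm and return it, rather than reusing the name of the val being defined.

  import scala.util.{Try, Success, Failure}

  val hiveDF = Try(
    spark.read
      .format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", HiveSchema + "." + HiveTable)
      .option("user", HybridServerUserName)
      .option("password", HybridServerPassword)
      .load()
  ) match {
    case Success(df) => df      // fresh lowercase pattern variable
    case Failure(e)  => throw e // rethrow the original cause unchanged
  }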

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Mich Talebzadeh
Many thanks Sean. Maybe I misunderstood your point? var DF = Try(spark.read. format("jdbc"). option("url", jdbcUrl). option("dbtable", HiveSchema+"."+HiveTable). option("user", HybridServerUserName). option("password", HybridServerPassword). load()) match {

Re: Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Sean Owen
You are reusing HiveDF for two vars and it ends up ambiguous. Just rename one.
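The error in the subject line is a Scala rule rather than a Spark one: in a pattern, an identifier starting with an uppercase letter is read as a reference to an existing stable value, so Success(HiveDF) points back at the HiveDF being defined, and the compiler rejects the recursive definition. A minimal pair, independent of Spark:

  import scala.util.{Try, Success, Failure}

  // Does not compile: "recursive value DF needs type".
  // Uppercase DF in the pattern is a reference to the val under
  // definition, not a fresh binding.
  // val DF = Try(1 + 1) match {
  //   case Success(DF) => DF
  //   case Failure(e)  => throw e
  // }

  // Compiles: lowercase n is a fresh pattern variable.
  val result = Try(1 + 1) match {
    case Success(n) => n
    case Failure(e) => throw e
  }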

Exception handling in Spark throws recursive value for DF needs type error

2020-10-01 Thread Mich Talebzadeh
Hi, Spark version 2.3.3 on Google Dataproc. I am trying to connect Spark to other databases over JDBC (https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html) to read from a Hive table on-prem using Spark in the cloud. This works OK without a Try enclosure. import spark.implicits._

Re: Exception handling in Spark

2020-05-05 Thread Brandon Geise
Sure, just do case Failure(e) => throw e > Hi Brandon. In dealing with df, case Failure(e) => throw new Exception
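The distinction matters for debugging: throw e resurfaces the original exception complete with its type, stack trace and cause, whereas throw new Exception("foo") discards both unless the cause is passed along explicitly. A small sketch (the path is illustrative):

  import scala.util.{Try, Success, Failure}

  val df = Try(spark.read.csv("hdfs://namenode:8020/data/input.csv")) match {
    case Success(d) => d
    case Failure(e) => throw e  // original type, message, cause, stack trace
    // alternative that keeps the cause while adding context:
    // case Failure(e) => throw new Exception("csv read failed", e)
  }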

Re: Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh

Re: Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh

Re: Exception handling in Spark

2020-05-05 Thread Brandon Geise
Match needs to be lower case "match" > scala> import scala.util.{Try, Success, Failure} import scala.util.{Try, Success, Failure}

Re: Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh

Re: Exception handling in Spark

2020-05-05 Thread Brandon Geise
import scala.util.Try import scala.util.Success import scala.util.Failure > This is what I get: scala> val df = Try(spark.read.

Re: Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh
> Give this approach a try? val df = Try(spark.read.csv("")) match { case Success(df) => df case Failure(e) => throw new Exception("foo") }

Re: Exception handling in Spark

2020-05-05 Thread Brandon Geise
> I am trying this approach: val broadcastValue = "123456789" // I assume this will be sent as a constant for the batch // Create a DF on top of XML try { val df = spark.read.

Re: Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh

Re: Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh

Re: Exception handling in Spark

2020-05-05 Thread Brandon Geise
> Thanks Brandon! I should have remembered that. Basically the code gets out with sys.exit(1) if it cannot find the file. I guess there is no easy way

Re: Exception handling in Spark

2020-05-05 Thread Todd Nist

Re: Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh

Re: Exception handling in Spark

2020-05-05 Thread Brandon Geise
You could use the Hadoop API and check if the file exists.
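A sketch of that suggestion with the Hadoop FileSystem API, wired to the thread's bail-out style (the path and rowTag are illustrative, and the spark-xml package is assumed for the XML read):

  import org.apache.hadoop.fs.{FileSystem, Path}

  val inputPath = new Path("hdfs://namenode:8020/data/input.xml")  // illustrative
  val fs: FileSystem = inputPath.getFileSystem(spark.sparkContext.hadoopConfiguration)

  if (!fs.exists(inputPath)) {
    println(s"$inputPath does not exist")
    sys.exit(1)   // matches the thread's existing bail-out behaviour
  }

  // Safe to read now. Note this is check-then-read, so the file could
  // still vanish between the two calls; the Try pattern covers that gap.
  val df = spark.read
    .format("com.databricks.spark.xml")   // assumed package
    .option("rowTag", "row")              // assumed row tag
    .load(inputPath.toString)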

Exception handling in Spark

2020-05-05 Thread Mich Talebzadeh
Hi, As I understand it, exception handling in Spark only makes sense if one attempts an action, as opposed to lazy transformations? Let us assume that I am reading an XML file from the HDFS directory and create a dataframe DF on it: val broadcastValue = "123456789" // I assume this will be sent as a constant for the batch
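On the action-versus-transformation point, two things are true at once: spark.read validates its source eagerly (a missing path or table fails at the read call itself), while errors inside transformations only surface once an action forces execution. A small illustration of the lazy half (the UDF and values are made up):

  import org.apache.spark.sql.functions.{col, udf}

  // A UDF that throws on zero; applying it is only a transformation.
  val boom = udf((n: Long) => 100L / n)

  val df = spark.range(5).toDF("n").withColumn("inv", boom(col("n")))
  // No error yet: withColumn is lazy and nothing has executed.

  // df.show()  // an action: only here would the ArithmeticException appear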