Thanks Sean. I guess I was being pedantic. In any case, if the source table
does not exist when spark.read runs, it is going to fall over one way or
another!
On Fri, 2 Oct 2020 at 15:55, Sean Owen wrote:
It would be quite trivial. None of that affects any of the Spark execution.
It doesn't seem like it helps though - you are just swallowing the cause.
Just let it fly?
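Sean's point about swallowing the cause can be sketched in plain Scala (a sketch: readTable is a hypothetical stand-in for the Spark read; no cluster is needed to see the point):

```scala
import scala.util.{Try, Success, Failure}

// Hypothetical stand-in for a read that fails.
def readTable(): String = throw new IllegalStateException("table not found")

val attempt = Try(readTable()) match {
  case Success(v) => Right(v)
  // `throw new Exception("foo")` here would discard the original error.
  // Either rethrow e directly, or wrap it and pass the cause along:
  case Failure(e) => Left(new Exception("failed to read table", e))
}
// attempt.left.toOption.get.getCause is still the original IllegalStateException
```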
On Fri, Oct 2, 2020 at 9:34 AM Mich Talebzadeh wrote:
As a side question, consider the following JDBC read:
val lowerBound = 1L
val upperBound = 100L
val numPartitions = 10
val partitionColumn = "id"
val HiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("driver", HybridServerDriverName).
Many thanks Russell. That worked
val HiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()) match {
You can't use df as the name of the value returned from the Try and also as
the match variable in the Success case. You also probably want the variable
name in the match to line up with what the match returns. So:
val df = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
Many thanks Sean.
Maybe I misunderstood your point?
var DF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()) match {
You are reusing HiveDF for two vars and it ends up ambiguous. Just rename
one.
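Putting the advice in this thread together, a corrected version might look like the sketch below (the connection options and identifiers such as jdbcUrl and HiveSchema are taken from the snippets quoted above; the Failure branch simply rethrows, as suggested elsewhere in the thread):

```scala
import scala.util.{Try, Success, Failure}

// The outer val has a different name from the case variable, so nothing is ambiguous.
val hiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("dbtable", HiveSchema + "." + HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()) match {
    case Success(df) => df      // df is local to this case; hiveDF receives it
    case Failure(e)  => throw e // rethrow rather than swallow the cause
}
```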
On Thu, Oct 1, 2020, 5:02 PM Mich Talebzadeh wrote:
Hi,
Spark version 2.3.3 on Google Dataproc
I am trying to use the JDBC data source
https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
to read from a Hive table on-prem using Spark in the cloud.
This works OK without a Try enclosure.
import spark.implicits._
import
Sure, just do case Failure(e) => throw e
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 6:36 PM
To: Brandon Geise
Cc: Todd Nist , "user @spark"
Subject: Re: Exception handling in Spark
Hi Brandon.
In dealing with
df case Failure(e) => throw new Exception
Match needs to be lower case “match”
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 6:13 PM
To: Brandon Geise
Cc: Todd Nist , "user @spark"
Subject: Re: Exception handling in Spark
scala> import scala.util.{Try, Success, Failure}
import scala.util.{Try, Success, Failure}
import scala.util.Try
import scala.util.Success
import scala.util.Failure
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 6:11 PM
To: Brandon Geise
Cc: Todd Nist , "user @spark"
Subject: Re: Exception handling in Spark
This is what I get
scala> val df = Try(spar
give this approach a try?

val df = Try(spark.read.csv("")) match {
  case Success(df) => df
  case Failure(e) => throw new Exception("foo")
}

From: Mich Talebzadeh
Date: Tues
To: Todd Nist
Cc: Brandon Geise, "user @spark"
Subject: Re: Exception handling in Spark
I am trying this approach
val broadcastValue = "123456789" // I assume this will be sent as a constant
for the batch
// Create a DF on top of XML
try {
val df = spark.read.
Date: Tuesday, May 5, 2020 at 12:45 PM
To: Brandon Geise
Cc: "user @spark"
Subject: Re: Exception handling in Spark
Thanks Brandon!
I should have remembered that.
Basically the code exits with sys.exit(1) if it cannot find the file.
I guess there is no easy way
On Tue, 5 May 2020 at 16:41, Brandon Geise wrote:
You could use the Hadoop API and check if the file exists.
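A sketch of that suggestion, assuming a SparkSession named spark is in scope; the path and rowTag value below are hypothetical:

```scala
import java.io.FileNotFoundException
import org.apache.hadoop.fs.{FileSystem, Path}

// Reuse Spark's Hadoop configuration to obtain the file system.
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val xmlPath = new Path("/data/input/payments.xml") // hypothetical path

if (!fs.exists(xmlPath))
  throw new FileNotFoundException(s"$xmlPath does not exist")

// Safe to read now.
val df = spark.read.
  format("com.databricks.spark.xml").
  option("rowTag", "row"). // hypothetical row tag
  load(xmlPath.toString)
```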
From: Mich Talebzadeh
Date: Tuesday, May 5, 2020 at 11:25 AM
To: "user @spark"
Subject: Exception handling in Spark
Hi,
As I understand exception handling in Spark only makes sense if one
attempts an action as opposed to lazy transformations?
Let us assume that I am reading an XML file from the HDFS directory and
create a dataframe DF on it:
val broadcastValue = "123456789" // I assume this will be sent as a constant for the batch
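The laziness point can be sketched as follows (assuming a SparkSession named spark; file sources typically validate the path eagerly at spark.read, while errors in the data itself only surface once an action runs):

```scala
import scala.util.{Try, Success, Failure}

// Reading may already fail here if the path does not exist.
val df = spark.read.
  format("com.databricks.spark.xml").
  option("rowTag", "row").         // hypothetical row tag
  load("/data/input/payments.xml") // hypothetical path

// Transformations are lazy: nothing is executed yet.
val selected = df.select("id")

// Only an action triggers execution, so wrap the action in Try as well.
Try(selected.count()) match {
  case Success(n) => println(s"row count: $n")
  case Failure(e) => throw e // rethrow, preserving the cause
}
```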