Thanks Sean. I guess I was being pedantic. In any case, if the source table
does not exist, then spark.read is going to fall over one way or another!
On Fri, 2 Oct 2020 at 15:55, Sean Owen wrote:
It would be quite trivial. None of that affects any of the Spark execution.
It doesn't seem like it helps though - you are just swallowing the cause.
Just let it fly?
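
A minimal sketch of the difference, reusing jdbcUrl from the snippets below; the empty-DataFrame fallback is illustrative, not from the thread:

import scala.util.{Try, Success, Failure}

// Swallowing the cause: the original exception is replaced by a default,
// so whatever made the read fail is lost.
val swallowed = Try(spark.read.format("jdbc").option("url", jdbcUrl).load()) match {
  case Success(df) => df
  case Failure(_)  => spark.emptyDataFrame  // the cause is discarded here
}

// Letting it fly: call load() directly and let any failure propagate
// with its full stack trace.
val direct = spark.read.format("jdbc").option("url", jdbcUrl).load()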
On Fri, Oct 2, 2020 at 9:34 AM Mich Talebzadeh wrote:
As a side question, consider the following JDBC read:
val lowerBound = 1L
val upperBound = 100L
val numPartitions = 10
val partitionColumn = "id"
val HiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("driver", HybridServerDriverName).
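
The message is truncated in the digest. Presumably the chain went on to pass the partition settings defined above; a sketch of the complete call, with the dbtable/user/password options taken from the snippets later in the thread and the standard Spark JDBC partitioning options:

val HiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("driver", HybridServerDriverName).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  option("partitionColumn", partitionColumn).
  option("lowerBound", lowerBound.toString).
  option("upperBound", upperBound.toString).
  option("numPartitions", numPartitions.toString).
  load())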
Many thanks Russell. That worked.

val HiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()) match {
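  // The match arms are truncated in the digest; the working version
  // presumably continued along these lines (arm bodies illustrative):
  case Success(df) => df
  case Failure(e) => throw e  // or log e before rethrowing
}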
You can't use df as both the name of the value returned from the Try and the
name of the variable bound in the Success case. You also probably want the
name of the variable bound in the match to line up with what the match
returns. So:
val df = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
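
Spelled out, the renamed pattern might look like this (the throw in the Failure arm is illustrative):

import scala.util.{Try, Success, Failure}

val HiveDF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  load()) match {
  case Success(df) => df   // lowercase df: a fresh variable bound by the pattern
  case Failure(e)  => throw e
}
// Note that a capitalized name like HiveDF in pattern position is read by Scala
// as a stable identifier to compare against, not as a new binding, so the outer
// value and the pattern variable need distinct names.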
Many thanks Sean.
Maybe I misunderstood your point?

var DF = Try(spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()) match {
You are reusing HiveDF for two vars and it ends up ambiguous. Just rename
one.
On Thu, Oct 1, 2020, 5:02 PM Mich Talebzadeh wrote:
Hi,
Spark version 2.3.3 on Google Dataproc
I am trying to use the JDBC data source to connect to other databases
https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
to read from a Hive table on-prem using Spark in the cloud.
This works OK without a Try enclosure.
import spark.implicits._
import scala.util.{Try, Success, Failure}  // needed for the Try enclosure discussed above
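
The rest of the message is cut off. Presumably the plain read, without the Try enclosure, looked along these lines (reconstructed from the options quoted earlier in the thread):

val HiveDF = spark.read.
  format("jdbc").
  option("url", jdbcUrl).
  option("driver", HybridServerDriverName).
  option("dbtable", HiveSchema+"."+HiveTable).
  option("user", HybridServerUserName).
  option("password", HybridServerPassword).
  load()
HiveDF.printSchema()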