Use it
You can set all of the properties (driver, partitionColumn, lowerBound,
upperBound, numPartitions), but you should start with a query through the
driver first: fetch the maximum id, and use it as the upperBound parameter.
The numPartitions value should then be based on your table's size and your
actual cluster capacity.
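A minimal sketch of that flow, assuming a hypothetical my_table with a
numeric id column (the URL, driver class, and partition count are
placeholders, not values from the original post):

// Step 1: fetch the maximum id with a one-row JDBC query.
// The subquery-as-dbtable form and the Long result type are assumptions.
val bounds = sqlContext.read.format("jdbc")
  .options(Map(
    "url"     -> "jdbc:mysql://host:3306/db",
    "driver"  -> "com.mysql.jdbc.Driver",
    "dbtable" -> "(SELECT MAX(id) AS max_id FROM my_table) AS t"))
  .load()
val maxId = bounds.first().getLong(0)

// Step 2: pass it as upperBound so Spark splits the read into parallel tasks.
val df = sqlContext.read.format("jdbc")
  .options(Map(
    "url"             -> "jdbc:mysql://host:3306/db",
    "driver"          -> "com.mysql.jdbc.Driver",
    "dbtable"         -> "my_table",
    "partitionColumn" -> "id",
    "lowerBound"      -> "0",
    "upperBound"      -> maxId.toString,
    "numPartitions"   -> "8"))  // tune to table size and cluster capacity
  .load()

With these options each of the 8 partitions issues its own range query
(WHERE id >= x AND id < y), so the load and any subsequent count run in
parallel.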
Hi Team,
How do we increase the parallelism in Spark SQL?
In Spark Core, we can repartition or pass extra arguments as part of the
transformation.
I am trying the example below:
val df1 = sqlContext.read.format("jdbc").options(Map(...)).load()
val df2 = df1.cache()
df2.count()
Here the count operation runs as a single task, so nothing is parallelized.
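One way to see the problem, as a sketch (the partition count of 8 is
illustrative): without the partitioning options, a JDBC read lands in a
single partition, so the count runs as one task. You can check this, and
repartition afterwards, though the initial fetch itself still happens in
one task:

// Inspect how many partitions the JDBC read produced.
println(df1.rdd.partitions.length)  // 1 when no partitioning options are set

// Repartitioning spreads the cached data and the count across the cluster,
// but the initial JDBC fetch still ran as a single task.
val df2 = df1.repartition(8).cache()
df2.count()

The real fix is to set partitionColumn, lowerBound, upperBound, and
numPartitions at read time, as described in the answer above, so the
database read itself is split across tasks.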