Hi Team,

How do we increase the parallelism in Spark SQL?
In Spark Core we can repartition an RDD, or pass extra arguments (for example
a partition count) as part of the transformation.
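
For example, in Spark Core something like this works (the path and partition
counts below are just placeholders):

// control parallelism at read time, or repartition afterwards
val rdd = sc.textFile("hdfs:///data/input", minPartitions = 8)
val repartitioned = rdd.repartition(8)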

I am trying the example below:

val df1 = sqlContext.read.format("jdbc").options(Map(...)).load()  // JDBC read
val df2 = df1.cache()
df2.count()

Here the count runs as a single task; I couldn't find a way to increase the
parallelism.
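
Is passing the partitioning options of the JDBC source the right approach? A
sketch of what I have in mind, assuming the table has a numeric column id (the
URL, table name, and bounds below are placeholders):

val df = sqlContext.read.format("jdbc").options(Map(
  "url"             -> "jdbc:mysql://host:3306/db",  // placeholder URL
  "dbtable"         -> "mytable",                    // placeholder table
  "partitionColumn" -> "id",      // numeric column to split the scan on
  "lowerBound"      -> "1",       // min value of id
  "upperBound"      -> "100000",  // max value of id
  "numPartitions"   -> "8"        // number of parallel reads/tasks
)).load()

If I understand the docs correctly, the scan (and the count on top of it)
should then run as numPartitions tasks instead of one.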
Thanks in advance

Thanks
Siva
