Re: Spark SQL Parallelism - While reading from Oracle

2016-08-10 Thread @Sanjiv Singh
You can set all of the properties (driver, partitionColumn, lowerBound, upperBound, numPartitions). Start by using the driver to query the table for its maximum id first. Now you have the maximum id, so you can use it for the upperBound parameter. numPartitions can then be chosen based on your table's size and your actual cluster resources.
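A minimal sketch of the two steps, assuming an Oracle table MY_TABLE with a numeric key column ID; the connection URL, credentials, and the numPartitions value of 10 are placeholders, not taken from the thread:

// Step 1: query the maximum ID so it can be used as upperBound.
// A single-partition read is fine here since it returns one row.
val maxId = sqlContext.read.format("jdbc")
  .options(Map(
    "url"      -> "jdbc:oracle:thin:@//dbhost:1521/ORCL",
    "driver"   -> "oracle.jdbc.OracleDriver",
    "dbtable"  -> "(SELECT MAX(ID) AS MAX_ID FROM MY_TABLE)",
    "user"     -> "scott",
    "password" -> "tiger"))
  .load()
  .first().getDecimal(0).longValue()  // Oracle NUMBER arrives as a BigDecimal

// Step 2: read the full table split into ranges of ID. Spark issues
// one JDBC query per partition, each with a WHERE clause covering its
// slice of [lowerBound, upperBound], and reads them in parallel.
val df = sqlContext.read.format("jdbc")
  .options(Map(
    "url"             -> "jdbc:oracle:thin:@//dbhost:1521/ORCL",
    "driver"          -> "oracle.jdbc.OracleDriver",
    "dbtable"         -> "MY_TABLE",
    "user"            -> "scott",
    "password"        -> "tiger",
    "partitionColumn" -> "ID",
    "lowerBound"      -> "1",
    "upperBound"      -> maxId.toString,
    "numPartitions"   -> "10"))
  .load()

Note that rows with ID outside the bounds are not dropped; they just all land in the first or last partition, so a stale upperBound skews the partition sizes rather than losing data.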

Spark SQL Parallelism - While reading from Oracle

2016-08-10 Thread Siva A
Hi Team, How do we increase the parallelism in Spark SQL? In Spark Core, we can re-partition or pass extra arguments as part of the transformation. I am trying the example below:

val df1 = sqlContext.read.format("jdbc").options(Map(...)).load()
val df2 = df1.cache()
df2.count()

Here the count operation uses only a single task.
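A quick way to confirm this behavior (a sketch, assuming df1 is the DataFrame from the snippet above):

// With no partitionColumn/lowerBound/upperBound/numPartitions set,
// the JDBC source produces a single partition, so count() runs as
// one task against Oracle.
println(df1.rdd.partitions.length)  // prints 1 for an unpartitioned JDBC read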