Re: lower/upperBound not working/spark 1.3

2015-06-14 Thread Sathish Kumaran Vairavelu
Hi, I am also facing the same issue. Is it possible to view the actual query passed to the database? Has anyone tried that? Also, what if we don't give the upper and lower bound for partitioning? Would we end up with data skew? Thanks, Sathish On Sun, Jun 14, 2015 at 5:02 AM Sujeevan suje...@gmail.com wrote:
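A quick way to check both questions against Spark 1.3's SQLContext.jdbc is sketched below; the URL and table name are the ones quoted later in this thread, sc is assumed to be an existing SparkContext, and the SQL that Spark actually sends can also be watched on the database side (for example with PostgreSQL's log_statement setting).

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)

    // Without columnName/lowerBound/upperBound/numPartitions, Spark 1.3 reads
    // the whole table through a single partition -- no skew, but no parallelism
    // either.
    val df = sqlContext.jdbc(
      "jdbc:postgresql://localhost:5430/dbname?user=user&password=111",
      "se_staging.exp_table3")

    println(s"number of partitions: ${df.rdd.partitions.length}")  // expect 1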

Re: lower/upperBound not working/spark 1.3

2015-06-14 Thread Sujeevan
I also thought that it was an issue. After investigating it further, I found this: https://issues.apache.org/jira/browse/SPARK-6800 Here is the updated documentation of the *org.apache.spark.sql.jdbc.JDBCRelation#columnPartition* method. Notice that lowerBound and upperBound are just used to decide
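The upshot of SPARK-6800 is that lowerBound and upperBound only decide the partition stride; they do not filter rows, so the full table is always returned. A hedged sketch of the Spark 1.3 call with bounds that actually span the column's values is below; the upperBound of 5000000 (taken from Marek's ~5*10^6 cardinality remark) and numPartitions of 10 are illustrative assumptions, not values from the thread.

    // Sketch only: the bounds control how the cs_id range is split into strides;
    // rows outside [lowerBound, upperBound] still land in the first or last
    // partition, so no data is dropped either way.
    val jdbcDF = sqlContext.jdbc(
      url = "jdbc:postgresql://localhost:5430/dbname?user=user&password=111",
      table = "se_staging.exp_table3",
      columnName = "cs_id",     // integral partitioning column
      lowerBound = 1L,          // roughly the smallest cs_id
      upperBound = 5000000L,    // roughly the largest cs_id (assumed)
      numPartitions = 10)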

lower/upperBound not working/spark 1.3

2015-03-22 Thread Marek Wiewiorka
Hi All - I'm trying to use the new SQLContext API for populating a DataFrame from a JDBC data source, like this: val jdbcDF = sqlContext.jdbc(url = "jdbc:postgresql://localhost:5430/dbname?user=user&password=111", table = "se_staging.exp_table3", columnName = "cs_id", lowerBound = 1, upperBound = 1,

Re: lower/upperBound not working/spark 1.3

2015-03-22 Thread Ted Yu
From the javadoc of JDBCRelation#columnPartition(): "Given a partitioning schematic (a column of integral type, a number of partitions, and upper and lower bounds on the column's value), generate ..." In your example, 1 and 1 are values of the cs_id column. Looks like all the values in
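To make the javadoc concrete, here is a rough sketch (not the actual Spark source, which lives in org.apache.spark.sql.jdbc.JDBCRelation#columnPartition) of the kind of per-partition WHERE clauses the method is meant to generate; it assumes numPartitions > 1, and with lowerBound equal to upperBound the stride collapses to zero, which fits Ted's point that the bounds describe cs_id values rather than row counts.

    // Illustrative re-implementation only; the real method may differ in details,
    // especially around edge cases.
    def whereClauses(column: String, lowerBound: Long, upperBound: Long,
                     numPartitions: Int): Seq[String] = {
      val stride = upperBound / numPartitions - lowerBound / numPartitions
      (0 until numPartitions).map { i =>
        val lo = lowerBound + i * stride
        val hi = lo + stride
        if (i == 0) s"$column < $hi or $column is null"
        else if (i == numPartitions - 1) s"$column >= $lo"
        else s"$column >= $lo AND $column < $hi"
      }
    }

    // e.g. whereClauses("cs_id", 1, 5000000, 4) -- the first and last clauses are
    // open-ended, which is why rows are never filtered out by the bounds.
    whereClauses("cs_id", 1, 5000000, 4).foreach(println)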

Re: lower/upperBound not working/spark 1.3

2015-03-22 Thread Marek Wiewiorka
...I even tried setting the upper/lower bounds to the same value, like 1 or 10, with the same result. cs_id is a column with a cardinality of ~5*10^6, so this is not the case here. Regards, Marek 2015-03-22 20:30 GMT+01:00 Ted Yu yuzhih...@gmail.com: From the javadoc of JDBCRelation#columnPartition(): *

Re: lower/upperBound not working/spark 1.3

2015-03-22 Thread Ted Yu
I went over JDBCRelation#columnPartition() but didn't find an obvious clue (you can add more logging to confirm that the partitions were generated correctly). It looks like the issue may be somewhere else. Cheers On Sun, Mar 22, 2015 at 12:47 PM, Marek Wiewiorka marek.wiewio...@gmail.com wrote:
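One way to follow that suggestion without adding logging to Spark itself is to count rows per partition from the driver; a minimal sketch, assuming jdbcDF is the DataFrame from Marek's snippet:

    // Counts how many rows each JDBC partition actually returned. A heavily
    // lopsided result (e.g. everything in one partition) points at the bounds;
    // a reasonable spread suggests the problem is elsewhere.
    val rowsPerPartition = jdbcDF.rdd
      .mapPartitionsWithIndex { (idx, rows) => Iterator((idx, rows.size)) }
      .collect()

    rowsPerPartition.foreach { case (idx, n) => println(s"partition $idx: $n rows") }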