https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.JdbcRDD

The arguments after the connection factory are the SQL string, the lower bound,
the upper bound, and the number of partitions (followed by the row-mapping function).
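
For reference, the constructor looks roughly like this (paraphrasing the Scaladoc
linked above; check your Spark version for the exact signature):

    class JdbcRDD[T](
        sc: SparkContext,
        getConnection: () => Connection,  // called once per partition
        sql: String,                      // must contain two '?' placeholders
        lowerBound: Long,                 // overall lower bound of the numeric range
        upperBound: Long,                 // overall upper bound of the numeric range
        numPartitions: Int,
        mapRow: ResultSet => T)           // converts each row, e.g. your convert()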

Your call with SELECT * FROM MEMBERS LIMIT ? OFFSET ?, 0, 100, 1
is therefore run as

SELECT * FROM MEMBERS LIMIT 0 OFFSET 100

Naturally, LIMIT 0 yields 0 rows.

JdbcRDD is designed to be used with multiple partitions over some kind of numeric
index column: the bounds describe the range of that column, not LIMIT/OFFSET values,
and each partition queries its own sub-range.
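
Here is a small sketch of how that range gets split (my own illustration of the
partitioning arithmetic, not the actual Spark source, so treat the exact rounding
as approximate):

    // Split an inclusive [lowerBound, upperBound] range into one (start, end)
    // pair per partition; each pair is bound to the two '?' placeholders.
    def splitRange(lowerBound: Long, upperBound: Long, numPartitions: Int): Seq[(Long, Long)] = {
      val length = BigInt(1) + upperBound - lowerBound
      (0 until numPartitions).map { i =>
        val start = lowerBound + (i * length / numPartitions).toLong
        val end   = lowerBound + ((i + 1) * length / numPartitions).toLong - 1
        (start, end)
      }
    }

    // splitRange(1, 1000, 4) => (1,250), (251,500), (501,750), (751,1000)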

Try something more like

SELECT * FROM MEMBERS WHERE ID >= ? AND ID <= ?, 0, howeverManyRowsYouHave, 8
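
Adapting your snippet (assuming MEMBERS has a numeric ID column; howeverManyRowsYouHave
is a placeholder for the real maximum ID, and 8 partitions is just an example -- I
haven't run this against HANA):

    import java.sql.{DriverManager, ResultSet}
    import org.apache.spark.rdd.JdbcRDD

    val rdd = new JdbcRDD(
      sc,
      () => {
        Class.forName("com.sap.db.jdbc.Driver").newInstance()
        DriverManager.getConnection(
          "jdbc:sap://54.69.200.113:30015/?currentschema=LIVE2",
          "mujadid", "786Xyz123")
      },
      // two placeholders bound to each partition's inclusive ID sub-range
      "SELECT * FROM MEMBERS WHERE ID >= ? AND ID <= ?",
      0,                        // lowerBound: smallest ID
      howeverManyRowsYouHave,   // upperBound: largest ID (placeholder)
      8,                        // numPartitions
      (r: ResultSet) => convert(r))

    println(rdd.count())

This reuses your sc and convert as-is.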



On Fri, May 1, 2015 at 3:56 PM, Hafiz Mujadid <hafizmujadi...@gmail.com>
wrote:

> Hi all!
> I am trying to read hana database using spark jdbc RDD
> here is my code
> def readFromHana() {
>     val conf = new SparkConf()
>     conf.setAppName("test").setMaster("local")
>     val sc = new SparkContext(conf)
>     val rdd = new JdbcRDD(sc, () => {
>       Class.forName("com.sap.db.jdbc.Driver").newInstance()
>       DriverManager.getConnection(
>         "jdbc:sap://54.69.200.113:30015/?currentschema=LIVE2",
>         "mujadid", "786Xyz123")
>     },
>       "SELECT * FROM MEMBERS LIMIT ? OFFSET ?",
>       0, 100, 1,
>       (r: ResultSet) => convert(r))
>     println(rdd.count());
>     sc.stop()
>   }
>   def convert(rs: ResultSet):String={
>           val rsmd = rs.getMetaData()
>           val numberOfColumns = rsmd.getColumnCount()
>           var i = 1
>           val row=new StringBuilder
>           while (i <= numberOfColumns) {
>             row.append( rs.getString(i)+",")
>             i += 1
>           }
>           row.toString()
>    }
>
> The resultant count is 0
>
> Any suggestion?
>
> Thanks
>
