Try passing maxID.toString; I think it wants the number as a string.
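
Something along these lines (untested sketch; HiveContext, _ORACLEserver, _username, _password and maxID are the values from your snippets below):

val s = HiveContext.read.format("jdbc").options(
  Map("url" -> _ORACLEserver,
    "dbtable" -> "(SELECT ID, CLUSTERED, SCATTERED, RANDOMISED, RANDOM_STRING, SMALL_VC, PADDING FROM scratchpad.dummy)",
    "partitionColumn" -> "ID",
    "lowerBound" -> "1",
    // maxID.toString keeps every value in the Map a String, so the Map is
    // inferred as Map[String,String] and matches the options() overload
    "upperBound" -> maxID.toString,
    "numPartitions" -> "4",
    "user" -> _username,
    "password" -> _password)).load

One thing to watch: maxID came back as a BigDecimal with a scale (100000000.0000000000), so a plain toString may still trip Spark's Long parsing of upperBound; if it does, maxID.longValueExact.toString (or maxID.toBigInteger.toString) should give a plain integer string.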

On Mon, May 29, 2017 at 3:12 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Thanks gents, but no luck!
>
> scala> val s = HiveContext.read.format("jdbc").options(
>      | Map("url" -> _ORACLEserver,
>      | "dbtable" -> "(SELECT ID, CLUSTERED, SCATTERED, RANDOMISED,
> RANDOM_STRING, SMALL_VC, PADDING FROM scratchpad.dummy)",
>      | "partitionColumn" -> "ID",
>      | "lowerBound" -> "1",
>      | "upperBound" -> maxID,
>      | "numPartitions" -> "4",
>      |  "user" -> _username,
>      | "password" -> _password)).load
> <console>:34: error: overloaded method value options with alternatives:
>   (options: java.util.Map[String,String])org.apache.spark.sql.DataFrameReader <and>
>   (options: scala.collection.Map[String,String])org.apache.spark.sql.DataFrameReader
>  cannot be applied to (scala.collection.immutable.Map[String,Comparable[_ >: java.math.BigDecimal with String <: Comparable[_ >: java.math.BigDecimal with String <: java.io.Serializable] with java.io.Serializable] with java.io.Serializable])
>        val s = HiveContext.read.format("jdbc").options(
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> Disclaimer: Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 29 May 2017 at 20:12, ayan guha <guha.a...@gmail.com> wrote:
>
>> You are using maxId as a string literal. Try removing the quotes around
>> maxId.
>>
>> On Tue, 30 May 2017 at 2:56 am, Jörn Franke <jornfra...@gmail.com> wrote:
>>
>>> I think you need to remove the quotes around maxID
>>>
>>> On 29. May 2017, at 18:11, Mich Talebzadeh <mich.talebza...@gmail.com>
>>> wrote:
>>>
>>> Hi,
>>>
>>> This JDBC connection works with an Oracle table whose primary key is ID:
>>>
>>> val s = HiveContext.read.format("jdbc").options(
>>> Map("url" -> _ORACLEserver,
>>> "dbtable" -> "(SELECT ID, CLUSTERED, SCATTERED, RANDOMISED,
>>> RANDOM_STRING, SMALL_VC, PADDING FROM scratchpad.dummy)",
>>> "partitionColumn" -> "ID",
>>> "lowerBound" -> "1",
>>> "upperBound" -> "100000000",
>>> "numPartitions" -> "4",
>>> "user" -> _username,
>>> "password" -> _password)).load
>>>
>>> Note that both lowerBound and upperBound for the ID column are fixed.
>>>
>>> However, I tried to work out upperBound dynamically as follows:
>>>
>>> //
>>> // Get maxID first
>>> //
>>> scala> val maxID = HiveContext.read.format("jdbc").options(Map("url" ->
>>> _ORACLEserver,"dbtable" -> "(SELECT MAX(ID) AS maxID FROM
>>> scratchpad.dummy)",
>>>      | "user" -> _username, "password" -> _password)).load().collect.app
>>> ly(0).getDecimal(0)
>>> maxID: java.math.BigDecimal = 100000000.0000000000
>>>
>>> and this fails
>>>
>>> scala> val s = HiveContext.read.format("jdbc").options(
>>>      | Map("url" -> _ORACLEserver,
>>>      | "dbtable" -> "(SELECT ID, CLUSTERED, SCATTERED, RANDOMISED,
>>> RANDOM_STRING, SMALL_VC, PADDING FROM scratchpad.dummy)",
>>>      | "partitionColumn" -> "ID",
>>>      | "lowerBound" -> "1",
>>>      | "upperBound" -> "maxID",
>>>      | "numPartitions" -> "4",
>>>      | "user" -> _username,
>>>      | "password" -> _password)).load
>>> java.lang.NumberFormatException: For input string: "maxID"
>>>   at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>>>   at java.lang.Long.parseLong(Long.java:589)
>>>   at java.lang.Long.parseLong(Long.java:631)
>>>   at scala.collection.immutable.StringLike$class.toLong(StringLike.scala:276)
>>>   at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
>>>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:42)
>>>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
>>>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
>>>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
>>>   ... 56 elided
>>>
>>>
>>> Any ideas how this can work?
>>>
>>> Thanks
>>>
>>> --
>> Best Regards,
>> Ayan Guha
>>
>
>


-- 
Donald Drake
Drake Consulting
http://www.drakeconsulting.com/
https://twitter.com/dondrake
800-733-2143
