p(0) is a String, so you need to convert it explicitly to a Long, e.g.
p(0).trim.toLong. You also need to convert p(2) with toInt. For the
BigDecimal fields, create BigDecimal objects from your String values.
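For example, here is a minimal sketch of the conversions on a single sample line (the values are made up to match your schema). Note that split("|") is a subtle trap: String.split takes a regex, and "|" is the regex alternation operator, so it splits between every character; use the Char overload split('|') instead.

```scala
// Hypothetical sample line in your pipe-delimited format (made-up values)
val line = "1426635300|U123|2|A|10.5|20.0|0.525"

// split('|') splits on the literal pipe character;
// split("|") would treat "|" as a regex and split between every character
val p = line.split('|')

val tstamp   = p(0).trim.toLong      // Long
val secnt    = p(2).trim.toInt       // Int
val blockNum = BigDecimal(p(4).trim) // BigDecimal from String

println((tstamp, secnt, blockNum))
```

The same conversions then go inside your map, roughly: ROW_A(p(0).trim.toLong, p(1), p(2).trim.toInt, p(3), BigDecimal(p(4).trim), BigDecimal(p(5).trim), BigDecimal(p(6).trim)).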

On Tue, Mar 17, 2015 at 5:55 PM, BASAK, ANANDA <ab9...@att.com> wrote:

>  Hi All,
>
> I am very new in Spark world. Just started some test coding from last
> week. I am using spark-1.2.1-bin-hadoop2.4 and scala coding.
>
> I am having issues while using Date and decimal data types. Following is
> my code that I am simply running on scala prompt. I am trying to define a
> table and point that to my flat file containing raw data (pipe delimited
> format). Once that is done, I will run some SQL queries and put the output
> data in to another flat file with pipe delimited format.
>
>
>
> *******************************************************
>
> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
>
> import sqlContext.createSchemaRDD
>
>
>
>
>
> // Define row and table
>
> case class ROW_A(
>
>   TSTAMP:           Long,
>
>   USIDAN:             String,
>
>   SECNT:                Int,
>
>   SECT:                   String,
>
>   BLOCK_NUM:        BigDecimal,
>
>   BLOCK_DEN:        BigDecimal,
>
>   BLOCK_PCT:        BigDecimal)
>
>
>
> val TABLE_A =
> sc.textFile("/Myhome/SPARK/files/table_a_file.txt").map(_.split("|")).map(p
> => ROW_A(p(0), p(1), p(2), p(3), p(4), p(5), p(6)))
>
>
>
> TABLE_A.registerTempTable("TABLE_A")
>
>
>
> ***************************************************
>
>
>
> The second-to-last command gives an error like the following:
>
> <console>:17: error: type mismatch;
>
> found   : String
>
> required: Long
>
>
>
> Looks like the content from my flat file is always treated as String and
> not as Date or decimal. How can I make Spark take these values as Date or
> decimal types?
>
>
>
> Regards
>
> Ananda
>
