I think this is a well-known issue, but I need help finding a way around it
if it is still unresolved. I have a dataset with more than 70 columns. To fit
all of the columns into my RDD, I am experimenting with the following. (I
intend to use InputData to parse the file, with 3 or 4 ColumnSets to
accommodate the full list of variables.)

case class ColumnSet(C1: Double, C2: Double, C3: Double)
case class InputData(EQN: String, ts: String, Set1: ColumnSet, Set2: ColumnSet)

val set1 = ColumnSet(1,2,3)
val a = InputData("a","a",set1,set1) 

This returns the following:

<console>:16: error: type mismatch;
 found   : ColumnSet
 required: ColumnSet
       val a = InputData("a","a",set1,set1)

Whereas the same code works fine in my Scala console.

Is there a workaround for my problem?
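
For context, here is roughly how I intend to use InputData to parse the file
once this works; the path, delimiter, and field positions below are just
placeholders:

val lines = sc.textFile("/path/to/data.csv")    // placeholder path
val parsed = lines.map { line =>
  val f = line.split(",")                       // assuming comma-delimited input
  InputData(f(0), f(1),
    ColumnSet(f(2).toDouble, f(3).toDouble, f(4).toDouble),
    ColumnSet(f(5).toDouble, f(6).toDouble, f(7).toDouble))
}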

Regards
Ram


