Tobias,

Please find the csv and scala files attached (sample.gz); the steps to reproduce are below:

1. Copy the csv files into the current directory.
2. Open spark-shell from this directory.
3. Run the "one_scala" file, which creates object files from the csv files in the
current directory (a rough sketch of this file is shown after the steps below).
4. Restart spark-shell.
5. a. Run the "two_scala" file; it gives an error while loading office_csv.
    b. If we instead edit the "two_scala" file to contain the following:

-----------------------------------------------------------------------------------
case class person(id: Int, name: String, fathername: String, officeid: Int) 
case class office(id: Int, name: String, landmark: String, areacode: String) 
sc.objectFile[office]("office_obj").count
sc.objectFile[person]("person_obj").count 
--------------------------------------------------------------------------------
then running it gives an error while loading person_csv instead.
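
In case it helps, one_scala does roughly the following to build the object files
(the field parsing shown here is only indicative; the exact code is in the attached
files):

-----------------------------------------------------------------------------------
case class person(id: Int, name: String, fathername: String, officeid: Int)
case class office(id: Int, name: String, landmark: String, areacode: String)

// Parse each csv line into a case class instance and save the RDD as an object file.
sc.textFile("person_csv")
  .map(_.split(","))
  .map(f => person(f(0).trim.toInt, f(1).trim, f(2).trim, f(3).trim.toInt))
  .saveAsObjectFile("person_obj")

sc.textFile("office_csv")
  .map(_.split(","))
  .map(f => office(f(0).trim.toInt, f(1).trim, f(2).trim, f(3).trim))
  .saveAsObjectFile("office_obj")
-----------------------------------------------------------------------------------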

Regards,
Rahul

sample.gz <http://apache-spark-user-list.1001560.n3.nabble.com/file/n20435/sample.gz>


