Hi,

I am new to Spark SQL and would like to know how to read all the
columns from a file in Spark SQL. I have referred to the programming
guide here:
http://people.apache.org/~tdas/spark-1.0-docs/sql-programming-guide.html

The example says:

val people = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))

But instead of explicitly specifying p(0), p(1), and so on, I would
like to read all the columns from the file. Writing them out by hand
would be tedious if my source dataset has a large number of columns.

Is there any shortcut for that?
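To show what I mean: with a plain RDD I can keep every field without naming them, but then I lose the schema that Spark SQL needs. This is just a sketch of what I have tried so far (assuming the same people.txt from the guide, and a running SparkContext `sc`); I am not sure whether Spark SQL offers an equivalent:

```scala
// Keep all fields of each line as an Array[String],
// without listing p(0), p(1), ... individually.
val rows = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))          // RDD[Array[String]]

// I can access any column by index, e.g. rows.map(r => r(0)),
// but this RDD has no schema, so I cannot register it as a table
// the way the Person case-class example does.
```

Is there a way to turn something like this into a schema-aware table without writing out every column by hand?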

Also, instead of a single file, I would like to read multiple files
that share a similar structure from a directory.
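For the multiple-files case, from what I can tell sc.textFile seems to accept a directory or a glob pattern rather than a single file; is this the right approach? (Paths below are made up for illustration, and `sc` is an existing SparkContext.)

```scala
// Read every file under a directory as one RDD of lines.
val allLines = sc.textFile("data/people/")

// Or restrict to files matching a glob pattern.
val txtLines = sc.textFile("data/people/part-*.txt")
```

Does each resulting RDD simply concatenate the lines of all matching files?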

Could you please share your thoughts on this?

It would be great if you could share any documentation that covers
these topics.

Thanks
