Hi Michael,

Thanks for the clarification. My question is about the error I reported
("error: class $iwC needs to be abstract") and about what the RDD[Person]
brings, since I can run the DSL without the
"people: org.apache.spark.rdd.RDD[Person]" line.
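For reference, here is the full definition from the first example, which does work for me. This is only a sketch of my understanding: it assumes the Spark shell's predefined `sc`, and the guide's case class (I believe it is `case class Person(name: String, age: Int)`). As far as I can tell, the REPL error comes from declaring a `val` with a type but no initializer, which makes it an abstract member of the REPL's wrapper class:

```scala
import org.apache.spark.rdd.RDD

// Case class from the first example in the guide (assumed definition).
case class Person(name: String, age: Int)

// Fully defining `people`, rather than only declaring its type,
// avoids the "class $iwC needs to be abstract" error: a `val` with
// no initializer is an abstract member of the REPL's wrapper class.
val people: RDD[Person] =
  sc.textFile("examples/src/main/resources/people.txt")
    .map(_.split(","))
    .map(p => Person(p(0), p(1).trim.toInt))
```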

Thanks,


On Mon, Mar 31, 2014 at 9:13 AM, Michael Armbrust <mich...@databricks.com> wrote:

> "val people: RDD[Person] // An RDD of case class objects, from the first
> example." is just a placeholder to avoid cluttering up each example with
> the same code for creating an RDD.  The ": RDD[Person]" is just there to
> let you know the expected type of the variable 'people'.  Perhaps there is
> a clearer way to indicate this.
>
> As you have realized, using the full line from the first example will
> allow you to run the rest of them.
>
>
>
> On Sun, Mar 30, 2014 at 7:31 AM, Manoj Samel <manojsamelt...@gmail.com> wrote:
>
>> Hi,
>>
>> On
>> http://people.apache.org/~pwendell/catalyst-docs/sql-programming-guide.html,
>> I am trying to run the code under "Writing Language-Integrated Relational
>> Queries" (I have the 1.0.0 snapshot).
>>
>> I am running into an error on
>>
>> val people: RDD[Person] // An RDD of case class objects, from the first
>> example.
>>
>> scala> val people: RDD[Person]
>> <console>:19: error: not found: type RDD
>>        val people: RDD[Person]
>>                    ^
>>
>> scala> val people: org.apache.spark.rdd.RDD[Person]
>> <console>:18: error: class $iwC needs to be abstract, since value people
>> is not defined
>> class $iwC extends Serializable {
>>       ^
>>
>> Any idea what the issue is?
>>
>> Also, it's not clear what the RDD[Person] brings. I can run the DSL
>> without the RDD of case class objects ...
>>
>> val people =
>> sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p
>> => Person(p(0), p(1).trim.toInt))
>>
>> val teenagers = people.where('age >= 13).where('age <= 19)
>>
>> Thanks,
>>
>>
>>
>>
>
