That's really odd. I copied that code directly out of the shell and it
errored out on me, several times. I wonder if something I did previously
caused some instability. I'll see if it happens again tomorrow.
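
In the meantime, for reference, the mapping I'm ultimately after looks roughly like the sketch below (a minimal example, not the exact code I ran: Test2 is a hypothetical stand-in for my real case class, and I'm relying on the sqlContext.implicits._ import the shell provides):

case class Test2(a: Int, b: Int)

// Map each tuple into the case class explicitly...
Seq((1, 2), (3, 4)).toDS.map { case (a, b) => Test2(a, b) }.show

// ...or name the tuple columns (_1, _2 by default) and re-type the
// Dataset by field name instead of mapping row by row.
Seq((1, 2), (3, 4)).toDF("a", "b").as[Test2].show

If the shell itself is to blame, I'll also try putting the case class and the map in a single :paste block, in case line-by-line compilation in the REPL is confusing the encoder (just a guess on my part).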

On Tue, May 31, 2016, 8:37 PM Ted Yu <yuzhih...@gmail.com> wrote:

> Using the 1.6.1 spark-shell:
>
> scala> case class Test(a: Int)
> defined class Test
>
> scala> Seq(1,2).toDS.map(t => Test(t)).show
> +---+
> |  a|
> +---+
> |  1|
> |  2|
> +---+
>
> FYI
>
> On Tue, May 31, 2016 at 7:35 PM, Tim Gautier <tim.gaut...@gmail.com> wrote:
>
>> 1.6.1. The exception is a NullPointerException. I'll paste the whole
>> thing after I fire my cluster up again tomorrow.
>>
>> I take it from the responses that this is supposed to work?
>>
>> Anyone know when the next version is coming out? I keep running into bugs
>> with 1.6.1 that are hindering my progress.
>>
>> On Tue, May 31, 2016, 8:21 PM Saisai Shao <sai.sai.s...@gmail.com> wrote:
>>
>>> It works fine in my local test. I'm using the latest master, so maybe
>>> this bug has already been fixed.
>>>
>>> On Wed, Jun 1, 2016 at 7:29 AM, Michael Armbrust <mich...@databricks.com> wrote:
>>>
>>>> Version of Spark? What is the exception?
>>>>
>>>> On Tue, May 31, 2016 at 4:17 PM, Tim Gautier <tim.gaut...@gmail.com> wrote:
>>>>
>>>>> How should I go about mapping from, say, a Dataset[(Int,Int)] to a
>>>>> Dataset[<case class here>]?
>>>>>
>>>>> I tried to use a map, but it throws exceptions:
>>>>>
>>>>> case class Test(a: Int)
>>>>> Seq(1,2).toDS.map(t => Test(t)).show
>>>>>
>>>>> Thanks,
>>>>> Tim
>>>>>
>>>>
>>>>
>>>
>
