Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-26 Thread Jakob Odersky
…5, 2016 at 11:37 PM, Sun, Rui <rui@intel.com> wrote: Vote for option 2. Source compatibility and binary compatibility are very important from the user's perspective. It's unfair for Java developers that they don't have DataFrame a…

Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-26 Thread Reynold Xin
But obviously Dataset[Row] is not internally Dataset[Row(value: Row)]. From: Reynold Xin [mailto:r...@databricks.com] Sent: Friday, February 26, 2016 3:55 PM To: Sun, Rui <rui@intel.com> Cc: Koert Kuipers <ko...@tresata…

RE: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-26 Thread Sun, Rui
…internally Dataset[Row(value: Row)]. From: Reynold Xin [mailto:r...@databricks.com] Sent: Friday, February 26, 2016 3:55 PM To: Sun, Rui <rui@intel.com> Cc: Koert Kuipers <ko...@tresata.com>; dev@spark.apache.org Subject: Re: [discuss] DataFrame vs Dataset in Spark 2.0 The join and joinWith…
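The preview above breaks off at "The join and joinWith". The distinction under discussion can be sketched with a toy model; this is not real Spark code — `Row` and `Dataset` below are simplified stand-ins, and the condition-free cartesian join is a deliberate simplification. The point is only the return types: an untyped join yields Rows (a DataFrame), while joinWith keeps both element types.

```scala
// Toy sketch (not real Spark) of join vs joinWith return types.
final case class Row(values: Seq[Any])

final class Dataset[T](val data: Seq[T]) {
  // Untyped join: results are flattened into Rows, i.e. a DataFrame.
  def join[U](other: Dataset[U]): Dataset[Row] =
    new Dataset(for (a <- data; b <- other.data) yield Row(Seq(a, b)))
  // Typed join: results keep both element types as a pair.
  def joinWith[U](other: Dataset[U]): Dataset[(T, U)] =
    new Dataset(for (a <- data; b <- other.data) yield (a, b))
}

val left  = new Dataset(Seq(1, 2))
val right = new Dataset(Seq("x"))
val untyped: Dataset[Row]           = left.join(right)     // DataFrame-like
val typed:   Dataset[(Int, String)] = left.joinWith(right) // stays typed
```

If DataFrame becomes Dataset[Row], both methods live naturally on one class, which is part of what the merge buys.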

Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Reynold Xin
…DataFrame of Row? From: Reynold Xin [mailto:r...@databricks.com] Sent: Friday, February 26, 2016 8:52 AM To: Koert Kuipers <ko...@tresata.com> Cc: dev@spark.apache.org Subject: Re: [discuss] DataFrame vs Dataset in Spark 2.0…

RE: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Sun, Rui
...@databricks.com] Sent: Friday, February 26, 2016 8:52 AM To: Koert Kuipers <ko...@tresata.com> Cc: dev@spark.apache.org Subject: Re: [discuss] DataFrame vs Dataset in Spark 2.0 Yes - and that's why source compatibility is broken. Note that it is not just a "convenience" thing. Concep…

Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Reynold Xin
Yes - and that's why source compatibility is broken. Note that it is not just a "convenience" thing. Conceptually DataFrame is a Dataset[Row], and for some developers it is more natural to think about "DataFrame" rather than "Dataset[Row]". If we were in C++, DataFrame would've been a type alias…
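The type-alias idea above (Option 1) can be sketched with a toy model. This is not real Spark code: `Row`, `Dataset`, and the `sql` object are simplified stand-ins assumed for illustration. The alias makes every DataFrame literally a Dataset[Row] at compile time, with no separate class.

```scala
// Toy model (not real Spark) of Option 1: DataFrame as a type alias.
final case class Row(values: Seq[Any])

final class Dataset[T](val data: Seq[T]) {
  def map[U](f: T => U): Dataset[U] = new Dataset(data.map(f))
  def collect(): Seq[T] = data
}

object sql {
  // In Scala, DataFrame ceases to be a separate class and becomes a
  // compile-time alias. Java has no type aliases, so Java users would
  // have to write Dataset<Row> directly.
  type DataFrame = Dataset[Row]
}

import sql.DataFrame

val df: DataFrame = new Dataset(Seq(Row(Seq(1, "a")), Row(Seq(2, "b"))))
// A DataFrame *is* a Dataset[Row]; no conversion is needed either way.
val ds: Dataset[Row] = df
println(ds.collect().size) // prints 2
```

Because the alias exists only in the Scala compiler, Java code sees just the one Dataset class — which is exactly the asymmetry debated in this thread.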

Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Koert Kuipers
Since a type alias is purely a convenience thing for the Scala compiler, does option 1 mean that the concept of DataFrame ceases to exist from a Java perspective, and they will have to refer to Dataset? On Thu, Feb 25, 2016 at 6:23 PM, Reynold Xin wrote: When we first…

Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Reynold Xin
…Sent: Thursday, February 25, 2016 4:23 PM Subject: [discuss] DataFrame vs Dataset in Spark 2.0 When we first introduced Dataset in 1.6 as an experimental API, we wanted to merge Dataset/DataFrame but couldn't because we didn't want to break the pre-existing DataFrame API (e.g…

Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Michael Malak
….apache.org> Sent: Thursday, February 25, 2016 4:23 PM Subject: [discuss] DataFrame vs Dataset in Spark 2.0 When we first introduced Dataset in 1.6 as an experimental API, we wanted to merge Dataset/DataFrame but couldn't because we didn't want to break the pre-existing DataFrame API (e.g…

Re: [discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Chester Chen
Vote for Option 1. 1) Since 2.0 is a major release, we are expecting some API changes; 2) it helps long-term code base maintenance, with short-term pain on the Java side; 3) not quite sure how large the code base using the Java DataFrame APIs is. On Thu, Feb 25, 2016 at 3:23 PM, Reynold Xin…

[discuss] DataFrame vs Dataset in Spark 2.0

2016-02-25 Thread Reynold Xin
When we first introduced Dataset in 1.6 as an experimental API, we wanted to merge Dataset/DataFrame but couldn't because we didn't want to break the pre-existing DataFrame API (e.g. map function should return Dataset, rather than RDD). In Spark 2.0, one of the main API changes is to merge…
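The compatibility problem named above (map returning RDD vs Dataset) can be sketched with a toy model; this is not real Spark code, and `RDD` and `Dataset` below are simplified stand-ins. The point is the 2.0-style signature: after the merge, map on a Dataset stays within Dataset rather than dropping down to RDD.

```scala
// Toy sketch (not real Spark) of the signature change the merge implies.
// In Spark 1.x, DataFrame.map returned an RDD; after merging DataFrame
// into Dataset in 2.0, map returns a Dataset.
final class RDD[T](val data: Seq[T])

final class Dataset[T](val data: Seq[T]) {
  // 2.0-style: map stays in the Dataset world, so downstream operations
  // keep working against the same (optimizable) abstraction.
  def map[U](f: T => U): Dataset[U] = new Dataset(data.map(f))
}

val ds = new Dataset(Seq(1, 2, 3))
val doubled: Dataset[Int] = ds.map(_ * 2) // a Dataset, not an RDD
println(doubled.data) // List(2, 4, 6)
```

Changing that return type is precisely what breaks source compatibility for existing 1.x DataFrame code, which is why the thread weighs the alias (Option 1) against a separate class (Option 2).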