[GitHub] spark pull request #13545: [SPARK-15807][SQL] Support varargs for distinct/d...

2016-06-07 Thread rxin
Github user rxin commented on a diff in the pull request:

https://github.com/apache/spark/pull/13545#discussion_r66181659
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2262,6 +2275,19 @@ class Dataset[T] private[sql](
   def distinct(): Dataset[T] = dropDuplicates()
 
   /**
+   * Returns a new [[Dataset]] that contains only the unique rows from this [[Dataset]], considering
+   * only the subset of columns. This is an alias for `dropDuplicates(cols)`.
+   *
+   * Note that, equality checking is performed directly on the encoded representation of the data
+   * and thus is not affected by a custom `equals` function defined on `T`.
+   *
+   * @group typedrel
+   * @since 2.0.0
+   */
+  @scala.annotation.varargs
+  def distinct(cols: String*): Dataset[T] = dropDuplicates(cols)
--- End diff --

let's not have this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #13545: [SPARK-15807][SQL] Support varargs for distinct/d...

2016-06-07 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/13545#discussion_r66156310
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2262,6 +2275,19 @@ class Dataset[T] private[sql](
   def distinct(): Dataset[T] = dropDuplicates()
 
   /**
+   * Returns a new [[Dataset]] that contains only the unique rows from this [[Dataset]], considering
+   * only the subset of columns. This is an alias for `dropDuplicates(cols)`.
+   *
+   * Note that, equality checking is performed directly on the encoded representation of the data
+   * and thus is not affected by a custom `equals` function defined on `T`.
+   *
+   * @group typedrel
+   * @since 2.0.0
+   */
+  @scala.annotation.varargs
+  def distinct(cols: String*): Dataset[T] = dropDuplicates(cols)
--- End diff --

In addition, `distinct` in the `dplyr` R package works in the same manner.





[GitHub] spark pull request #13545: [SPARK-15807][SQL] Support varargs for distinct/d...

2016-06-07 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/13545#discussion_r66152341
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2262,6 +2275,19 @@ class Dataset[T] private[sql](
   def distinct(): Dataset[T] = dropDuplicates()
 
   /**
+   * Returns a new [[Dataset]] that contains only the unique rows from this [[Dataset]], considering
+   * only the subset of columns. This is an alias for `dropDuplicates(cols)`.
+   *
+   * Note that, equality checking is performed directly on the encoded representation of the data
+   * and thus is not affected by a custom `equals` function defined on `T`.
+   *
+   * @group typedrel
+   * @since 2.0.0
+   */
+  @scala.annotation.varargs
+  def distinct(cols: String*): Dataset[T] = dropDuplicates(cols)
--- End diff --

Thank you as always for the fast feedback, @rxin. And for the nice lunch. :)

Yes, right. Maybe this isn't strictly needed, because `distinct` is usually used together with `select`.
Also, we can always call `dropDuplicates` directly, since `distinct(cols)` is just an alias for it.

I think `distinct` is a function name that is more consistent with SQL. If we add this, we can also write:
```
ds.select("_1", "_2", "_3").distinct("_1").orderBy("_1", "_2").show()
```





[GitHub] spark pull request #13545: [SPARK-15807][SQL] Support varargs for distinct/d...

2016-06-07 Thread rxin
Github user rxin commented on a diff in the pull request:

https://github.com/apache/spark/pull/13545#discussion_r66135714
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2262,6 +2275,19 @@ class Dataset[T] private[sql](
   def distinct(): Dataset[T] = dropDuplicates()
 
   /**
+   * Returns a new [[Dataset]] that contains only the unique rows from this [[Dataset]], considering
+   * only the subset of columns. This is an alias for `dropDuplicates(cols)`.
+   *
+   * Note that, equality checking is performed directly on the encoded representation of the data
+   * and thus is not affected by a custom `equals` function defined on `T`.
+   *
+   * @group typedrel
+   * @since 2.0.0
+   */
+  @scala.annotation.varargs
+  def distinct(cols: String*): Dataset[T] = dropDuplicates(cols)
--- End diff --

why do we want this?






[GitHub] spark pull request #13545: [SPARK-15807][SQL] Support varargs for distinct/d...

2016-06-07 Thread dongjoon-hyun
GitHub user dongjoon-hyun opened a pull request:

https://github.com/apache/spark/pull/13545

[SPARK-15807][SQL] Support varargs for distinct/dropDuplicates in 
Dataset/DataFrame

## What changes were proposed in this pull request?
This PR adds varargs-typed `distinct`/`dropDuplicates` functions to `Dataset`/`DataFrame`. Currently, `distinct` does not take arguments, and `dropDuplicates` supports only `Seq` or `Array`.

**Before**
```scala
scala> val ds = spark.createDataFrame(Seq(("a", 1), ("b", 2), ("a", 2)))
ds: org.apache.spark.sql.DataFrame = [_1: string, _2: int]

scala> ds.dropDuplicates(Seq("_1", "_2"))
res0: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [_1: string, _2: int]

scala> ds.dropDuplicates("_1", "_2")
<console>:26: error: overloaded method value dropDuplicates with alternatives:
  (colNames: Array[String])org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] <and>
  (colNames: Seq[String])org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] <and>
  ()org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]
 cannot be applied to (String, String)
       ds.dropDuplicates("_1", "_2")
                         ^

scala> ds.distinct("_1", "_2")
<console>:26: error: too many arguments for method distinct: ()org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]
       ds.distinct("_1", "_2")
```

**After**
```scala
scala> val ds = spark.createDataFrame(Seq(("a", 1), ("b", 2), ("a", 2)))
ds: org.apache.spark.sql.DataFrame = [_1: string, _2: int]

scala> ds.dropDuplicates("_1", "_2")
res0: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [_1: string, _2: int]

scala> ds.distinct("_1", "_2")
res1: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [_1: string, _2: int]
```
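The delegation pattern at the heart of the patch can be sketched without Spark. The `Table` class below is a hypothetical stand-in, not Spark code; it assumes a first-column-plus-rest varargs signature, which is one common way to keep a `String*` overload distinguishable from a `Seq[String]` overload (since `String*` erases to `Seq[String]`):

```scala
// Hypothetical plain-Scala sketch (no Spark dependency) of the varargs-delegation
// pattern: a String-varargs overload that forwards to the Seq[String] version.
case class Table(rows: Seq[Map[String, Any]]) {
  // Keep the first row seen for each distinct combination of values in `cols`.
  def dropDuplicates(cols: Seq[String]): Table = {
    val seen = scala.collection.mutable.Set.empty[Seq[Any]]
    Table(rows.filter(r => seen.add(cols.map(r(_)))))
  }

  // Varargs form: a mandatory first column avoids ambiguity with the Seq overload.
  @scala.annotation.varargs
  def dropDuplicates(col1: String, cols: String*): Table =
    dropDuplicates(col1 +: cols)
}

val t = Table(Seq(
  Map("_1" -> "a", "_2" -> 1),
  Map("_1" -> "b", "_2" -> 2),
  Map("_1" -> "a", "_2" -> 2)))

assert(t.dropDuplicates("_1").rows.size == 2)        // dedup on _1 keeps "a" once
assert(t.dropDuplicates("_1", "_2").rows.size == 3)  // all (_1, _2) pairs distinct
assert(t.dropDuplicates(Seq("_1")).rows.size == 2)   // Seq overload still resolves
```

The `@scala.annotation.varargs` annotation additionally generates a Java-friendly `String...` bridge method, which matters for an API like `Dataset` that is called from Java.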

## How was this patch tested?

Pass the Jenkins tests with the new test cases.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dongjoon-hyun/spark SPARK-15807

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/13545.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #13545


commit 33f446f4bb04e2ea0014c385b6f0d1b290db5a90
Author: Dongjoon Hyun 
Date:   2016-06-07T18:34:24Z

[SPARK-15807][SQL] Support varargs for distinct/dropDuplicates



