[GitHub] spark pull request #21759: sfas

2018-07-13 Thread marymwu
GitHub user marymwu closed the pull request at:

https://github.com/apache/spark/pull/21759


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #21759: sfas

2018-07-13 Thread marymwu
GitHub user marymwu opened a pull request:

https://github.com/apache/spark/pull/21759

sfas

## What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

## How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

Please review http://spark.apache.org/contributing.html before opening a pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/marymwu/spark master
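The `git pull` above merges the contributor's branch directly into your checkout. For a read-only local review, GitHub also exposes every pull request at the ref `pull/<ID>/head`. A sketch of that workflow follows; the branch name `pr-21759` and the throwaway repositories are illustrative, not part of the PR template:

```shell
# Real-world form (needs network access):
#   git fetch https://github.com/apache/spark.git pull/21759/head:pr-21759
#   git checkout pr-21759
#
# The same fetch-a-pull-ref mechanism, demonstrated offline against a
# throwaway local repository that simulates GitHub's refs/pull/<ID>/head:
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "pr commit"
git -C "$tmp/origin" update-ref refs/pull/1/head HEAD  # simulate the PR ref
git init -q "$tmp/review"
git -C "$tmp/review" fetch -q "$tmp/origin" pull/1/head:pr-1
git -C "$tmp/review" log -1 --format=%s pr-1
```

Fetching into `pr-1` gives a local branch you can diff, build, and test without touching your own `master`.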

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/21759.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #21759
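As a sketch of the closing convention described above (the commit subject "Cleanup" and the temporary repository are invented placeholders), the closing line typically goes in the commit message body as its own paragraph:

```shell
# Create a commit whose message body carries the "This closes #NNNNN" line;
# each -m flag becomes a separate paragraph of the message.
set -e
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "Cleanup" -m "This closes #21759"
git -C "$repo" log -1 --format=%B
```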


commit dcf36ad54598118408c1425e81aa6552f42328c8
Author: Dongjoon Hyun 
Date:   2016-05-03T13:02:04Z

[SPARK-15057][GRAPHX] Remove stale TODO comment for making `enum` in GraphGenerators

This PR removes a stale TODO comment in `GraphGenerators.scala`

Just comment removed.

Author: Dongjoon Hyun 

Closes #12839 from dongjoon-hyun/SPARK-15057.

(cherry picked from commit 46965cd014fd4ba68bdec15156ec9bcc27d9b217)
Signed-off-by: Reynold Xin 

commit 1dc30f189ac30f070068ca5f60b7b4c85f2adc9e
Author: Bryan Cutler 
Date:   2016-05-19T02:48:36Z

[DOC][MINOR] ml.feature Scala and Python API sync

I reviewed Scala and Python APIs for ml.feature and corrected discrepancies.

Built docs locally, ran style checks

Author: Bryan Cutler 

Closes #13159 from BryanCutler/ml.feature-api-sync.

(cherry picked from commit b1bc5ebdd52ed12aea3fdc7b8f2fa2d00ea09c6b)
Signed-off-by: Reynold Xin 

commit 642f00980f1de13a0f6d1dc8bc7ed5b0547f3a9d
Author: Zheng RuiFeng 
Date:   2016-05-15T14:59:49Z

[MINOR] Fix Typos

1. Rename matrix args in BreezeUtil to uppercase to match the doc
2. Fix several typos in ML and SQL

manual tests

Author: Zheng RuiFeng 

Closes #13078 from zhengruifeng/fix_ann.

(cherry picked from commit c7efc56c7b6fc99c005b35c335716ff676856c6c)
Signed-off-by: Reynold Xin 

commit 2126fb0c2b2bb8ac4c5338df15182fcf8713fb2f
Author: Sandeep Singh 
Date:   2016-05-19T09:44:26Z

[CORE][MINOR] Remove redundant set master in OutputCommitCoordinatorIntegrationSuite

Remove redundant set master in OutputCommitCoordinatorIntegrationSuite, as we are already setting it in SparkContext below on line 43.

existing tests

Author: Sandeep Singh 

Closes #13168 from techaddict/minor-1.

(cherry picked from commit 3facca5152e685d9c7da96bff5102169740a4a06)
Signed-off-by: Reynold Xin 

commit 1fc0f95eb8abbb9cc8ede2139670e493e6939317
Author: Andrew Or 
Date:   2016-05-20T05:40:03Z

[HOTFIX] Test compilation error from 52b967f

commit dd0c7fb39cac44e8f0d73f9884fd1582c25e9cf4
Author: Reynold Xin 
Date:   2016-05-20T05:46:08Z

Revert "[HOTFIX] Test compilation error from 52b967f"

This reverts commit 1fc0f95eb8abbb9cc8ede2139670e493e6939317.

commit f8d0177c31d43eab59a7535945f3dfa24e906273
Author: Davies Liu 
Date:   2016-05-18T23:02:52Z

Revert "[SPARK-15392][SQL] fix default value of size estimation of logical plan"

This reverts commit fc29b896dae08b957ed15fa681b46162600a4050.

(cherry picked from commit 84b23453ddb0a97e3d81306de0a5dcb64f88bdd0)
Signed-off-by: Reynold Xin 

commit 2ef645724a7f229309a87c5053b0fbdf45d06f52
Author: Takuya UESHIN 
Date:   2016-05-20T05:55:44Z

[SPARK-15313][SQL] EmbedSerializerInFilter rule should keep exprIds of output of surrounded SerializeFromObject.

## What changes were proposed in this pull request?

The following code:

```
val ds = Seq(("a", 1), ("b", 2), ("c", 3)).toDS()
ds.filter(_._1 == "b").select(expr("_1").as[String]).foreach(println(_))
```

throws an Exception:

```
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: _1#420
 at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:50)
 at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
 at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)

...
 Cause: java.lang.RuntimeException: Couldn't find _1#420 in [_1#416,_2#417]
 at scala.sys.package$.error(package.scala:27)
 at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94)
 at