Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12850#discussion_r65272379
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -742,6 +742,23 @@ object
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12850
Hi, @cloud-fan .
Now I have added a new rule, `ReorderAssociativeOperator`, as you recommended.
The JIRA issue and PR description have been updated as well.
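The idea behind such a rule can be sketched in plain Scala. This is a hypothetical, heavily simplified model (the expression types and the `reorderAdd` helper are illustrative, not Catalyst's actual classes): flatten a tree of associative additions, fold the literal leaves into a single constant, and rebuild the expression.

```scala
sealed trait Expr
case class Literal(value: Int) extends Expr
case class Attribute(name: String) extends Expr
case class Add(left: Expr, right: Expr) extends Expr

// Flatten nested Add nodes into their leaf operands.
def flattenAdd(e: Expr): Seq[Expr] = e match {
  case Add(l, r) => flattenAdd(l) ++ flattenAdd(r)
  case other     => Seq(other)
}

// Fold all literal leaves into one constant; keep the other operands as-is.
def reorderAdd(e: Expr): Expr = {
  val (lits, others) = flattenAdd(e).partition(_.isInstanceOf[Literal])
  if (lits.size > 1) {
    val folded: Expr = Literal(lits.collect { case Literal(v) => v }.sum)
    (others :+ folded).reduce(Add.apply)
  } else {
    e // at most one constant: nothing to fold
  }
}
```

So `(a + 1) + 2` becomes `a + 3` without changing the result for deterministic operands.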
---
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/13358
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13352#issuecomment-222115643
This removes many deprecation warning messages like the following.
```
/home/jenkins/workspace/SparkPullRequestBuilder/mllib/src/main/scala/org/apache
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13358
[SPARK-15612][SQL] Raise exception if decimal `scale` >= `precision`
## What changes were proposed in this pull request?
Currently, Spark raises exceptions only when decimal `sc
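A minimal sketch of the kind of validation the title describes, assuming the invariant that `scale` must stay below `precision` (the `DecimalSpec` class below is hypothetical, not Spark's `DecimalType`):

```scala
// Hypothetical spec class, not Spark's DecimalType: constructing an instance
// whose scale is not strictly below its precision fails fast.
case class DecimalSpec(precision: Int, scale: Int) {
  require(scale < precision,
    s"Decimal scale ($scale) must be less than precision ($precision)")
}
```

With such a guard, an invalid type like `DecimalSpec(3, 3)` is rejected at construction time instead of failing later.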
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13352#issuecomment-06324
Hi, @andrewor14 .
This addresses the deprecation warnings about `SQLContext`.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13352#issuecomment-16523
Oh, thank you, @andrewor14 . I see.
I will make another PR for using `builder.sparkContext(sc)` pattern.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13349#issuecomment-16623
Thank you again, @andrewor14 !
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13365
[SPARK-15618][SQL][MLLIB] Use SparkSession.builder.sparkContext if
applicable.
## What changes were proposed in this pull request?
This PR changes function
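The `builder.sparkContext(sc)` pattern referenced here boils down to a builder that reuses a supplied context instead of constructing a fresh one. A hypothetical, self-contained sketch of that shape (the `Context` and `SessionBuilder` names are illustrative, not Spark's API):

```scala
// Hypothetical sketch of the pattern: reuse a supplied context, else create one.
class Context(val appName: String)

class SessionBuilder {
  private var existing: Option[Context] = None
  private var name: String = "default"

  // Register an already-running context to be reused.
  def sparkContext(ctx: Context): SessionBuilder = { existing = Some(ctx); this }
  def appName(n: String): SessionBuilder = { name = n; this }

  // Prefer the supplied context over building a new one.
  def getOrCreate(): Context = existing.getOrElse(new Context(name))
}
```

The point of the pattern is that test suites and helpers sharing one context never accidentally spin up a second one.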
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13365#issuecomment-75399
Hi, @andrewor14 .
Could you review this?
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13365
Thank you, @andrewor14 .
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13420
Thank you, @srowen and @andrewor14 !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13420#discussion_r65286389
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -93,7 +93,7 @@ private[sql] object Dataset {
* to some files
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13449
Thank you, @srowen .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
Oh, thank you! @cloud-fan .
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13464#discussion_r65584402
--- Diff: dev/checkstyle.xml ---
@@ -157,7 +157,8
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13349#issuecomment-222054344
Hi, @andrewor14 .
This is the PR for SPARK-15583.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13349
[SPARK-15584][SQL] Abstract duplicate code: `spark.sql.sources.` properties
## What changes were proposed in this pull request?
This PR replaces `spark.sql.sources.` strings
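Abstracting a repeated string prefix typically means centralizing it behind named constants. A hedged sketch of that shape (the object and constant names below are illustrative, not necessarily the ones this PR introduced):

```scala
// Illustrative names only; the PR's actual constants may differ.
object DataSourceKeys {
  val Prefix = "spark.sql.sources."
  val Provider = Prefix + "provider"
  val SchemaNumParts = Prefix + "schema.numParts"
}
```

Callers then reference `DataSourceKeys.Provider` instead of scattering the raw string literal, so a typo becomes a compile error rather than a silent mismatch.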
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13349#issuecomment-222059440
Thank YOU for pinging me, @andrewor14 .
For me, finding the real issue is the hardest part. :)
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13341#discussion_r64847876
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -255,6 +255,23 @@ case class
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13352
[SPARK-15603][MLLIB] Replace SQLContext with SparkSession in ML/MLLib
## What changes were proposed in this pull request?
This PR replaces all deprecated `SQLContext` occurrences
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
Hi, @cloud-fan .
It's ready for review again.
Could you review this when you have some time?
Thank you always!
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12850#discussion_r65464887
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ReorderAssociativeOperatorSuite.scala
---
@@ -0,0 +1,59
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12850#discussion_r65465190
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -738,6 +739,49 @@ object
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
@cloud-fan .
Following your advice, I refactored the code and added mixed
(addition + multiplication) test cases. The PR description has been updated as well.
Thank you so much again
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13449
[SPARK-15709][SQL] Prevent `freqItems` from raising
`UnsupportedOperationException: empty.min`
## What changes were proposed in this pull request?
Currently, `freqItems` raises
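The error in the title comes from calling `min` on an empty Scala collection, which throws `UnsupportedOperationException: empty.min`. A defensive guard of the kind such a fix needs, as an illustrative sketch (the `safeMin` helper is hypothetical, not the actual patch):

```scala
// Calling `min` on an empty collection throws UnsupportedOperationException;
// returning an Option makes the empty case explicit (helper name hypothetical).
def safeMin(counts: Seq[Long]): Option[Long] =
  if (counts.isEmpty) None else Some(counts.min)
```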
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13449
Thank you for the review, @srowen . :)
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13436
@srowen .
Could you review this PR, too?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
Thank you for the feedback. I really appreciate your attention!
For the non-deterministic part, we can add a single condition in
`isAssociativelyFoldable`.
If some of operand
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
Thank you for the in-depth discussion on this. Here is my thinking.
For 1), there are **machine-generated** queries by BI tools. This is an
important category of queries. In many cases, BIs
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
Hi, @cloud-fan and @davies .
What do you think about the above?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
I added the missing piece, the `e.deterministic` check in
`isAssociativelyFoldable`.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Rebased.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12850
Thank you for reconsidering this PR. I'll update it soon according to your advice.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12850#discussion_r65465517
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -738,6 +739,49 @@ object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12850#discussion_r65485176
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -738,6 +739,42 @@ object
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13436
[SPARK-15696][SQL] Improve `crosstab` to have a consistent column order
## What changes were proposed in this pull request?
Currently, `crosstab` returns a Dataframe having **random
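A deterministic column order can be obtained by sorting the distinct pivot values instead of relying on hash-map iteration order. A simplified, illustrative sketch of a crosstab with that property (not Spark's implementation):

```scala
// Deterministic pivot: sort the distinct column values instead of relying on
// hash-map iteration order. Simplified sketch, not Spark's crosstab.
def crosstab(pairs: Seq[(String, String)]): (Seq[String], Map[String, Seq[Long]]) = {
  val cols = pairs.map(_._2).distinct.sorted // stable column order
  val rows = pairs.groupBy(_._1).map { case (row, ps) =>
    row -> cols.map(c => ps.count(_._2 == c).toLong)
  }
  (cols, rows)
}
```

Because `cols` is sorted, two runs over the same data always produce the same column layout.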
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13403
[SPARK-15660][CORE] RDD and Dataset should show the consistent values for
variance/stdev.
## What changes were proposed in this pull request?
In Spark-11490, `variance/stdev
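Such RDD-vs-Dataset discrepancies usually trace back to the two definitions of variance: sample variance divides by n − 1, while population variance divides by n, and stdev is the square root of each. Both side by side, as a self-contained sketch:

```scala
// Population variance divides by n; sample variance by n - 1.
def popVariance(xs: Seq[Double]): Double = {
  val mean = xs.sum / xs.size
  xs.map(x => (x - mean) * (x - mean)).sum / xs.size
}

def sampleVariance(xs: Seq[Double]): Double = {
  val mean = xs.sum / xs.size
  xs.map(x => (x - mean) * (x - mean)).sum / (xs.size - 1)
}

def popStdev(xs: Seq[Double]): Double = math.sqrt(popVariance(xs))
```

For `Seq(1.0, 2.0, 3.0, 4.0)` the population variance is 1.25 while the sample variance is 5/3, which is exactly the kind of gap the PR title is about.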
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13403
Thank you for the review, @srowen .
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13403#issuecomment-222648543
Hi, @rxin .
I updated the example to be more practical by using
**SparkSession.createDataset().rdd.stdev**.
If we must preserve the current behavior
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12850
Hi, @cloud-fan .
Could you review again?
Now, this PR provides a more generalized way to handle all foldable
constants.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13365#issuecomment-72318
This time, the Scala 2.10 build was also tested locally.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Thank you, @srowen .
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13832#discussion_r68013403
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/OffHeapColumnVector.java
---
@@ -424,7 +424,9 @@ public void loadBytes
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r68007866
--- Diff: core/src/main/scala/org/apache/spark/rdd/DoubleRDDFunctions.scala
---
@@ -74,6 +74,20 @@ class DoubleRDDFunctions(self: RDD[Double]) extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r68008205
--- Diff: core/src/main/scala/org/apache/spark/rdd/DoubleRDDFunctions.scala
---
@@ -74,6 +74,20 @@ class DoubleRDDFunctions(self: RDD[Double]) extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13684
I updated those two items.
Thank you, @shivaram .
Yep, it's @sun-rui's; that would be great.
Hi, @sun-rui .
Could you review this PR?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13684#discussion_r67213090
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1869,6 +1869,7 @@ setMethod("where",
#' path <- "path/to/file.json"
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13684
Now, it's ready for review again, @shivaram .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13684
Oh, thank you, @shivaram ! I merged those functions into one according to
your advice.
In my opinion, there are more functions we can simplify like this.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13674
[MINOR][DOCS][SQL] Fix some comments about types(TypeCoercion,Partition)
and exceptions.
## What changes were proposed in this pull request?
This PR contains a few changes on code
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13674
Thank you for reviewing and merging, @andrewor14 .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13643
Thank you so much, @srowen !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13714#discussion_r67440657
--- Diff: examples/src/main/r/data-manipulation.R ---
@@ -75,8 +75,8 @@ destDF <- select(flightsDF, "dest", "cancelled")
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67444157
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2884,3 +2884,38 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67444126
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2884,3 +2884,38 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13721
It has now passed all tests and is ready for review again.
Could you review this PR, @shivaram ?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13684
If there is something to do more, please let me know.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13721
Thank you, @felixcheung .
For `SPARK-14995`, I'll do that tonight. It looks like a good exercise for me.
Thank you for letting me know.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13721
[SPARK-16005][R] Add `randomSplit` to SparkR
## What changes were proposed in this pull request?
This PR adds `randomSplit` to SparkR for API parity.
## How was this patch
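The semantics of a `randomSplit`-style API: each element lands in bucket i with probability proportional to `weights(i)`, seeded for reproducibility. An illustrative, self-contained Scala sketch of those semantics (not SparkR's implementation):

```scala
import scala.util.Random

// Each element goes to bucket i with probability weights(i) / weights.sum.
// Seeded so the split is reproducible across runs.
def randomSplit[T](xs: Seq[T], weights: Seq[Double], seed: Long): Seq[Seq[T]] = {
  val rng = new Random(seed)
  // Cumulative normalized weights, e.g. Seq(0.5, 0.5) -> Seq(0.5, 1.0).
  val cumulative = weights.map(_ / weights.sum).scanLeft(0.0)(_ + _).tail
  val buckets = Vector.fill(weights.size)(Seq.newBuilder[T])
  xs.foreach { x =>
    val r = rng.nextDouble() // uniform in [0, 1)
    val i = cumulative.indexWhere(r < _)
    buckets(if (i >= 0) i else weights.size - 1) += x
  }
  buckets.map(_.result())
}
```

Every input element ends up in exactly one bucket, and the same seed always reproduces the same split.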
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67443648
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2884,3 +2884,38 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13684
Thank you, @sun-rui !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67443758
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2884,3 +2884,38 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67443819
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2884,3 +2884,38 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67443885
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2884,3 +2884,38 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67445221
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2884,3 +2884,38 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13714
Thank you, @shivaram !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13684#discussion_r67412381
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1949,14 +1950,24 @@ setMethod("where",
#' path <- "path/to/file.json"
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13684#discussion_r67412795
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1949,14 +1950,24 @@ setMethod("where",
#' path <- "path/to/file.json"
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13730#discussion_r67481562
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -351,8 +354,10 @@ private[sql] object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13734#discussion_r67556931
--- Diff: R/pkg/R/SQLContext.R ---
@@ -213,7 +213,7 @@ createDataFrame <- function(x, ...) {
#' @aliases createDataFrame
#' @exp
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13734
Hi, All.
For review, I uploaded the generated R doc here.
https://home.apache.org/~dongjoon/spark-2.0.0-docs/api/R/
The remaining issue is the **multiple** notes like
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13734#discussion_r67557255
--- Diff: R/pkg/R/generics.R ---
@@ -20,157 +20,196 @@
# @rdname aggregateRDD
# @seealso reduce
# @export
+# @note since 1.5.0
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13734
Thank you for the review, @felixcheung !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r67567009
--- Diff: core/src/main/scala/org/apache/spark/util/StatCounter.scala ---
@@ -104,8 +104,11 @@ class StatCounter(values: TraversableOnce[Double
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13721
Hi, @shivaram .
Although it may be too late for this to become part of Spark 2.0.0, could
you review it again?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13774#discussion_r67672158
--- Diff: R/pkg/R/functions.R ---
@@ -911,6 +911,33 @@ setMethod("minute",
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13768
Thank you for the review, @davies !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13774#discussion_r67721810
--- Diff: R/pkg/R/functions.R ---
@@ -911,6 +911,33 @@ setMethod("minute",
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13765
Thank you for the comments!
I see. No problem! :)
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r67903762
--- Diff: core/src/test/java/org/apache/spark/JavaAPISuite.java ---
@@ -733,8 +733,10 @@ public Boolean call(Double x) {
assertEquals(20/6.0
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r67904001
--- Diff: core/src/main/scala/org/apache/spark/rdd/DoubleRDDFunctions.scala
---
@@ -47,12 +47,12 @@ class DoubleRDDFunctions(self: RDD[Double
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r67922737
--- Diff: core/src/test/java/org/apache/spark/JavaAPISuite.java ---
@@ -733,8 +733,10 @@ public Boolean call(Double x) {
assertEquals(20/6.0
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Thank you, @srowen .
I updated the PR according to one of your suggestions.
For the other, I tried the following approach. It looks good, but is a
little bit inconsistent
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Oh, thank you, @mengxr !
I'll update again.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Thank you for reviewing this PR, @mengxr and @srowen !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r67931118
--- Diff: core/src/test/scala/org/apache/spark/PartitioningSuite.scala ---
@@ -244,6 +244,10 @@ class PartitioningSuite extends SparkFunSuite
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Hi, @mengxr .
I updated them to use accurate values and small tolerances, too.
```
-assert(abs(2.0 - rdd.sampleVariance) < 0.01)
-assert(abs(1.41 - rdd.sampleSt
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Hi, @srowen .
Now, I fixed them all. Sorry for missing those.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13403#discussion_r67931444
--- Diff: core/src/test/scala/org/apache/spark/PartitioningSuite.scala ---
@@ -244,6 +244,10 @@ class PartitioningSuite extends SparkFunSuite
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13763
Thank you for the review, @sun-rui .
I fixed all occurrences, replacing `a ORC` with `an ORC`.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13721
Thank you for merging, @shivaram .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13734
Thank you for the opinions! I'll revise and update the HTML doc.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13486#discussion_r67581319
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataFrameReaderWriterSuite.scala
---
@@ -572,4 +572,16 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67583977
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2908,3 +2908,39 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13721#discussion_r67583918
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2908,3 +2908,39 @@ setMethod("write.jdbc",
write <- callJMethod(write,
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13486#discussion_r67578679
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataFrameReaderWriterSuite.scala
---
@@ -572,4 +572,16 @@ class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13721
@shivaram , I added the description. Thank you for the review!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13403
Hi, @rxin and @srowen .
I have now updated this PR as follows:
1. Updated the documentation of the legacy Scala/Java API to be clearer
2. Added `popVariance`/`popStdev` functions
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13763
Hi, @shivaram , @felixcheung , @sun-rui .
Could you review this PR when you have some time?
---