Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15136
**[Test build #65554 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65554/consoleFull)**
for PR 15136 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15136
[SPARK-17581] [SQL] Invalidate Statistics After Some ALTER TABLE Commands
### What changes were proposed in this pull request?
In the recent statistics-related work, our focus is on how to
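The description is truncated above; as a rough sketch of the scenario named in the title (the table name, schema, path, and the specific ALTER TABLE command below are illustrative assumptions, not taken from the PR, and `spark` is assumed to be an active SparkSession):
```
# Illustrative only: table name, schema, and path are assumptions, not from the PR.
spark.sql("CREATE TABLE stats_demo (id INT, val STRING) USING parquet")
spark.sql("INSERT INTO stats_demo VALUES (1, 'a'), (2, 'b')")
spark.sql("ANALYZE TABLE stats_demo COMPUTE STATISTICS")       # collects e.g. a row count of 2
spark.sql("ALTER TABLE stats_demo SET LOCATION '/tmp/other'")  # table contents may now differ
# If the stored statistics are not invalidated here, the optimizer can keep
# planning queries against stats_demo with the stale row count.
```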
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14971
**[Test build #65553 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65553/consoleFull)**
for PR 14971 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14971
retest this please
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15131#discussion_r79298408
--- Diff: R/pkg/R/context.R ---
@@ -225,6 +225,37 @@ setCheckpointDir <- function(sc, dirName) {
invisible(callJMethod(sc, "setCheckpointDir",
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15131#discussion_r79298358
--- Diff: R/pkg/R/context.R ---
@@ -225,6 +225,37 @@ setCheckpointDir <- function(sc, dirName) {
invisible(callJMethod(sc, "setCheckpointDir",
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15131
I just took a look. The problematic code is here,
[SparkContext.scala#L1429](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L1429).
Github user petermaxlee commented on the issue:
https://github.com/apache/spark/pull/15135
Isn't it as simple as
```
# assumes "from pyspark.sql import functions as F" and an existing DataFrame df
cols = [x for x in df.columns if x != "key"]
df.groupby("key").agg(*([F.min(x) for x in cols] + [F.max(x) for x in cols]))
```
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15135
Can one of the admins verify this patch?
GitHub user citoubest opened a pull request:
https://github.com/apache/spark/pull/15135
[pyspark][group] pyspark GroupedData can't apply agg functions to all remaining
numeric columns.
## What changes were proposed in this pull request?
With a PySpark DataFrame, the agg method just
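The description is cut off here, but as background (a minimal sketch, not from the PR: column names and data are made up, and `spark` is assumed to be a SparkSession), GroupedData.agg currently takes either a dict limited to one aggregate function per column, or unpacked Column expressions, which is what the reply above builds:
```
from pyspark.sql import functions as F

# Made-up example data; column names are assumptions, not from the PR.
df = spark.createDataFrame([(1, 10, 0.5), (1, 20, 1.5)], ["key", "v1", "v2"])

# Dict form: only one aggregate function per column.
df.groupBy("key").agg({"v1": "min", "v2": "max"}).show()

# Column-expression form: build aggregates for every non-key column and
# unpack them, since agg takes *exprs rather than a single list.
cols = [c for c in df.columns if c != "key"]
df.groupBy("key").agg(*([F.min(c) for c in cols] + [F.max(c) for c in cols])).show()
```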