Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/23268
@HyukjinKwon I've updated the desc.
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/23268
@HyukjinKwon Please re-review.
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/23268
I've reverted the refactor commit.
I'm wondering if I need to create an issue for a unit-test-only PR.
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/23268
OK, I will modify the PR in a few hours.
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/23268#discussion_r240103941
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveShim.scala
---
@@ -53,19 +53,12 @@ private[hive] object HiveShim {
* This function
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/23268
Never mind; I just do not like the coding style personally.
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/23268#discussion_r240098806
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveShim.scala
---
@@ -53,19 +53,12 @@ private[hive] object HiveShim {
* This function
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/23268#discussion_r240098620
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveShim.scala
---
@@ -53,19 +53,12 @@ private[hive] object HiveShim {
* This function
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/23268
[Hive][Minor] Refactor on HiveShim and Add Unit Tests
## What changes were proposed in this pull request?
Refactor HiveShim and add unit tests.
## How was this patch tested
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22685
OK. Never mind.
Github user sadhen closed the pull request at:
https://github.com/apache/spark/pull/22685
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22685
@HyukjinKwon Sorry. I happened to find that some code was not elegant
for my taste. I had not realized that massive changes are not
good for backporting. As a result, I spent some time
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22685#discussion_r224054521
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -147,7 +147,7 @@ private[sql] object
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22685#discussion_r224054362
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/metric/SQLMetrics.scala
---
@@ -95,7 +95,7 @@ object SQLMetrics {
def
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22685#discussion_r224053894
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -567,10 +567,10 @@ class KeyValueGroupedDataset[K, V] private[sql
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22685#discussion_r224054020
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -188,7 +188,7 @@ private[sql] object SQLUtils extends Logging
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22685#discussion_r224053653
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala
---
@@ -123,7 +123,7 @@ object StaticSQLConf {
val
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22685
@HyukjinKwon OK, most of the changes are related to parens. I will consider
resetting them for backporting friendliness.
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22685
[SQL][MINOR][Refactor] Refactor sql/core
## What changes were proposed in this pull request?
Only minor changes on Scala syntax.
## How was this patch tested?
Existing Tests
You
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22577
@HyukjinKwon Yes, just confirmed.
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22577
@cloud-fan The bug is introduced by #19748
branch-2.2 is fine, but branch-2.3 should be fixed.
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22577
@cloud-fan
see:
https://github.com/scala/scala/blob/2.12.x/src/compiler/scala/tools/nsc/typechecker/RefChecks.scala
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22577
Error Message:
```
$ dev/change-scala-version.sh 2.12
$ sbt -Dscala.version=2.12.7
sbt (spark)> project core
sbt (core)> clean
```
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22577
Dear Maintainers, please backport it to branch-2.4 if it is accepted.
@cloud-fan @srowen
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22577
[CORE][MINOR] Fix obvious error and compiling for Scala 2.12.7
## What changes were proposed in this pull request?
Fix an obvious error.
## How was this patch tested
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22355
@maropu here is a tip to view the codegen:
> spark.sql("explain codegen select 1 + 1").show(false)
You may explain an aggregation, and the NoOp che
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22355#discussion_r216116648
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala
---
@@ -0,0 +1,83
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22355#discussion_r215862272
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala
---
@@ -0,0 +1,83
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22355#discussion_r215859282
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala
---
@@ -0,0 +1,83
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22310
The problem of package hierarchy is fixed.
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22310
@srowen Sorry I should have explained why I made these changes.
The following steps fail to compile:
```
$ ./dev/change-scala-version.sh 2.12
$ ./build/sbt -Dscala-2.12
```
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22308
@srowen The 2.12 jar is compiled and packaged from `Main.scala` and
`MyCoolClass.scala`, not a copy of the 2.10 jar. Diff them to verify it.
The steps to generate it:
```
mvn
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22308
@cloud-fan @srowen
This PR will make our Jenkins build green.
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-ubuntu-scala-2.12
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22308
see https://github.com/apache/spark/pull/12924 and
https://github.com/apache/spark/pull/11744
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22310
@srowen It is about the sbt convention.
see my demo project: https://github.com/sadhen/spark-25298
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22308
Removed the deprecated 2.10 jar and built a 2.12 jar.
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22310
@srowen This is a better solution for #22280:
no hard-coded 2.11 or 2.12 any more.
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22310
[Spark-25298][Build] Improve build definition for Scala 2.12
## What changes were proposed in this pull request?
Improve build for Scala 2.12.
## How was this patch tested
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22308
[SPARK-25304][SQL][TEST] Fix HiveSparkSubmitSuite SPARK-8489 test for Scala
2.12
## What changes were proposed in this pull request?
remove test-2.10.jar and add test-2.12.jar
Github user sadhen closed the pull request at:
https://github.com/apache/spark/pull/22304
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22304
@srowen @cloud-fan The merged #22292 already fixed it.
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22304#discussion_r214342975
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/FileBasedWriteAheadLog.scala
---
@@ -65,7 +65,8 @@ private[streaming] class
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22304
Here is a simplified project for this specific problem:
https://github.com/sadhen/blocking-future
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22304
[SPARK-25297][Streaming][Test] Fix blocking unit tests for Scala 2.12
## What changes were proposed in this pull request?
Customize ExecutionContext's reporter to fix blocking unit test
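The PR description above mentions customizing the ExecutionContext's reporter. As a hedged, hypothetical sketch (the names and values below are illustrative and not taken from the Spark patch), an ExecutionContext with a custom error reporter can be constructed like this:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Build an ExecutionContext whose failure reporter is customized,
// rather than relying on ExecutionContext.defaultReporter.
val pool = Executors.newFixedThreadPool(2)
implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(
  pool,
  (t: Throwable) => Console.err.println(s"reported: ${t.getMessage}"))

// A trivial task to show the context is usable.
val result = Await.result(Future { 21 * 2 }, 5.seconds)
println(result) // prints 42
pool.shutdown()
```

The second argument to `fromExecutorService` is the reporter invoked for failures that would otherwise be swallowed, which is the knob the fix reportedly turns.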
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
@srowen A PR for this "bug" has been proposed:
https://github.com/scala/scala/pull/7156
Hopefully, Scala 2.12.7 w
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
scala/bug#11123 has been added to
https://github.com/scala/bug/milestone/93.
I will spare some time working on it
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
```
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.0-SNAPSHOT
      /_/
```
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
Comment resolved, please review. @srowen @maropu
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22264#discussion_r214069522
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/QueryTest.scala ---
@@ -290,6 +290,16 @@ object QueryTest {
Row.fromSeq(row.toSeq.map
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22280
This PR together with #22264 should make the scala-2.12 jenkins job work.
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-ubuntu-scala
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22280
This PR intends to fix the Maven build error.
I have no idea about
`scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala` .
Maybe we should spare some time
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22264#discussion_r214064282
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/QueryTest.scala ---
@@ -290,6 +290,16 @@ object QueryTest {
Row.fromSeq(row.toSeq.map
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22280
First, `scala-2.12` and `scala-2.11` are conventions from sbt.
`scala-2.11` won't be compiled if we are using 2.12.
Actually, only
`scala-2.11/src/main/scala` is effective
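For context, the sbt convention referred to here is that sources under `src/main/scala-<scala binary version>` are compiled only when building for that version (recent sbt adds the matching directory by default). A hedged sketch of the equivalent sbt 1.x setting, illustrative only and not taken from Spark's actual build definition:

```scala
// Illustrative sbt fragment: include src/main/scala-2.11 only when
// scalaBinaryVersion is 2.11, src/main/scala-2.12 only for 2.12, etc.
Compile / unmanagedSourceDirectories +=
  (Compile / sourceDirectory).value / s"scala-${scalaBinaryVersion.value}"
```

With this convention in place, version-specific shims never need hard-coded references to 2.11 or 2.12 in the shared build files.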
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
The fix works for both 2.11 and 2.12.
And I reported a bug: https://github.com/scala/bug/issues/11123
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22280
[SPARK-25235] [Build] [SHELL] [FOLLOWUP] Fix repl compile for 2.12
## What changes were proposed in this pull request?
Error messages from
https://amplab.cs.berkeley.edu/jenkins/view/Spark
Github user sadhen closed the pull request at:
https://github.com/apache/spark/pull/22264
GitHub user sadhen reopened a pull request:
https://github.com/apache/spark/pull/22264
[SPARK-25256][SQL] Plan mismatch errors in Hive tests in 2.12
## What changes were proposed in this pull request?
### For `SPARK-5775 read array from
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
@srowen please review, and this PR should be rebased on #22260 and then
tested.
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22260
@maropu @srowen please review
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22260#discussion_r213884979
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ProjectionOverSchema.scala
---
@@ -38,7 +38,7 @@ private[execution] case class
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22260#discussion_r213884559
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSparkSubmitSuite.scala ---
@@ -19,7 +19,7 @@ package org.apache.spark.sql.hive
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22260#discussion_r213884075
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSparkSubmitSuite.scala ---
@@ -19,7 +19,7 @@ package org.apache.spark.sql.hive
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
To simplify:
```
Array(Int.box(1)).toSeq == Array(Double.box(1.0)).toSeq
```
is `false` in `2.12.x` and `true` in `2.11.x`.
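The version-to-version flip quoted above boils down to how boxed numerics get compared. A small, self-contained illustration of Scala's cooperative equality (illustrative background only, not code from the PR):

```scala
val a: Any = Int.box(1)      // a boxed java.lang.Integer
val b: Any = Double.box(1.0) // a boxed java.lang.Double

// Scala's == applies cooperative numeric equality to boxed numbers,
// so boxed 1 and boxed 1.0 compare equal...
println(a == b)      // true

// ...while Java's equals checks the runtime class first and disagrees.
println(a.equals(b)) // false
```

Which of these two comparison paths a collection's element equality takes is presumably what changed for `Array(...).toSeq` between 2.11 and 2.12, producing the plan-comparison mismatches this PR works around.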
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/22264
@srowen Still work in progress.
There are three unit test failures in total.
The first one is fixed.
And this is the cause of the second one:
```
Welcome to
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22264
[WIP][SPARK-25044][SQL] Plan mismatch errors in Hive tests in 2.12
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this
Github user sadhen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22260#discussion_r213573236
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ProjectionOverSchema.scala
---
@@ -38,7 +38,7 @@ private[execution] case class
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22260
[MINOR] Fix scala 2.12 build using collect
## What changes were proposed in this pull request?
Introduced by #21320
```
[error] [warn]
spark/sql/core/src/main/scala/org/apache/spark
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/22069
[MINOR][DOC] Fix Java example code in Column's comments
## What changes were proposed in this pull request?
Fix scaladoc in Column
## How was this patch tested?
None
You can
Github user sadhen closed the pull request at:
https://github.com/apache/spark/pull/21166
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/21166
I have checked the latest master code.
```scala
// If the executor is no longer running any scheduled tasks, mark it as idle
if (executorIdToTaskIds.contains
```
GitHub user sadhen reopened a pull request:
https://github.com/apache/spark/pull/21166
[SPARK-11334][CORE] clear idle executors in executorIdToTaskIds keySet
## What changes were proposed in this pull request?
quote from #11205
> Executors may never be i
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/21166
[SPARK-11334][CORE] clear idle executors in executorIdToTaskIds keySet
## What changes were proposed in this pull request?
quote from #11205
> Executors may never be i
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/11205
@jerryshao I think the 2nd bullet has not been fixed in SPARK-13054.
I use Spark 2.1.1, and I still find that finished tasks remain in
`private val executorIdToTaskIds = new
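The lingering-entries behavior described above can be sketched with a toy model of the bookkeeping; the helper names below are hypothetical and this is not Spark's actual `TaskSchedulerImpl` code:

```scala
import scala.collection.mutable

// Toy model: map each executor id to the ids of its running tasks.
val executorIdToTaskIds = mutable.HashMap.empty[String, mutable.HashSet[Long]]

def taskStarted(execId: String, taskId: Long): Unit =
  executorIdToTaskIds.getOrElseUpdate(execId, mutable.HashSet.empty) += taskId

// The fix the PR proposes, in spirit: when an executor's last task
// finishes, drop its entry so the executor shows up as idle.
def taskFinished(execId: String, taskId: Long): Unit =
  executorIdToTaskIds.get(execId).foreach { tasks =>
    tasks -= taskId
    if (tasks.isEmpty) executorIdToTaskIds -= execId
  }

taskStarted("exec-1", 1L)
taskFinished("exec-1", 1L)
println(executorIdToTaskIds.contains("exec-1")) // false: idle executor removed
```

Without the `isEmpty` cleanup, an executor with no running tasks would keep its (empty) entry and never be treated as idle, which is the symptom reported here.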
Github user sadhen closed the pull request at:
https://github.com/apache/spark/pull/21059
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/21059
@jiangxb1987
I re-investigated the logs and found that there must be bugs in the YARN
scheduler backend, and this PR is not the right way to fix the issue
GitHub user sadhen opened a pull request:
https://github.com/apache/spark/pull/21059
fix when numExecutorsTarget equals maxNumExecutors
## What changes were proposed in this pull request?
In dynamic allocation, there are cases that the `numExecutorsTarget` has
reached
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/14638
yes
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/14638
Since this PR won't be accepted, my solution is:
```sql
-- create table using spark-csv
CREATE TABLE tmp.cars
USING com.databricks.spark.csv
OPTIONS (path "/tmp/cars.cs
Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/14638
Also a useful feature for me.