Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19831
Instead of manually setting up table statistics, I'm just trying to
simulate the statistics for these tables in this way.
If `totalSize (or rawDataSize) > 0` and `rowCount = 0`, at least
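The heuristic being described can be sketched as follows (a minimal stand-in; `HiveStats` is a hypothetical holder for the Hive statistics properties, not Spark's actual class):

```scala
// Minimal sketch of the heuristic: when the size statistics are positive
// but rowCount is zero, the zero rowCount is almost certainly bogus.
// HiveStats is a hypothetical stand-in, not Spark's actual class.
case class HiveStats(totalSize: Long, rawDataSize: Long, rowCount: Long)

def rowCountLooksBogus(s: HiveStats): Boolean =
  (s.totalSize > 0 || s.rawDataSize > 0) && s.rowCount == 0

assert(rowCountLooksBogus(HiveStats(totalSize = 1024, rawDataSize = 0, rowCount = 0)))
assert(!rowCountLooksBogus(HiveStats(totalSize = 1024, rawDataSize = 0, rowCount = 10)))
```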
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19831
cc @gatorsmile @cloud-fan
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19858
[SPARK-22489][DOC][FOLLOWUP] Update broadcast behavior changes in migration
section
## What changes were proposed in this pull request?
Update broadcast behavior changes in migration
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19831
Yes, I saw some of these tables in my cluster, but the user did not
manually modify this parameter:
```
# Detailed Table Information
Database    dw
Table
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19831#discussion_r154245570
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -418,7 +418,7 @@ private[hive] class HiveClientImpl
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19831#discussion_r154069687
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -418,7 +418,7 @@ private[hive] class HiveClientImpl
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19831
If CBO is enabled and the [`outputRowCount ==
0`](https://github.com/apache/spark/pull/19831#L67), then
[`getOutputSize`](https://github.com/apache/spark/pull/19831#L60) is 1 and
`sizeInBytes` is 1.
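The effect described here can be sketched with a simplified stand-in for the estimation logic (not Spark's actual implementation):

```scala
// Simplified sketch: with outputRowCount == 0 the size estimate collapses
// to the 1-byte floor, so the relation looks tiny and CBO's join reorder
// can make a badly wrong choice.
def getOutputSize(outputRowCount: BigInt, rowSizeInBytes: Long): BigInt =
  if (outputRowCount > 0) outputRowCount * rowSizeInBytes else 1

assert(getOutputSize(0, 100) == 1)        // bogus stats => sizeInBytes = 1
assert(getOutputSize(1000, 100) == 100000)
```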
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19714#discussion_r153736381
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -153,6 +151,27 @@ abstract class SparkStrategies extends
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19831
cc @wzhfy
---
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19831
[SPARK-22489][SQL] Wrong Hive table statistics may trigger OOM if join
reorder is enabled in CBO
## What changes were proposed in this pull request?
How to reproduce:
```bash
bin
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19714
retest this please
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19714#discussion_r153401195
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -153,6 +152,27 @@ abstract class SparkStrategies extends
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19714#discussion_r153401036
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/joins/BroadcastJoinSuite.scala
---
@@ -223,4 +223,36 @@ class BroadcastJoinSuite extends
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17520
@nsyca Can you resolve conflicts?
---
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/19804
---
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19804
[WIP][SPARK-22573][SQL] Shouldn't inferFilters if it contains
SubqueryExpression
## What changes were proposed in this pull request?
Shouldn't inferFilters if it contains
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19714
cc @gatorsmile @hvanhovell
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r151197868
--- Diff: core/src/main/resources/org/apache/spark/ui/static/utils.js ---
@@ -46,3 +46,31 @@ function formatBytes(bytes, type) {
var i = Math.floor
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19560
I also hit this issue:
```sql
select * from A join B on a.key = b.key
```
Table A is small but table B is big, and table B's stats are incorrect, so
it will broadcast table B.
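Why wrong stats flip the broadcast side can be sketched with a toy planner rule (a hypothetical helper, not Spark's actual strategy code):

```scala
// Toy version of the broadcast decision: broadcast the smallest side whose
// *estimated* size is under the threshold. Bogus 1-byte stats on the huge
// table B make it look like the cheap side.
case class TableStats(name: String, estimatedBytes: Long)

def chooseBroadcastSide(a: TableStats, b: TableStats, threshold: Long): Option[String] =
  Seq(a, b).filter(_.estimatedBytes <= threshold)
           .sortBy(_.estimatedBytes)
           .headOption.map(_.name)

val a = TableStats("A", 5L * 1024 * 1024)  // genuinely small table
val b = TableStats("B", 1L)                // huge table with bogus 1-byte stats
assert(chooseBroadcastSide(a, b, 10L * 1024 * 1024) == Some("B"))
```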
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r151034910
--- Diff: core/src/main/resources/org/apache/spark/ui/static/utils.js ---
@@ -46,3 +46,25 @@ function formatBytes(bytes, type) {
var i = Math.floor
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r151033625
--- Diff: core/src/main/resources/org/apache/spark/ui/static/utils.js ---
@@ -46,3 +46,25 @@ function formatBytes(bytes, type) {
var i = Math.floor
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/19727
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
retest this please.
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r150863877
--- Diff: core/src/main/resources/org/apache/spark/ui/static/utils.js ---
@@ -46,3 +46,25 @@ function formatBytes(bytes, type) {
var i = Math.floor
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r150842538
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js
---
@@ -37,11 +37,6 @@ function makeIdNumeric(id) {
return resl
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r150842384
--- Diff: core/src/main/resources/org/apache/spark/ui/static/utils.js ---
@@ -46,3 +46,25 @@ function formatBytes(bytes, type) {
var i = Math.floor
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r150841579
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -352,7 +352,7 @@ class HistoryServerSuite extends
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
retest this please.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
Generated by the `toLocaleTimeString()` method:
![image](https://user-images.githubusercontent.com/5399861/32780670-c2d61678-c907-11e7-837f-f01503d22aec.png
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
It looks like this:
https://user-images.githubusercontent.com/5399861/32779430-be47bac8-c978-11e7-97d1-fd3e3db9166f.png
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
It is the client's local time zone:
https://github.com/apache/spark/blob/master/core/src/main/resources/org/apache/spark/ui/static/historypage-common.js#L22
But the format is `11/14/2017, 7
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
How about this?
https://user-images.githubusercontent.com/5399861/32773266-8f9c1ce2-c963-11e7-8b9d-dba785f71772.png
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r150538938
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js
---
@@ -38,8 +38,17 @@ function makeIdNumeric(id) {
}
function
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r150430610
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js
---
@@ -38,8 +38,17 @@ function makeIdNumeric(id) {
}
function
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19640#discussion_r150430605
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js
---
@@ -38,8 +38,17 @@ function makeIdNumeric(id) {
}
function
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19727
[WIP][SPARK-22497][SQL] Project reuse
## What changes were proposed in this pull request?
The below SQL will scan `table1` twice. This PR reuses `p1` and scans
`table1` once.
```sql
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19714#discussion_r150368276
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -154,12 +158,12 @@ abstract class SparkStrategies extends
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
@cloud-fan For the UI part, how about this PR:
https://github.com/apache/spark/pull/14577
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19714
retest this please.
---
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19714
[SPARK-22489][SQL] Shouldn't change broadcast join buildSide if the user
clearly specified it
## What changes were proposed in this pull request?
How to reproduce:
```scala
import
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19670
[SPARK-22454][CORE] ExternalShuffleClient.close() should check
clientFactory null
## What changes were proposed in this pull request?
`ExternalShuffleClient.close()` should check
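The guard being proposed can be sketched like this (a hypothetical simplification with a stub factory type, not the actual Spark classes):

```scala
// Stub for TransportClientFactory; in Spark it is created lazily in init().
class StubFactory { var closed = false; def close(): Unit = { closed = true } }

class ExternalShuffleClientSketch {
  private var clientFactory: StubFactory = _   // null until init() is called

  def init(factory: StubFactory): Unit = { clientFactory = factory }

  def close(): Unit = {
    if (clientFactory != null) clientFactory.close()   // the null check this PR adds
  }
}

new ExternalShuffleClientSketch().close()   // safe: no NPE when init() was never called
```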
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
@srowen We can configure the time zone by `spark.history.timeZone` now.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
retest this please
---
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/19322
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19322
This issue was fixed.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19640
@srowen I have read the discussion about this issue in
https://github.com/apache/spark/pull/15803, but users feel that the current
readability is still poor.
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19640
[SPARK-16986][WEB-UI] Replace GMT with history server side TimeZone.
## What changes were proposed in this pull request?
Replace GMT with history server side TimeZone. This both works
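The idea can be sketched with `java.time` (the server-side zone id here is an arbitrary example, not Spark's default):

```scala
import java.time.{Instant, ZoneId}
import java.time.format.DateTimeFormatter

// Format the same instant in GMT vs. a server-side configured time zone.
val pattern = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
val gmt     = pattern.withZone(ZoneId.of("GMT"))
val server  = pattern.withZone(ZoneId.of("Asia/Shanghai"))  // arbitrary example zone

val t = Instant.ofEpochSecond(0L)
assert(gmt.format(t)    == "1970-01-01 00:00:00")
assert(server.format(t) == "1970-01-01 08:00:00")  // GMT+8 at the epoch
```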
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17886
@gatorsmile master branch still has this issue.
---
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/18841
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
cc @gatorsmile
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19567
Please add tests to
https://github.com/apache/spark/blob/master/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18527
Retest this please
---
Github user wangyum closed the pull request at:
https://github.com/apache/hive/pull/221
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19322#discussion_r140503940
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveCliSessionStateSuite.scala
---
@@ -27,12 +28,12 @@ import
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19322
cc @cloud-fan @yaooqinn
---
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19322
[SPARK-22102][SQL] Set ConfVars.METASTOREWAREHOUSE before constructor
CliSessionState
## What changes were proposed in this pull request?
This PR set `ConfVars.METASTOREWAREHOUSE` before
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/19259
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18853#discussion_r139719600
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -352,11 +374,16 @@ object TypeCoercion
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
retest this please.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19259
retest this please.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19259
@gatorsmile Yes, Docker integration tests passed.
---
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19259
[BACKPORT-2.1][SPARK-19318][SPARK-22041][SQL] Docker test case failure:
`SPARK-16625: General data types to be mapped to Oracle`
… options in a case-sensitive manner.
## What changes were
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
I provide two SQL scripts to validate the differing results between Spark and
Hive:
| Engine |
[SPARK_21646_1.txt](https://github.com/apache/spark/files/1305185/SPARK_21646_1.txt
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19231#discussion_r138950451
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -775,24 +775,23 @@ object JdbcUtils extends
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19231#discussion_r138948476
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -993,7 +996,10 @@ class JDBCSuite extends SparkFunSuite
Seq
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19231#discussion_r138946335
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -775,24 +775,23 @@ object JdbcUtils extends
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19231
[SPARK-22002][SQL] Read JDBC table use custom schema support specify
partial fields.
## What changes were proposed in this pull request?
https://github.com/apache/spark/pull/18266 add
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18266
@gatorsmile Done
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r138621092
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -80,7 +80,7 @@ object JDBCRDD extends Logging
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
CC @gatorsmile, @cloud-fan
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19169#discussion_r137982471
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CurrentUser.scala
---
@@ -0,0 +1,47 @@
+/*
--- End diff
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18853#discussion_r137965884
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -1966,7 +1966,7 @@ class DataFrameSuite extends QueryTest
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r137943044
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -80,7 +80,7 @@ object JDBCRDD extends Logging
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18266
Yes, mapping to Double seems fine. This test passed:
```
test("SPARK-20427/SPARK-20921: read table use custom schema by jdbc api")
{
// default will throw IllegalArgumen
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r135025945
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -768,6 +769,25 @@ object JdbcUtils extends
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r135025661
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -82,7 +82,7 @@ object JDBCRDD extends Logging
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18266
Jenkins, retest this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19002
retest this please
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19002#discussion_r134144956
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -39,7 +39,6 @@ import org.apache.spark.sql.catalyst.plans.PlanTest
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/19002
retest this please
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18986
@gatorsmile It seems casting to `double` is correct.
```
hive> create table spark_21646(c1 string, c2 string);
hive> insert into spark_21646
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/19002#discussion_r134104208
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -39,7 +39,6 @@ import org.apache.spark.sql.catalyst.plans.PlanTest
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19002
[SPARK-21790][TESTS][FOLLOW-UP] Add filter pushdown verification back.
## What changes were proposed in this pull request?
The previous PR(https://github.com/apache/spark/pull/19000
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18986
There is an issue if a string value is out of double range, see
[SPARK-21646](https://issues.apache.org/jira/browse/SPARK-21646).
---
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/19000
[SPARK-21790][TESTS] Fix Docker-based Integration Test errors.
## What changes were proposed in this pull request?
[SPARK-17701](https://github.com/apache/spark/pull/18600/files#diff
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18266
How about the below? It is similar to [Hive
JDBCStorageHandler](https://github.com/apache/hive/blob/rel/release-2.3.0/ql/src/test/queries/clientpositive/jdbc_handler.q#L1-L17):
```sql
CREATE TEMPORARY
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/14377
@gatorsmile We can consider merging this PR:
https://github.com/apache/spark/pull/18266.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
Thanks @maropu. There are some problems:
```sql
spark-sql> select "20" > "100";
true
spark-sql>
```
So [`tmap.tkey <
100`](https://github.
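The surprising `true` above is plain lexicographic string comparison, which can be checked directly:

```scala
// String comparison is character-by-character (lexicographic), so "20"
// sorts after "100" because '2' > '1' -- unlike numeric comparison.
assert("20" > "100")
assert(!(20 > 100))
```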
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18879
retest this please
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
retest this please
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
Casting the `int` values to `string` can work, but filtering an int column
with a string type feels wrong.
My opinion is to cast the filter value to the column type.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
@gatorsmile The docs syntax issues were fixed by
https://github.com/apache/spark/pull/18793.
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18833#discussion_r131566961
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/MathExpressionsSuite.scala
---
@@ -403,11 +403,13 @@ class
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18833#discussion_r131566102
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/MathExpressionsSuite.scala
---
@@ -403,11 +403,13 @@ class
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
retest this please
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
retest this please
---
GitHub user wangyum opened a pull request:
https://github.com/apache/hive/pull/221
[HIVE-17240] ACOS(2) and ASIN(2) should be null
see: https://issues.apache.org/jira/browse/HIVE-17240
You can merge this pull request into a Git repository by running:
$ git pull https
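The rationale: acos/asin are only defined on [-1, 1], and outside that domain `java.lang.Math` returns NaN, which SQL engines conventionally surface as NULL. A quick check:

```scala
// Outside [-1, 1] the inverse trig functions have no real value,
// so java.lang.Math returns NaN.
assert(math.acos(2.0).isNaN)
assert(math.asin(2.0).isNaN)
assert(math.acos(1.0) == 0.0)
```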
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
OK, I will try.
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18769#discussion_r131534966
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala
---
@@ -87,6 +88,13 @@ case class SetCommand(kv: Option[(String
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18841#discussion_r131525404
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -170,29 +193,29 @@ case class Pi() extends
701 - 800 of 992 matches