Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/13119
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/13735
[SPARK-15328][MLLIB][ML] Word2Vec import for original binary format
## What changes were proposed in this pull request?
Add `loadGoogleModel()` function to import the original word2vec binary
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/13119
[SPARK-15328][MLLIB][ML] Word2Vec import for original binary format
## What changes were proposed in this pull request?
Add `loadGoogleModel()` function to import the original word2vec binary
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/14377
[SPARK-16625][SQL] General data types to be mapped to Oracle
## What changes were proposed in this pull request?
Spark will convert **BooleanType** to **BIT(1)**, **LongType
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
It will reduce the number of calls to
[CoarseGrainedSchedulerBackend.requestTotalExecutors()](https://github.com/apache/spark/blob/v2.1.0/core/src/main/scala/org/apache/spark/scheduler/cluster
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/16819
[SPARK-16441][YARN] Set maxNumExecutors based on YARN cluster resources.
## What changes were proposed in this pull request?
Dynamic set `spark.dynamicAllocation.maxExecutors` by cluster
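The idea, capping `spark.dynamicAllocation.maxExecutors` by what the cluster can actually run, can be sketched as follows (a minimal illustration with hypothetical numbers; the real patch pulls these figures from the YARN ResourceManager):

```python
def max_executors(cluster_mem_mb, cluster_cores,
                  executor_mem_mb, executor_cores):
    """Cap maxExecutors by whichever cluster resource (memory or
    cores) runs out first."""
    by_mem = cluster_mem_mb // executor_mem_mb
    by_cores = cluster_cores // executor_cores
    return min(by_mem, by_cores)

# e.g. a 10-node cluster with 64 GB / 16 cores per node,
# executors of 8 GB / 4 cores: core-bound at 40 executors.
print(max_executors(10 * 64 * 1024, 10 * 16, 8 * 1024, 4))
```

If NodeManagers are lost, the cluster totals shrink and the computed cap shrinks with them, which is the behavior the PR describes.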
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16990
@srowen @felixcheung
The SQL query is related to the file name, see:
https://github.com/apache/spark/blob/v2.1.0/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16819#discussion_r102365927
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1193,6 +1189,37 @@ private[spark] class Client
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17020
[SPARK-19693][SQL] Make `SET mapreduce.job.reduces` automatically convert to
`spark.sql.shuffle.partitions`
## What changes were proposed in this pull request?
Make the `SET
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16990
I'm working on the test failures.
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/16990
[SPARK-19660][CORE][SQL] Replace the configuration property names that are
deprecated as of Hadoop 2.6
## What changes were proposed in this pull request?
Replace all
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
@vanzin We must pull the configuration from the ResourceManager; the
ResourceManager can't push it.
So set the max before each stage? That feels too frequent.
In fact, this is suitable
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
@srowen . Dynamic set `spark.dynamicAllocation.maxExecutors` can avoid
some strange problems:
1. [Spark application hang when dynamic allocation is
enabled](https://issues.apache.org/jira
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16990#discussion_r103200223
--- Diff: python/pyspark/tests.py ---
@@ -1515,12 +1515,12 @@ def test_oldhadoop(self):
conf
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
@vanzin What do you think about the current approach? I have tested it on a
Spark hive-thriftserver; `spark.dynamicAllocation.maxExecutors` will
decrease if I kill 4 NodeManagers:
```
17
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16990
OK. I have reverted `set
hive.mapreduce.job.reduces.speculative.execution=false` to `set
hive.mapred.reduce.tasks.speculative.execution=false`.
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/15259
[SPARK-17685][SQL] Make SortMergeJoinExec's currentVars is null when
calling createJoinKey
## What changes were proposed in this pull request?
Fix the IndexOutOfBoundsException thrown like
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/15259
@hvanhovell
The following SQL query throws an IndexOutOfBoundsException:
```sql
SELECT
count(int)
FROM
(
SELECT t1.int, t2.int2
FROM (SELECT * FROM
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/15466
@liancheng, please help me review this PR; I have applied this patch to our
Spark cluster.
![spark-hive-var](https://cloud.githubusercontent.com/assets/5399861/19689537/4ca5239e-9b00-11e6-912f
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/15466
[SPARK-13983][SQL] HiveThriftServer2 can not get "--hiveconf" or
"--hivevar" variables since version 1.6 (both multi-session and single session)
## What changes were proposed in th
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/16252
[SPARK-18827][Core] Fix cannot read broadcast on disk
## What changes were proposed in this pull request?
Fix cannot read broadcast on disk
## How was this patch tested?
Add
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16252
`org.apache.spark.sql.kafka010.KafkaSourceStressForDontFailOnDataLossSuite.stress
test for failOnDataLoss=false` has succeeded on my local test.
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16252
retest this please.
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16252#discussion_r92396555
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -694,7 +694,7 @@ private[storage] class PartiallyUnrolledIterator[T
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16252#discussion_r92439275
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -694,7 +694,7 @@ private[storage] class PartiallyUnrolledIterator[T
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16252
@srowen @viirya I have added it.
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16122
@mallman OK, thanks
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16252
@srowen I have restored it.
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16252#discussion_r91969759
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -694,7 +694,7 @@ private[storage] class PartiallyUnrolledIterator[T
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16527#discussion_r95377515
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -409,7 +409,8 @@ class JobProgressListener(conf: SparkConf) extends
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16527
I don't quite understand what @srowen mentioned, so I simply changed it to
drop `dataSize - retainedSize + retainedSize / 10` items at a time.
If max is 100 and there are 150 items, it would drop
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16527
I use the following code to log how long trimming stages/jobs takes:
```scala
/** If stages is too large, remove and garbage collect old stages */
private def trimStagesIfNecessary(stages
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16122
@mallman MySQL, version 5.6.29
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16122
This patch fails because hive-0.12 and hive-0.13 don't have the `getMetaConf`
method.
see [HIVE-7532](https://issues.apache.org/jira/browse/HIVE-7532),
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16079#discussion_r90250928
--- Diff: sbin/spark-daemon.sh ---
@@ -176,11 +175,11 @@ run_command() {
case "$mode" in
(class)
- execute_comma
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/16079
[SPARK-18645][Deploy] Fix spark-daemon.sh argument errors that lead to
"Unrecognized option" being thrown
## What changes were proposed in this pull request?
spark-daemon.sh will lose single quotes
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16079#discussion_r90244270
--- Diff: sbin/spark-daemon.sh ---
@@ -124,9 +124,8 @@ if [ "$SPARK_NICENESS" = "" ]; then
fi
execute_command() {
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/16122
[SPARK-18681][SQL] Fix filtering to be compatible with partition keys of type
int
## What changes were proposed in this pull request?
Cloudera's Hive default configuration
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/16526
Drop more elements when stageData.taskData.size > retainedTasks
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
##
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/16526
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/16527
[SPARK-19146][Core]Drop more elements when stageData.taskData.size >
retainedTasks
## What changes were proposed in this pull request?
Drop more elements when `stageData.taskData.s
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17362
@weiqingy is doing [Allow adding jars from
hdfs](https://github.com/apache/spark/pull/17342).
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17442
[SPARK-20107][SQL] Speed up HadoopMapReduceCommitProtocol#commitJob for
many output files
## What changes were proposed in this pull request?
Set
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17449
[SPARK-20120][SQL] spark-sql support silent mode
## What changes were proposed in this pull request?
It is similar to Hive's silent mode: only show the query result. see: [Hive
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/15466
@srowen please help to review, thanks a lot.
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/17505#discussion_r109851825
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -694,12 +694,25 @@ private[hive] class HiveClientImpl
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/17505#discussion_r109846638
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -242,6 +251,16 @@ private[client] class Shim_v0_12 extends Shim
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/17505#discussion_r109852106
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -694,12 +694,25 @@ private[hive] class HiveClientImpl
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17505
[SPARK-20187][SQL] Replace loadTable with moveFile to speed up loading tables
with many output files
## What changes were proposed in this pull request?
[HiveClientImpl.loadTable](https
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/17505
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17558
`SparkContext` supports `add jar`, but doesn't support `uninstall jar`.
Imagine that I have a spark-sql or
[spark-thriftserver](https://github.com/apache/spark/tree/v2.1.0/sql/hive-thriftserver
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/17637
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17637
[SPARK-20337][CORE] Support upgrading a jar dependency without restarting
SparkContext
## What changes were proposed in this pull request?
Support upgrading a jar dependency without restarting
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17162
[SPARK-19550][SparkR][DOCS] Update R document to use JDK8
## What changes were proposed in this pull request?
Update R document to use JDK8.
## How was this patch tested
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17162
cc @srowen
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/13735
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17020
@srowen According to Hadoop's
[DeprecatedProperties](https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html),
`mapred.reduce.tasks` can be automatically converted
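Hadoop's deprecated-property handling works roughly like this (a minimal sketch using only the one mapping named above; `translate` is a hypothetical helper, not Spark's or Hadoop's actual code):

```python
# Subset of Hadoop's DeprecatedProperties table.
DEPRECATED = {
    "mapred.reduce.tasks": "mapreduce.job.reduces",
}

def translate(key):
    """Map a deprecated Hadoop property name to its replacement;
    pass through keys that aren't deprecated."""
    return DEPRECATED.get(key, key)

print(translate("mapred.reduce.tasks"))  # mapreduce.job.reduces
```

The PR extends the same idea one step further, so a user-visible `SET` of the old key lands on Spark's own `spark.sql.shuffle.partitions`.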
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17558
[SPARK-20247][CORE] A jar that is added but goes missing later shouldn't
affect jobs that don't use this jar
## What changes were proposed in this pull request?
Catch exception when jar
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17505
@gatorsmile You are right. It is similar to
[HIVE-12908](https://github.com/apache/hive/commit/26268deb4844d3f3c530769c6276b17b0c6caaa0).
There are 3 bottlenecks for many output files
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
@viirya Spark does not support that.
see: https://github.com/apache/spark/pull/17223#issuecomment-286608743
@dongjoon-hyun How about throwing an exception when users try to change them,
as @cloud-fan
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18769
[SPARK-21574][SQL] Fix `set hive.exec.max.dynamic.partitions` losing effect.
## What changes were proposed in this pull request?
How to reproduce:
```scala
val data = (0 until 1001
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
Throwing an exception in `SetCommand.scala` seems too rough, and throwing it
in `InsertIntoHiveTable` seems too late, so I logWarning in `SetCommand.scala`
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
I'll fix it
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18808
Jenkins, test this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
@HyukjinKwon Could you help review this?
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18833
[SPARK-21625][SQL] sqrt(negative number) should be null.
## What changes were proposed in this pull request?
This PR makes `sqrt(negative number)` return null, same as Hive and MySQL
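The proposed semantics can be sketched in Python (a hedged illustration of the Hive/MySQL behavior the PR targets, not Spark's implementation; `sql_sqrt` is a name of my choosing):

```python
import math

def sql_sqrt(x):
    """Return None (SQL NULL) for NULL or negative input instead of
    raising, matching Hive/MySQL SQRT semantics."""
    if x is None or x < 0:
        return None
    return math.sqrt(x)

print(sql_sqrt(4.0))   # 2.0
print(sql_sqrt(-1.0))  # None
```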
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18841
[SPARK-21635][SQL] ACOS(2) and ASIN(2) should be null
## What changes were proposed in this pull request?
This PR makes ACOS(2) and ASIN(2) return null, same as MySQL.
I have submitted a [patch
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18841
retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/14377
@gatorsmile We can consider merging this PR:
https://github.com/apache/spark/pull/18266.
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
Thanks @maropu. There are some problems:
```sql
spark-sql> select "20" > "100";
true
spark-sql>
```
So [`tmap.tkey <
100`](https://github.
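The surprising `true` above comes from lexicographic string comparison, which Python shows the same way (a minimal illustration of the pitfall, not Spark code):

```python
# Compared character by character, "2" > "1", so the string "20"
# compares greater than "100" even though 20 < 100 numerically.
print("20" > "100")            # True
print(int("20") > int("100"))  # False
```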
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/18361
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18361#discussion_r127190632
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -197,7 +197,7 @@ class HadoopMapRedWriteConfigUtil[K, V: ClassTag
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r128482157
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/MathUtils.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r128482031
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/MathUtils.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r127601368
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -1186,3 +1186,51 @@ case class BRound(child
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r130021930
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -1219,44 +1219,91 @@ case class WidthBucket
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18323
retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18466
Yes, I reproduced it on a YARN cluster; local mode can't reproduce it. It
seems `DownloadCallback` doesn't really work.
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18490
[SPARK-21269][Core][WIP] Fix FetchFailedException when enabling
maxReqSizeShuffleToMem and KryoSerializer
## What changes were proposed in this pull request?
Spark **cluster** can
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18466
@dongjoon-hyun Try the following to reproduce. I missed
`spark.serializer=org.apache.spark.serializer.KryoSerializer`; this is my
default config:
```
spark-shell --conf
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18527
[SPARK-21101][SQL] Catch IllegalStateException when CREATE TEMPORARY
FUNCTION
## What changes were proposed in this pull request?
It must `override` [`public StructObjectInspector
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18527
Retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
Jenkins, retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18466
[SPARK-21253][CORE] Disable using DownloadCallback to fetch big blocks
## What changes were proposed in this pull request?
Disable using DownloadCallback to fetch big blocks.
## How
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18466
@zsxwing @cloud-fan
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18445
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18445
Please set your username and check your email:
https://help.github.com/articles/setting-your-username-in-git/#platform-linux
https://help.github.com/articles/setting-your-email-in-git
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18841#discussion_r131525404
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -170,29 +193,29 @@ case class Pi() extends
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18769#discussion_r131534966
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala
---
@@ -87,6 +88,13 @@ case class SetCommand(kv: Option[(String
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
OK, I will try.
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18853
[SPARK-21646][SQL] BinaryComparison shouldn't auto cast string to int/long
## What changes were proposed in this pull request?
How to reproduce:
hive:
```sql
$ hive -S
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18879
retest this please
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r130526538
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -1186,3 +1186,124 @@ case class BRound(child