Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/22194
@ueshin LGTM
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/22031#discussion_r210452329
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/higherOrderFunctions.scala
---
@@ -442,3 +442,91 @@ case class
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/22031
Hi @ueshin, I will update the PR tomorrow
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/22031
[TODO][SPARK-23932][SQL] Higher order function zip_with
## What changes were proposed in this pull request?
Merges the two given arrays, element-wise, into a single array using
function. If
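As a rough illustration of the element-wise merge the PR description outlines, here is a minimal Python sketch (the real implementation is a Catalyst expression in Scala; the null-padding of the shorter array is an assumption based on `zip_with`'s documented SQL semantics):

```python
def zip_with(left, right, func):
    """Merge two lists element-wise with func, padding the shorter
    list with None before applying the function."""
    n = max(len(left), len(right))
    pad = lambda xs: xs + [None] * (n - len(xs))
    return [func(a, b) for a, b in zip(pad(left), pad(right))]

# Pair up elements; the missing third element on the right becomes None.
print(zip_with([1, 2, 3], [10, 20], lambda a, b: (a, b)))
```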
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@HyukjinKwon didn't have the bandwidth; will try to finish this weekend
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your pr
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15831
@HyukjinKwon was busy, will restart this week.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15831
@sethah I will revive this PR, thanks 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15831
@MLnick I will create an umbrella JIRA and start adding JIRAs for things
I'm aware of, and you can start prioritising 👍 Sounds like a plan?
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15831
@sethah @yanboliang I've started with migrating `IDF`; can you review the
WIP and check whether I'm going in the right direction?
https://github.com/techaddict/spark/pull/2/files
there is
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/16101
[WIP] Migrate IDF to not use mllib
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/techaddict/spark migrate-idf
Alternatively you can
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/16101
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15843
@jkbradley @holdenk @viirya PR updated
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15843
@jkbradley @holdenk will update the PR with changes today.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15817
ping @davies @jkbradley
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15831
@sethah I agree, 2nd approach is much more reasonable.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15817
@jkbradley done 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15843
@holdenk updated the description.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/15817#discussion_r87621123
--- Diff: python/pyspark/ml/feature.py ---
@@ -1163,9 +1184,11 @@ class QuantileDiscretizer(JavaEstimator,
HasInputCol, HasOutputCol, JavaMLReadab
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15817
@MLnick thanks for the review, addressed your comments.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/15817#discussion_r87593705
--- Diff: python/pyspark/ml/feature.py ---
@@ -1194,21 +1217,30 @@ class QuantileDiscretizer(JavaEstimator,
HasInputCol, HasOutputCol, JavaMLReadab
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/15817#discussion_r87593693
--- Diff: python/pyspark/ml/feature.py ---
@@ -158,19 +158,26 @@ class Bucketizer(JavaTransformer, HasInputCol,
HasOutputCol, JavaMLReadable, Jav
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/15843#discussion_r87550799
--- Diff: python/pyspark/ml/wrapper.py ---
@@ -33,6 +33,10 @@ def __init__(self, java_obj=None):
super(JavaWrapper, self).__init__
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15843
@jkbradley looks good, merged 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15843
@jkbradley yes I did it for `JavaWrapper` first, but try running tests with
it gives
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68478/consoleFull
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15843
cc: @jkbradley @davies @holdenk
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15817
cc: @sethah @marmbrus
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/15843
[SPARK-18274] Memory leak in PySpark StringIndexer
## What changes were proposed in this pull request?
Make Java Gateway dereference object in destructor, using
`SparkContext
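The fix described above (having the Java gateway dereference the JVM-side object in the wrapper's destructor) can be sketched in plain Python; `JavaObjectHandle` and the `registry` dict are hypothetical stand-ins for PySpark's `JavaWrapper` and the Py4J gateway's object map:

```python
class JavaObjectHandle:
    """Illustrative sketch, not the actual PySpark code: a wrapper that
    releases its JVM-side entry in the destructor so the gateway's
    object map does not keep growing (the memory leak being fixed)."""

    def __init__(self, registry, obj_id):
        self._registry = registry        # stands in for the gateway's object map
        self._obj_id = obj_id
        registry[obj_id] = object()      # hypothetical JVM-side object

    def __del__(self):
        # Dereference on the "JVM" side instead of leaking the entry.
        self._registry.pop(self._obj_id, None)

registry = {}
h = JavaObjectHandle(registry, "o1")
del h                                    # destructor removes the entry
print(len(registry))
```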
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15831
cc: @dbtsai @mengxr
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/15831
[SPARK-18385][ML] Make the transformers native in the ml framework to avoid
extra conversion
## What changes were proposed in this pull request?
Transformer's added in ml framewor
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/15817
[SPARK-18366][PYSPARK] Add handleInvalid to Pyspark for QuantileDiscretizer
and Bucketizer
## What changes were proposed in this pull request?
added the new handleInvalid param for these
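For context, the three `handleInvalid` options being exposed to PySpark can be sketched as follows (a pure-Python illustration of the assumed Bucketizer-style semantics, not the actual Scala code: `error` raises, `skip` drops the row, `keep` assigns an extra bucket):

```python
import math

def bucketize(values, splits, handle_invalid="error"):
    """Sketch of Bucketizer-style handleInvalid semantics."""
    out = []
    for v in values:
        if isinstance(v, float) and math.isnan(v):
            if handle_invalid == "error":
                raise ValueError("NaN seen with handleInvalid='error'")
            if handle_invalid == "skip":
                continue                  # drop the invalid row
            out.append(len(splits) - 1)   # 'keep': extra bucket for invalids
            continue
        # Find the bucket index i such that splits[i] <= v < splits[i+1].
        for i in range(len(splits) - 1):
            if splits[i] <= v < splits[i + 1]:
                out.append(i)
                break
    return out

print(bucketize([0.5, float("nan"), 1.5], [0.0, 1.0, 2.0], "keep"))
```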
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15809
@srowen done 👍
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/15809
[SPARK-18268] ALS.run should fail with a better message if ratings is an empty RDD
## What changes were proposed in this pull request?
ALS.run should fail with a better message if ratings is an empty RDD
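The fail-fast check described here amounts to a simple guard at the top of the training routine; a hedged sketch (`run_als` is a hypothetical stand-in for the actual Scala `ALS.run`):

```python
def run_als(ratings):
    """Raise a clear error when the ratings input is empty, instead of
    failing later with an opaque exception deep inside training."""
    if not ratings:
        raise ValueError("ratings is empty: ALS requires a non-empty ratings RDD")
    # ... training would proceed here; return something trivial for the sketch ...
    return len(ratings)
```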
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15654
@mgummelt yes working on it.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15654
@mgummelt done! 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15654
cc: @mgummelt @srowen
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/15654
[SPARK-16881][MESOS] Migrate Mesos configs to use ConfigEntry
## What changes were proposed in this pull request?
Migrate Mesos configs to use ConfigEntry
## How was this patch
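The ConfigEntry pattern this PR migrates to pairs each config key with its type conversion and default, replacing ad-hoc string lookups. A minimal Python sketch (the entry name and shape are illustrative, not Spark's actual API):

```python
class ConfigEntry:
    """Sketch of a typed config entry: key + converter + default,
    instead of scattering raw string lookups through the code."""

    def __init__(self, key, convert, default):
        self.key, self.convert, self.default = key, convert, default

    def read_from(self, conf):
        raw = conf.get(self.key)
        return self.default if raw is None else self.convert(raw)

# Hypothetical entry; the name mirrors Spark config style but is illustrative.
CORES_MAX = ConfigEntry("spark.cores.max", int, 0)
print(CORES_MAX.read_from({"spark.cores.max": "8"}))
```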
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/15433
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15433
Closing this since it's maybe not the right way to do this.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/12913
@rxin can you review again? All comments addressed 👍
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12913#discussion_r84570678
--- Diff:
core/src/test/scala/org/apache/spark/serializer/UnsafeKryoSerializerSuite.scala
---
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/12913
@mateiz updated 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/12913
@mateiz updated the PR 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/15433
@shivaram @srowen not sure why it's failing, will try to fix this ASAP.
---
GitHub user techaddict reopened a pull request:
https://github.com/apache/spark/pull/15433
[SPARK-17822] Use weak reference in JVMObjectTracker.objMap because it may
leak JVM objects
## What changes were proposed in this pull request?
Use weak reference in
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/15433
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/15433
[SPARK-17822] Use weak reference in JVMObjectTracker.objMap because it may
leak JVM objects
## What changes were proposed in this pull request?
Use weak reference in JVMObjectTracker.objMap
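The weak-reference approach described for `JVMObjectTracker.objMap` can be illustrated with Python's `weakref` module: entries vanish once nothing else references the tracked object, so the map no longer pins objects forever (a sketch of the idea, not the actual Scala code, which would use Java weak references):

```python
import weakref

class Tracked:
    """Placeholder for a tracked JVM object."""
    pass

# Hold tracked objects via weak references so the map entry disappears
# once the last strong reference is gone, instead of leaking it.
obj_map = weakref.WeakValueDictionary()
o = Tracked()
obj_map["id1"] = o
print("id1" in obj_map)   # still referenced by `o`
del o
print("id1" in obj_map)   # entry cleared once the object is collected
```

(In CPython the entry is cleared immediately on `del` thanks to reference counting; on other runtimes it happens at the next garbage collection.)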
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13334
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13767
@srowen yes, the issue is still there.
---
GitHub user techaddict reopened a pull request:
https://github.com/apache/spark/pull/13767
[MINOR][SQL] Not dropping all necessary tables
## What changes were proposed in this pull request?
was not dropping table `parquet_t3`
## How was this patch tested?
tested
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/13767
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14924
@srowen yes, in stringExpressions the trim is done on UTF8String.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14924
@rxin Done 👍
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/14924
[SPARK-17299] TRIM/LTRIM/RTRIM should not strip characters other than
spaces
## What changes were proposed in this pull request?
TRIM/LTRIM/RTRIM should not strip characters other than
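The behavioral difference the title describes can be shown in a couple of lines: SQL TRIM should remove only the space character, whereas a naive strip also removes tabs, newlines, and other whitespace (a Python sketch of the assumed semantics, not the Catalyst implementation):

```python
def sql_trim(s):
    """Strip only the space character, not tabs/newlines/control
    characters, which a default strip() would also remove."""
    return s.strip(" ")

print(repr(sql_trim("  \thello\n  ")))   # tab and newline survive
print(repr("  \thello\n  ".strip()))     # default strip removes them too
```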
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/12913
@holdenk Updated the PR, ready for review again.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/12913#discussion_r73454617
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -399,6 +399,14 @@ class KryoSerializerSuite extends
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/11105#discussion_r73330940
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -220,8 +220,27 @@ class TaskMetrics private[spark] () extends
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/11105#discussion_r73330741
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -220,8 +220,27 @@ class TaskMetrics private[spark] () extends
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13334
@andrewor14 ping.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14315
@jaceklaskowski thanks for finding this out. It's weird that it passed
locally too.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@yhuai sure, should the performance testing use a SQL query or an expression?
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13990
@cloud-fan Comment addressed, test passed 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13990
@cloud-fan np 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13990
@cloud-fan anything else, or is it good to merge?
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70639783
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -234,6 +234,7 @@ object FunctionRegistry
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@cloud-fan Done 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@cloud-fan Updated the PR, all tests should pass now.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70563932
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -249,11 +241,12 @@ case class Divide(left
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r70562875
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala ---
@@ -384,4 +384,39 @@ class StringFunctionsSuite extends QueryTest
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13990
@cloud-fan all comments addressed.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r70559727
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +394,54 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70471772
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -237,6 +229,9 @@ case class Divide(left
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70437720
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -237,6 +229,9 @@ case class Divide(left
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r70434309
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +394,84 @@ case class
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13990
@rxin no need, I will update this today.
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@rxin @cloud-fan done.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69772459
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -441,10 +452,15 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69676815
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,71 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69675997
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,71 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69675457
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,71 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69675325
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,71 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69670913
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,71 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69605110
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,73 @@ case class
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@cloud-fan addressed all your comments 👍
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13334
@andrewor14 I've made the changes, can you take a look now?
---
GitHub user techaddict reopened a pull request:
https://github.com/apache/spark/pull/13334
[SPARK-15576] Add back hive tests blacklisted by SPARK-15539
## What changes were proposed in this pull request?
Add back hive tests blacklisted by SPARK-15539
## How was this
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r69407742
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -285,6 +284,75 @@ case class Divide(left
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r69392406
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -285,6 +284,75 @@ case class Divide(left
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r69392402
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -234,6 +234,7 @@ object FunctionRegistry
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69392299
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,73 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69392113
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,73 @@ case class
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13990#discussion_r69392080
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -393,3 +393,73 @@ case class
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13990
cc: @cloud-fan @rxin
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
cc: @cloud-fan
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/14036
[SPARK-16323] [SQL] Add IntegerDivide to avoid unnecessary cast
## What changes were proposed in this pull request?
Add IntegerDivide to avoid unnecessary cast
Before
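The point of an integral divide operator is to return an integer directly instead of casting the operands to double and casting back. A hedged Python sketch of `div`-style semantics (truncation toward zero and null on division by zero are assumptions based on typical SQL behavior):

```python
def integer_divide(a, b):
    """Integral division: truncate toward zero, like Java's long division;
    return None (SQL null) on division by zero."""
    if b == 0:
        return None
    # int() truncates toward zero, matching integral division semantics.
    return int(a / b)

print(integer_divide(7, 2), integer_divide(-7, 2), integer_divide(1, 0))
```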
Github user techaddict closed the pull request at:
https://github.com/apache/spark/pull/14032
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/14032
[Minor][SQL] Replace Parquet deprecations
## What changes were proposed in this pull request?
1. Replace `Binary.fromByteArray` with `Binary.fromReusedByteArray`
2. Replace
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13767
cc: @srowen
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13990
[SPARK-16287][SQL][WIP] Implement str_to_map SQL function
## What changes were proposed in this pull request?
This PR adds `str_to_map` SQL function in order to remove Hive fallback
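The function's semantics can be sketched in pure Python (an illustration of the assumed behavior, with the conventional defaults of `,` for the pair delimiter and `:` for the key/value delimiter; a pair with no key/value delimiter maps to a null value):

```python
def str_to_map(text, pair_delim=",", kv_delim=":"):
    """Split text into pairs on pair_delim, then each pair into
    key and value on kv_delim."""
    result = {}
    for pair in text.split(pair_delim):
        key, _, value = pair.partition(kv_delim)
        # A pair without the key/value delimiter gets a null value.
        result[key] = value if kv_delim in pair else None
    return result

print(str_to_map("a:1,b:2,c:3"))
```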
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13767
jenkins retest this please
---
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13767
[MINOR][SQL] Not dropping all necessary tables
## What changes were proposed in this pull request?
was not dropping table `parquet_t3`
## How was this patch tested?
tested