Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-211508085
Thank you so much, @davies !
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/12376
---
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-211495835
Merging this into master, thanks!
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-21142
So far, I cannot find any reason to pursue another `bround`
implementation.
In addition, I think the current implementation of this PR is good for
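For context on what `bround` computes: Hive's `bround` uses HALF_EVEN ("banker's") rounding, whereas `round` uses HALF_UP. The sketch below illustrates the difference with plain `java.math.BigDecimal`; the helper names are hypothetical and this is not the actual Spark code from this PR:

```scala
import java.math.{BigDecimal, RoundingMode}

// Hypothetical helpers showing the two rounding modes under discussion.
// The PR's real implementation lives in Spark's expression code.
def bround(value: Double, scale: Int): Double =
  new BigDecimal(value.toString).setScale(scale, RoundingMode.HALF_EVEN).doubleValue()

def roundHalfUp(value: Double, scale: Int): Double =
  new BigDecimal(value.toString).setScale(scale, RoundingMode.HALF_UP).doubleValue()

// Ties go to the even neighbor with HALF_EVEN, away from zero with HALF_UP:
// bround(2.5, 0) == 2.0, roundHalfUp(2.5, 0) == 3.0
// bround(3.5, 0) == 4.0, roundHalfUp(3.5, 0) == 4.0
```

The two modes only differ on exact ties; all other inputs round identically.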
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210780699
Hi, @marmbrus , @davies , @markhamstra .
I'm not sure what to do next for this PR. If I am supposed to do something,
please let me know.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210558519
Sure. This year, Spark seems to be able to remove Hive code. I agree that
Spark is better than Hive and we can do more. But, in terms of Hive
compatibility,
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210552825
@dongjoon-hyun Following Hive's lead is definitely one option. I don't
know whether it is the right option or whether any strategic decision has been
made about
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210550801
By the way, is Spark heading toward some SQL standard?
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210550372
The above is my opinion about your second question.
For the first question, we already have three +1s for adding `bround`
(including yours, thank you).
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210549260
Hi, @markhamstra .
If we choose one of the `bround` variants, I think we had better choose the
one in Hive.
What do you think about that?
---
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59777405
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale:
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59776934
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale:
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59776011
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale:
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210109615
Thank you, @marmbrus , @davies , @markhamstra .
@markhamstra . I really appreciate your attention, and I understand your
concern here correctly, but don't
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59774717
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale:
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59774065
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale:
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210103610
Yes, I am also +1 for a native implementation over using Hive, but my
question is more whether we want `bround` in Spark's SQL dialect or whether
there is another
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210101241
LGTM
---
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210100953
+1 to native implementations of Hive UDFs so we can continue to minimize
our dependence.
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59772070
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale: Int):
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210086456
Hi, @davies .
Could you review this PR, please?
---