Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15618
Thank you for the good reference and explanation, @mridulm. I will try to
handle it. (It seems I misunderstood the comments.)
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15618#discussion_r85104561
--- Diff:
core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala ---
@@ -239,7 +239,14 @@ private[spark] object ReliableCheckpointRDD
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15618#discussion_r85104721
--- Diff:
core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala ---
@@ -239,7 +239,14 @@ private[spark] object ReliableCheckpointRDD
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15618#discussion_r85109184
--- Diff:
core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala ---
@@ -239,7 +239,14 @@ private[spark] object ReliableCheckpointRDD
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85111739
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -25,7 +25,11 @@ import
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15513
I mostly ran it myself when I was in doubt, so I guess it'd be mostly
okay. At least one major issue was identified above, so I will definitely look
into this closely again.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85246919
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -251,7 +259,12 @@ case class InSet(child
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15513
I will take another close look as suggested and then let you all
know.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85254596
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/xml/xpath.scala
---
@@ -120,9 +168,19 @@ case class XPathFloat(xml
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85254582
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/xml/xpath.scala
---
@@ -107,9 +145,19 @@ case class XPathLong(xml
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
@marmbrus Thank you so much.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
Let me look into this more deeply if the same tests keep failing.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15513
I took another look and it seems generally fine. Could you all take a look?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
It seems the test failure is not related to this PR.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15618
Could I please ask you to take a look and see whether the additional changes make sense?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15513
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15659
(Yes, it seems that particular test is really flaky.)
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15666
cc @felixcheung too.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
I understand it might be good to mention (and I have thought about it for a while),
but I feel a bit hesitant because we could express that JSON strings in a DataFrame
are stored without newline separators in a
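A minimal sketch of what the comment seems to refer to, assuming Spark 2.1+ where `to_json` is available (the example is illustrative and not taken from the PR itself):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{struct, to_json}

val spark = SparkSession.builder().master("local[2]").appName("to-json-demo").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "name")

// Each row becomes a single-line JSON string; the produced value contains
// no newline separators, e.g. {"id":1,"name":"a"}.
df.select(to_json(struct($"id", $"name")).as("json")).show(truncate = false)
```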
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
Oh, never mind. It seems it'd be better to mention it.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15513
Thanks @gatorsmile. Just FYI, I would like to note the rule I used for
argument types (just to avoid extra effort when you review).
As we all know, I did not mention implicit casting
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14788
Could I then go for option 1?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14788
https://github.com/apache/spark/pull/14788#issuecomment-242906410 here is
my observation. It seems it is usually option 1 or option 3. I can take a deeper look
if we want to follow this or be very
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14788
I will be back after testing/looking into other databases soon.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15513
Yes, I have. Could you point out an instance? I will fix them.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625758
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/collect.scala
---
@@ -106,10 +110,14 @@ case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626660
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -144,7 +183,20 @@ case class Nvl(left
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85628075
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -401,22 +408,29 @@ case class Lead(input
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85627086
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -851,8 +993,16 @@ case class ParseUrl
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625180
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Last.scala
---
@@ -29,7 +29,16 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626717
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -282,7 +359,15 @@ case class IsNull(child
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626871
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -409,7 +427,12 @@ object Equality
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625786
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -471,7 +547,15 @@ case class Pmod(left
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85624768
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -114,8 +114,16 @@ object Cast {
/** Cast
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625485
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -82,7 +90,16 @@ case class CreateArray
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625546
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -175,7 +192,15 @@ case class CreateMap
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626365
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -106,7 +125,17 @@ case class IfNull(left
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85628048
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -372,22 +372,29 @@ abstract class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626890
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -435,8 +458,15 @@ case class EqualTo(left
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626702
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -261,7 +330,15 @@ case class NaNvl(left
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625901
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -24,7 +24,17 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625172
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/collect.scala
---
@@ -86,7 +86,11 @@ abstract class Collect
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626278
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -34,9 +34,18 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626558
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -490,7 +521,15 @@ abstract class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625732
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Min.scala
---
@@ -23,7 +23,11 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625801
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -531,7 +615,15 @@ case class Least(children
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626037
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -162,7 +172,15 @@ abstract class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85624830
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Count.scala
---
@@ -23,9 +23,17 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626369
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -88,7 +97,17 @@ case class Coalesce
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626603
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -126,7 +155,17 @@ case class NullIf(left
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625648
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -234,7 +259,16 @@ case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625439
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -28,7 +28,15 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626234
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -631,7 +682,11 @@ case class CurrentDatabase
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85625713
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Max.scala
---
@@ -23,7 +23,11 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85624958
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/HyperLogLogPlusPlus.scala
---
@@ -47,10 +47,16 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626188
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -102,8 +102,17 @@ case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85624910
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/First.scala
---
@@ -29,10 +29,16 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15513#discussion_r85626782
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -114,7 +118,11 @@ case class Not(child
Github user HyukjinKwon closed the pull request at:
https://github.com/apache/spark/pull/15513
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15513
Let me close this and open another one. It has become really messy.
---
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15677
[SPARK-17963][SQL][Documentation] Add examples (extend) in each expression
and improve documentation with arguments
## What changes were proposed in this pull request?
This PR
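The description above is truncated; broadly, the PR extends the per-expression documentation. A minimal sketch of the kind of annotation it touches, assuming the Spark 2.x `ExpressionDescription` annotation with `usage` and `extended` fields (the annotated class here is a hypothetical placeholder, not one of the PR's real expressions):

```scala
import org.apache.spark.sql.catalyst.expressions.ExpressionDescription

// Hypothetical placeholder class: only the annotation format matters here.
// Real Catalyst expression case classes carry the same annotation, and the
// PR fills in / extends the `usage` and `extended` text for each of them.
@ExpressionDescription(
  usage = "_FUNC_(expr) - Returns `expr` unchanged (illustrative only).",
  extended = "> SELECT _FUNC_(1);\n 1")
class ExampleDocumentedExpression
```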
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
cc @srowen, @rxin, @jodersky, @gatorsmile. I closed the previous one and
reopened it here.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
@gatorsmile I double-checked the type-related ones again and tried to describe the
types more specifically. Could you please take another look?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
BTW, I hope we can fix up all the minor comments together in one final pass
if they do not block each other.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15354#discussion_r85630778
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -494,3 +495,46 @@ case class JsonToStruct
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15618#discussion_r85630990
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala ---
@@ -209,7 +209,8 @@ class TaskResultGetterSuite extends
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15678
@wwjiang007 Could you please close this? It seems it was opened by mistake.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13771
gentle ping @davies (if you are not sure about this change, I can close it and
take action on the JIRA).
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14627
(gentle ping @liancheng)
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14660
(ping @liancheng ..)
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15361
(gentle ping @chenghao-intel @davies)
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15618
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15618
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
Hey @gatorsmile, I am not rushing the review, and I didn't mean to find
every one. I meant I can wait for all the reviews and then sweep through them if they are
only minor and each does not
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15471
Build started: [SparkR] `ALL`
[![PR-15471](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=1175779D-A053-45AF-BC6C-EA34931CFC37&svg=true)](https://ci.appveyor
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
@gatorsmile OK, so your concern is about potentially inaccurate types and
typos. If you are really worried, maybe I can split this PR into several smaller ones.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15677#discussion_r85635839
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -148,8 +171,15 @@ case class Hour
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15618
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14788
I tested the functions related to `add`/`sub`/`trunc` on date/timestamp,
and it seems generally option 2 or option 3.
**DB2 (`TRUNC`)**
- input: `TimestampType`, output
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
@gatorsmile, I guess the changes here are roughly correct, accurate, and
make sense. This does not mean they are "not right" or "wrong".
For example, if users run `descri
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
Could I please ask for your opinion, @rxin? I would rather simply follow the
majority.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15618
I tried to initialize it in `start()`. Could you take a look at
https://github.com/apache/spark/pull/15618/commits/49cb4e7f259ba0a236b0a977e69719cfc165c265
?
(If it does not look nice, I
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15690
This is a drastic change that breaks compatibility with existing code. I
guess we should check other databases as well in this case.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15690
cc @hvanhovell too. I guess we discussed a similar issue in another PR.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
> However, the complex type we should block. We also need to fix it in the
code.
It already does not support complex types, but only `AtomicType` [1].
```
spark-sql>
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
I guess we can improve them in followups if a change does not make
something worse. This introduces a general format to follow too.
Can I just follow the majority if this can'
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15689
(Yes, I guess we should fix many tests too. As far as I remember, many
tests are dependent on the default partition number if I remember correctly.)
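A minimal sketch of the kind of implicit dependency being described, assuming default Spark 2.x settings (the numbers and calls are illustrative, not taken from the PR):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[2]")
  .appName("default-partition-number-demo")
  .getOrCreate()
import spark.implicits._

val counts = Seq((1, "a"), (2, "b")).toDF("id", "v").groupBy("id").count()

// With the default spark.sql.shuffle.partitions (200), the shuffled result has
// 200 partitions; a test that hard-codes this number would break if the
// default changed.
println(counts.rdd.getNumPartitions)
```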
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15686
Build started: [SparkR] `ALL`
[![PR-15686](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=1D4EC6E8-F2CF-4585-9745-0AE5956F211C&svg=true)](https://ci.appveyor
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15685
Hi @srowen, I have a couple of things you may want to consider adding:
9008
14699
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
My answer to that is
https://github.com/apache/spark/pull/15677#issuecomment-257137604. It seems the
discussion is going in circles.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15686
@rxin, it seems spurious. The message seems to indicate a failure when the
commit is virtually not mergeable [1].
It seems it fails from time to time for various reasons. For example, in some
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15686
I believe the message indicates the same case as PR 15673, but it seems
spurious in this case.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15686
There are three problems with it:
- it starts the build when someone merges the latest upstream (instead of rebasing)
when pushing more commits to their PR (as the merged one usually has the
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
I am fine with, and willing to follow, the majority. We have the same goal anyway.
I hope the follow-up is opened as soon as possible after this one, though, if we
decide to remove the argument parts here
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/15694
[SPARK-18179][SQL] Throws analysis exception with a proper message for
unsupported argument types in reflect/java_method function
## What changes were proposed in this pull request
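The description above is truncated; for context, a rough sketch of the behaviour being targeted, assuming the existing `reflect`/`java_method` SQL functions (the failing call and the exact message are illustrative only, not copied from the PR):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[2]").appName("reflect-demo").getOrCreate()

// Supported: all arguments are atomic types; Math.max(1, 2) is invoked via
// reflection and the result comes back as the string "2".
spark.sql("SELECT reflect('java.lang.Math', 'max', 1, 2)").show()

// Unsupported argument type (an array). With the change proposed in the PR,
// analysis is expected to fail with an AnalysisException whose message points
// at the unsupported argument type, instead of a less descriptive error.
spark.sql("SELECT java_method('java.lang.String', 'valueOf', array(1, 2))").show()
```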
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15659
@nchammas Ah, yes it is. It is a known problem. AppVeyor triggers the build
when there are changes in the `R` directory, but when it's merged (instead of
rebased), for example,
https://githu
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15694#discussion_r85848749
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/MiscFunctionsSuite.scala ---
@@ -31,6 +34,47 @@ class MiscFunctionsSuite extends QueryTest with