Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-218855824
Yeah sure @andrewor14
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user xguo27 closed the pull request at:
https://github.com/apache/spark/pull/9553
---
Github user xguo27 closed the pull request at:
https://github.com/apache/spark/pull/10437
---
Github user xguo27 closed the pull request at:
https://github.com/apache/spark/pull/10935
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10935#issuecomment-204840629
Sure @davies . I will close this PR.
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10935#issuecomment-190554648
Using these two functionally equivalent code snippets:
Scala
```
val data = Seq((1, "1"), (2, "2"), (3, "2"), (1, "3
```
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10935#issuecomment-189430834
@rxin Does this fix look good to you?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/11291#issuecomment-186923232
@hvanhovell I just rebased with your new PR, do you mind reviewing again?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/11291#issuecomment-186893918
@hvanhovell In hashSemiJoin() function, when condition is empty, the
boundCondition always evaluates to true here:
https://github.com/apache/spark/blob
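The empty-condition behavior described above can be sketched in plain Scala collections (the names `hashSemiJoin` and `boundCondition` only loosely mirror the Spark code; this is an illustrative assumption, not Spark's actual implementation):

```scala
// Sketch: when no extra join condition is supplied, the bound condition
// defaults to always-true, so key membership alone decides the semi-join match.
def hashSemiJoin[K, V](
    left: Seq[(K, V)],
    rightKeys: Set[K],
    condition: Option[V => Boolean]): Seq[(K, V)] = {
  val boundCondition: V => Boolean = condition.getOrElse(_ => true)
  left.filter { case (k, v) => rightKeys.contains(k) && boundCondition(v) }
}

val left = Seq((1, "a"), (2, "b"), (3, "c"))
val out = hashSemiJoin(left, Set(1, 3), None)
```

With `condition = None`, every key match survives the filter, which is the "always evaluates to true" point made in the comment.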
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/11291#issuecomment-186878372
@hvanhovell I see, sorry for my lack of patience. : )
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/11291#issuecomment-186876120
Looks like the command did not trigger a test?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/11291#issuecomment-186872660
@hvanhovell Could you please advise whether this is the right fix? All Left
Semi related tests passed, but I'm not sure what other impact there might be to
r
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/11291
[SPARK-13422][SQL] Use HashedRelation instead of HashSet in Left Semi Joins
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark SPARK
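The motivation behind the PR title can be sketched in plain Scala (hypothetical types, not Spark's actual `HashedRelation`/`HashSet` classes): a set of join keys can only answer key membership, while a map from key to the matching build-side rows also lets the semi join evaluate a condition on those rows.

```scala
// Build side as (key, value) pairs.
val buildSide = Seq((1, 10), (1, 20), (2, 30))

// HashSet-style: key membership only.
val keySet: Set[Int] = buildSide.map(_._1).toSet

// HashedRelation-style: key -> all build-side rows with that key.
val hashed: Map[Int, Seq[Int]] =
  buildSide.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2)) }

val streamed = Seq((1, "x"), (2, "y"), (3, "z"))

// Left semi join with a condition on the matched build rows
// ("some matched build value > 15") -- impossible with keySet alone.
val semi = streamed.filter { case (k, _) =>
  hashed.get(k).exists(_.exists(_ > 15))
}
```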
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/11244#issuecomment-186733828
Thanks @marmbrus ! I have updated the change following your suggestion.
---
Github user xguo27 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11244#discussion_r53263079
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -680,6 +681,14 @@ class Dataset[T] private[sql](
joinWith(other
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/11244
[SPARK-13366] Support Cartesian join for Datasets
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark SPARK-13366
Alternatively you
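The Cartesian (cross) join that SPARK-13366 adds for Datasets can be illustrated with a minimal plain-Scala sketch (this is the underlying product operation, not the Dataset API itself):

```scala
// Cartesian product: every element of xs paired with every element of ys.
def cartesian[A, B](xs: Seq[A], ys: Seq[B]): Seq[(A, B)] =
  for (x <- xs; y <- ys) yield (x, y)

val pairs = cartesian(Seq(1, 2), Seq("a", "b"))
```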
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/11224#issuecomment-184916252
Yes @JoshRosen , you are referring to integration test, right?
---
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/11224
[SPARK-13283][SQL] Escape column names based on JdbcDialect
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark SPARK-13283
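A hedged sketch of the dialect-aware identifier quoting this PR is about (the trait and quote characters below are illustrative assumptions, not Spark's actual `JdbcDialect` API): different databases escape column names differently, e.g. backticks vs. double quotes.

```scala
// Each dialect supplies its own quoting rule for column identifiers.
trait Dialect { def quoteIdentifier(col: String): String }

object MySqlLike extends Dialect {
  def quoteIdentifier(col: String): String = s"`$col`"
}

object PostgresLike extends Dialect {
  def quoteIdentifier(col: String): String = "\"" + col + "\""
}

// A reserved word like "order" must be escaped per-dialect.
val mysqlCol = MySqlLike.quoteIdentifier("order")
val pgCol = PostgresLike.quoteIdentifier("order")
```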
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/10935
[SPARK-12981][SQL] Fix Python UDF extraction for aggregation.
When the ExtractPythonUDFs rule is applied to an Aggregate operator, it becomes a
Project. This change fixes that and maintains the Aggregate
Github user xguo27 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10515#discussion_r48788553
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/text/DefaultSource.scala
---
@@ -70,15 +70,16 @@ class DefaultSource extends
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10515#issuecomment-168564909
@marmbrus Can we trigger a test for this?
---
Github user xguo27 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10515#discussion_r48591393
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/text/TextSuite.scala
---
@@ -33,8 +33,8 @@ class TextSuite extends QueryTest
Github user xguo27 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10515#discussion_r48590105
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/text/TextSuite.scala
---
@@ -58,6 +58,17 @@ class TextSuite extends QueryTest
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10515#issuecomment-167950674
Thanks @viirya ! I have updated the comment and added unit test.
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10515#issuecomment-167915743
@marmbrus Thanks Michael for your feedback!
Looks like 'value' is used to give the single string column an arbitrary
name. Current implementation str
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/10515
[SPARK-12562][SQL] DataFrame.write.format(text) requires the column name to
be called value
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/10500
[SPARK-12512][SQL] support column name with dot in withColumn()
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark SPARK-12512
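The ambiguity behind SPARK-12512 can be sketched in plain Scala (the helper below is hypothetical, not Spark's resolver): a dot in a column name can be read either as a nested-field path (`a.b` meaning field `b` of struct `a`) or as a literal name, and backtick quoting forces the literal reading.

```scala
// Quote a column name containing a dot so it is treated as a literal
// identifier rather than a nested-field path.
def quoteIfNeeded(name: String): String =
  if (name.contains(".")) s"`$name`" else name

val plain = quoteIfNeeded("id")
val dotted = quoteIfNeeded("a.b")
```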
Github user xguo27 closed the pull request at:
https://github.com/apache/spark/pull/10473
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10473#issuecomment-167258976
Thanks @hvanhovell for clearing it up. I will close this PR.
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10473#issuecomment-167190872
Marking it [WIP] to invite discussion here. : ) As I suspect the original
code includes infinity on both smaller than side and greater than side for a
reason.
---
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/10473
[SPARK-12521][SQL][WIP] JDBCRelation does not honor lowerBound/upperBound
JDBCRelation is not bounding the rows when lowerBound/upperBound are given.
This change honors the bounds given.
You can
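The partitioning question in SPARK-12521 can be sketched as stride-based WHERE-clause generation (column name and clause shape below are illustrative, not JDBCRelation's exact output): the debated point is whether the first and last partitions stay open-ended toward infinity or are clamped to the given lowerBound/upperBound, as this change does.

```scala
// Split [lower, upper] into numParts ranges and emit a predicate per range,
// clamping the final partition to the given upper bound.
def partitionClauses(col: String, lower: Long, upper: Long, numParts: Int): Seq[String] = {
  val stride = (upper - lower) / numParts
  (0 until numParts).map { i =>
    val lo = lower + i * stride
    if (i == numParts - 1) s"$col >= $lo AND $col <= $upper"
    else s"$col >= $lo AND $col < ${lo + stride}"
  }
}

val clauses = partitionClauses("id", 0L, 100L, 4)
```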
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/10437
[SPARK-12462][SQL] Add ExpressionDescription to misc non-aggregate functions
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark SPARK
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10423#issuecomment-166701737
@rxin Great, thanks Reynold! My JIRA id is xguo27.
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/10423#issuecomment-166531147
@rxin Thank you very much for going through the changeset, Reynold! I have
updated it per your suggestions.
---
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/10423
[SPARK-12456][SQL] Add ExpressionDescription to misc functions
First try, not sure how much information we need to provide in the usage
part.
You can merge this pull request into a Git repository
Github user xguo27 commented on a diff in the pull request:
https://github.com/apache/spark/pull/9553#discussion_r48095871
--- Diff:
repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -1026,17 +1027,30 @@ class SparkILoop(
@DeveloperApi
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-165948481
@yhuai I just resolved the conflict. Can we trigger a test? Thanks!
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-162964945
Hi @yhuai, do you think this is good to merge?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-160782796
Looks like some git plugin network issue?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-160779029
Hi @yhuai @liancheng:
As I was hitting SPARK-2 when testing my code, I rebased my branch and
squashed my previous commits together. Now the new commit
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9543#issuecomment-160482218
Thanks @yhuai for reviewing my code! I have updated per your suggestion.
To answer your question, I personally do not have a use case for this. My
take on the
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9543#issuecomment-160332622
@yhuai I see your latest delivery has conflict with this PR, I have
resolved the conflict and re-pushed. @rxin has been reviewing this PR, I figure
you might also want
Github user xguo27 closed the pull request at:
https://github.com/apache/spark/pull/9603
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9603#issuecomment-159191082
OK, I will close it. Thanks!
---
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/9918
[SPARK-11897][SQL] Add @scala.annotation.varargs to sql functions
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark SPARK-11897
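Why the annotation matters can be shown with a small sketch (the function name below is hypothetical, not one of Spark's SQL functions): a Scala varargs method compiles to a `Seq` parameter, which is awkward to call from Java; `@scala.annotation.varargs` makes the compiler emit an additional Java-style array overload.

```scala
import scala.annotation.varargs

object Fns {
  // The annotation adds a Java-friendly overload taking String[].
  @varargs
  def concatAll(parts: String*): String = parts.mkString
}

val joined = Fns.concatAll("a", "b", "c")
```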
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9603#issuecomment-159079918
@andrewor14 What is your take on Jacek's comment? I don't think it's a bad
idea to make it more consistent with a matching log message. Please le
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9543#issuecomment-158694089
Sorry about the failure, can we re-test please?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9612#issuecomment-158678513
@cloud-fan I have added a few tests per your suggestion. Do they look good
to you?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9543#issuecomment-158678073
@rxin Thanks, Reynold! Somehow no test was triggered. Not sure why.
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-158249118
@marmbrus @rxin Does this look good to you guys?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9543#issuecomment-158248923
@marmbrus @rxin What do you think about this change?
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9603#issuecomment-157219217
I agree it is trivial, just thought I could quickly add a log statement. If
Jacek agrees, I can close this PR.
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9612#issuecomment-156276620
Hi Wenchen:
Can you elaborate on using ByteType for char a little more?
Ultimately, the difference between char(x) and varchar(x) is the
fixed/variable
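The fixed- versus variable-length distinction raised above can be sketched in plain Scala (purely illustrative of the semantics, not a proposed implementation): `char(n)` pads values to exactly `n` characters, while `varchar(n)` only caps the length.

```scala
// char(n): truncate to n, then right-pad with spaces to exactly n.
def charN(s: String, n: Int): String = s.take(n).padTo(n, ' ')

// varchar(n): truncate to at most n, no padding.
def varcharN(s: String, n: Int): String = s.take(n)

val fixed = charN("ab", 4)
val variable = varcharN("ab", 4)
```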
Github user xguo27 commented on a diff in the pull request:
https://github.com/apache/spark/pull/9603#discussion_r44728655
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -506,6 +506,7 @@ class SparkContext(config: SparkConf) extends Logging
with
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/9612
[SPARK-11628][SQL][WIP] support column datatype of Char
Can someone review my code to make sure I'm not missing anything? Thanks!
You can merge this pull request into a Git repository by ru
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/9603
[SPARK-11631][Scheduler] Adding 'Starting DAGScheduler' log
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark S
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-155257787
Hi Zhan:
I just updated documentation and added a guard in the code regarding your
feedback on the exception handler.
Thanks!
---
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9543#issuecomment-155151066
Thanks WangTao for your comment!
Based on the comment on my other PR for Spark-11562, I will also add
documentation for this.
---
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/9553
[SPARK-11562][SQL] Provide user an option to init SQLContext or HiveContext
in spark shell
Introducing a boolean property 'spark.sql.hive.context' to turn HiveContext
on and off as t
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/9543
[SPARK-11482][SQL] Make maven repo for Hive metastore jars configurable
Introducing a property called "spark.sql.hive.maven.repo" to let user
configure the maven repository to dow
Github user xguo27 commented on the pull request:
https://github.com/apache/spark/pull/9201#issuecomment-150054928
Right, let me change that too. Thx Sean!
---
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/9201
[SPARK-11242][SQL] In conf/spark-env.sh.template SPARK_DRIVER_MEMORY is
documented incorrectly
Minor fix on the comment
You can merge this pull request into a Git repository by running:
$ git