Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6585#issuecomment-108064219
Can you add a test with a join, please?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6585#issuecomment-108063274
@rxin @marmbrus can you please review this PR?
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6590#issuecomment-108052390
Thanks for taking over :)
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6580#issuecomment-107897875
The build seems to have failed because of something else...
GitHub user ogirardot opened a pull request:
https://github.com/apache/spark/pull/6580
[SPARK-8038][SQL][PySpark] Fix when function of PySpark SQL Column
definition
The chaining of when functions was broken in PySpark SQL.
This fix is only changing the Column.py file to fix the
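The intended chaining behavior can be sketched in plain Python. This is a hypothetical illustration of SQL CASE WHEN semantics, not the PySpark implementation; `case_when` and `branches` are made-up names:

```python
def case_when(value, branches, default=None):
    """Return the result of the first branch whose predicate matches the
    value, else the default (mirrors CASE WHEN ... OTHERWISE)."""
    for predicate, result in branches:
        if predicate(value):
            return result
    return default

branches = [
    (lambda n: n == 1, "one"),
    (lambda n: n == 2, "two"),
]
assert case_when(1, branches, "many") == "one"
assert case_when(2, branches, "many") == "two"
assert case_when(5, branches, "many") == "many"  # no branch matched
```

The key point the fix restores is that each additional `when` extends the same chain, with the first matching condition winning.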
Github user ogirardot closed the pull request at:
https://github.com/apache/spark/pull/6237
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6237#issuecomment-103615700
ok - :-/
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5408#issuecomment-103561406
Can you rebase this PR?
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6237#issuecomment-103382692
@rxin any input?
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6237#issuecomment-103381292
I don't really understand: the Array is empty, not filled with nulls.
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6237#issuecomment-103349381
@marmbrus I may misunderstand the nullable flag, but I can have an empty
dataset with a non-nullable column.
For example:
```
scala> va
```
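The point can be sketched outside Spark in plain Python (a hypothetical illustration, not Spark code): even when the input column contains no nulls at all, an aggregate such as MIN over zero rows has nothing to return.

```python
def sql_min(values):
    """MIN with SQL semantics: ignores nulls and returns None (null)
    when there is no value to aggregate."""
    non_null = [v for v in values if v is not None]
    return min(non_null) if non_null else None

assert sql_min([3, 1, 2]) == 1
assert sql_min([]) is None  # empty, non-nullable input still yields null
```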
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6237#issuecomment-103192882
Seems to fail for something completely different:
```
[error]
/home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/main/scala/org/apache/spark/sql
```
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6237#issuecomment-103189133
Added the license to the test file.
GitHub user ogirardot opened a pull request:
https://github.com/apache/spark/pull/6237
[SPARK-7696][SQL] Aggregate function's result should be nullable only if
the input expression is nullable
The following functions are now nullable or not according to their child
expression.
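The rule the PR title describes can be sketched in plain Python (a hypothetical model, not Spark's expression tree; `Expr` and `Sum` are made-up names): the aggregate's result is nullable exactly when its child expression is.

```python
class Expr:
    """Stand-in for an input expression with a nullability flag."""
    def __init__(self, nullable: bool):
        self.nullable = nullable

class Sum:
    """Stand-in for an aggregate whose result nullability follows its child."""
    def __init__(self, child: Expr):
        self.child = child

    @property
    def nullable(self) -> bool:
        # result is nullable only if the input expression is nullable
        return self.child.nullable

assert Sum(Expr(nullable=False)).nullable is False
assert Sum(Expr(nullable=True)).nullable is True
```

The later review discussion (empty datasets still producing null aggregates) is exactly the edge case this simple propagation rule does not cover.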
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/6104#issuecomment-103172055
Is it possible to merge this for 1.4? I'd really need this :)
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5698#issuecomment-99893985
@pwendell do you mind launching the Jenkins test, please?
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5698#issuecomment-99813730
The test is now fixed and uses its own test variables.
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5698#issuecomment-99764986
Can you launch the test, and I'll change it accordingly?
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5698#issuecomment-99762690
rebased properly :)
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/4957#issuecomment-99439498
As we talked about during Strata, you should rebase your work in order for
the pull request to build, and then one of the admins will relaunch the
Jenkins tests.
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5698#issuecomment-96767576
Jenkins, test this please. :)
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5698#issuecomment-96157353
Is this the job now?
https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/
(triggered by https://spark-prs.appspot.com/)
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5698#issuecomment-96157282
Somehow Jenkins doesn't seem to be building this PR:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/
GitHub user ogirardot opened a pull request:
https://github.com/apache/spark/pull/5698
[SPARK-7118] [Python] Add the coalesce Spark SQL function available in
PySpark
This patch adds a proxy call from PySpark to the Spark SQL coalesce
function. It comes out of a
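The SQL COALESCE semantics being exposed can be sketched in plain Python (a hypothetical helper, not the PySpark API): it returns the first non-null argument, or null when every argument is null.

```python
def coalesce(*values):
    """Return the first non-None value, mirroring SQL COALESCE;
    None when all arguments are None (or none are given)."""
    for v in values:
        if v is not None:
            return v
    return None

assert coalesce(None, None, 3) == 3
assert coalesce(None, None) is None
assert coalesce(0, 1) == 0  # 0 is a value, not null
```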
Github user ogirardot closed the pull request at:
https://github.com/apache/spark/pull/5683
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5683#issuecomment-96145809
Jenkins, pretty please? :)
GitHub user ogirardot opened a pull request:
https://github.com/apache/spark/pull/5683
[SPARK-7118] [Python] Add the coalesce Spark SQL function available in
PySpark
This patch adds a proxy call from PySpark to the Spark SQL coalesce
function. It comes out of a
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5571#issuecomment-94190848
of course, have a nice weekend :)
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5571#issuecomment-94180136
I pushed the modifications.
Third time's the charm! :)
Github user ogirardot commented on a diff in the pull request:
https://github.com/apache/spark/pull/5571#discussion_r28644734
--- Diff: core/src/test/java/org/apache/spark/JavaAPISuite.java ---
@@ -762,6 +762,20 @@ public void min() {
}
@Test
+ public void
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5571#issuecomment-94171708
Seems like the precision for float is needed in the test.
I'll add it back
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5571#issuecomment-94146576
I've taken your reviews into account.
Github user ogirardot commented on a diff in the pull request:
https://github.com/apache/spark/pull/5571#discussion_r28642640
--- Diff: core/src/test/java/org/apache/spark/JavaAPISuite.java ---
@@ -762,6 +762,20 @@ public void min() {
}
@Test
+ public void
Github user ogirardot commented on a diff in the pull request:
https://github.com/apache/spark/pull/5571#discussion_r28642638
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaDoubleRDD.scala
---
@@ -164,6 +166,20 @@ class JavaDoubleRDD(val srdd: RDD[scala.Double
Github user ogirardot commented on the pull request:
https://github.com/apache/spark/pull/5571#issuecomment-94146264
I removed the bad import; it was a typo.
GitHub user ogirardot opened a pull request:
https://github.com/apache/spark/pull/5571
SPARK-6993: Add default min, max methods for JavaDoubleRDD
The default method will use Guava's Ordering instead of
java.util.Comparator.naturalOrder() because it's not available
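The convenience being added, min/max that fall back to natural ordering when no comparator is supplied, can be sketched in plain Python (hypothetical helper names; the actual PR adds Java methods backed by Guava's Ordering):

```python
def rdd_min(values, key=None):
    """Minimum by natural ordering when no key/comparator is given,
    by the supplied key otherwise."""
    return min(values) if key is None else min(values, key=key)

def rdd_max(values, key=None):
    """Maximum, with the same defaulting behavior as rdd_min."""
    return max(values) if key is None else max(values, key=key)

assert rdd_min([3.0, 1.5, 2.0]) == 1.5          # natural ordering
assert rdd_max([3.0, 1.5, 2.0]) == 3.0
assert rdd_min([3.0, 1.5, 2.0], key=lambda x: -x) == 3.0  # custom ordering
```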
GitHub user ogirardot opened a pull request:
https://github.com/apache/spark/pull/5569
SPARK-6992: Fix documentation example for Spark SQL on StructType
This patch fixes the Java examples for Spark SQL when programmatically
defining a schema and mapping Rows.
GitHub user ogirardot opened a pull request:
https://github.com/apache/spark/pull/5564
SPARK-6988: Fix documentation regarding DataFrames using the Java API
This patch includes:
* adding how to use map after an SQL query using javaRDD
* fixing the first few Java examples