Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/3247#discussion_r27782127
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregates.scala
---
@@ -17,285 +17,159 @@
package
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/5178#discussion_r27781922
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/FileShuffleBlockManager.scala ---
@@ -180,7 +180,8 @@ class FileShuffleBlockManager(conf: SparkConf
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/5395
[SPARK-6747][SQL] Support List as a return type in Hive UDF
This patch supports List as a return type in Hive UDF.
We assume a UDF like the one below;
public class UDFToListString extends UDF
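The class body is cut off above; as a rough standalone sketch of what an evaluate() method returning a List looks like (the class name is from the snippet, but the returned values are made up, and the org.apache.hadoop.hive.ql.exec.UDF base class is omitted so the code runs without hive-exec):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch: the evaluate() method of a Hive UDF returning a
// List<String>. The real class would extend
// org.apache.hadoop.hive.ql.exec.UDF; that base class is omitted here so
// the snippet compiles without hive-exec, and the returned values are
// made up for illustration.
public class UDFToListString {
    public List<String> evaluate() {
        List<String> result = new ArrayList<>();
        result.add("1");
        result.add("2");
        result.add("3");
        return result;
    }

    public static void main(String[] args) {
        System.out.println(new UDFToListString().evaluate()); // prints [1, 2, 3]
    }
}
```

The catch is that Spark must derive an output schema from this signature alone, which is where the type-erasure discussion later in the thread comes in.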
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/5395#discussion_r27938425
--- Diff:
sql/hive/src/test/java/org/apache/spark/sql/hive/execution/UDFToListString.java
---
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/5395#issuecomment-90784532
Ok, I will look into the implementation and the documentation of Hive for
that.
---
If your project is set up for it, you can reply to this email and have your
reply
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/5178#issuecomment-89735823
Understood.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/5383#discussion_r27995414
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/Generate.scala ---
@@ -74,10 +84,15 @@ case class Generate(
} else
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/5383#issuecomment-91172886
I found one issue: the current implementation of HiveGenericUdtf always
calls `terminate()`, though it does not call `initialize()` in some cases
because of lazy
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/3782
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/4402
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6179#discussion_r31096485
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUdfSuite.scala
---
@@ -133,6 +134,41 @@ class HiveUdfSuite extends QueryTest
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/6179
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7248#issuecomment-119040198
@marmbrus Ok and thanks.
After this patch is merged, I'll make the same kind of patch for Map because
it has the same issue.
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7248#issuecomment-119036207
@marmbrus Through the discussion of #5395, I think it is hard to support
Java List types in SparkSQL because of type erasure. ISTM that if UDF
developers use this type
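The type-erasure point can be shown with a few lines of plain Java: the element type of a java.util.List is gone at runtime, so Catalyst has nothing from which to derive a Spark SQL DataType.

```java
import java.util.Arrays;
import java.util.List;

// At runtime a List<String> and a List<Integer> share one class: the
// element type is erased, so a UDF declared to return java.util.List gives
// Catalyst nothing to derive a Spark SQL DataType from.
public class ErasureDemo {
    public static boolean sameRuntimeClass() {
        List<String> strings = Arrays.asList("a", "b");
        List<Integer> ints = Arrays.asList(1, 2);
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // prints true
    }
}
```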
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7248#discussion_r34000400
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDFSuite.scala
---
@@ -133,6 +133,32 @@ class HiveUDFSuite extends QueryTest
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/7248
[SPARK-6747] [SQL] Throw an AnalysisException when unsupported Java list
types used in Hive UDF
The current implementation can't handle List as a return type in Hive UDF
and
throws meaningless
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/6179#issuecomment-104236074
@marmbrus please merge it.
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/7257
[SPARK-6912][SQL] Throw an AnalysisException when unsupported Java Map&lt;K,V&gt;
types used in Hive UDF
To give UDF developers a clear error, throw an exception when unsupported
Map&lt;K,V&gt; types are used in Hive UDFs
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7257#issuecomment-119110618
@marmbrus plz review it, thx.
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/8122#issuecomment-132992396
cc: @rxin it's just a reminder.
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7676#issuecomment-127459623
thanks, I'll fix it.
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7676#issuecomment-127482382
retest this please
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7676#issuecomment-127638905
retest this please
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/7305
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7677#discussion_r35736333
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/utils.scala
---
@@ -0,0 +1,167 @@
+/*
+ * Licensed
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7677#issuecomment-125878935
This PR looks reasonable to me; what are the actual performance differences
when this patch is applied?
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7474#discussion_r35733516
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -1365,6 +1365,20 @@ class DataFrame private[sql](
def foreachPartition(f
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7474#discussion_r35733198
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -1365,6 +1365,20 @@ class DataFrame private[sql](
def foreachPartition(f
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7641#discussion_r35731817
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringOperations.scala
---
@@ -699,8 +732,12 @@ case class Substring(str
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7641#issuecomment-125856142
ISTM that `Concat` also needs to support binary types according to
(Hive)[https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF].
So, how about
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7744#discussion_r35745491
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeRow.java
---
@@ -92,7 +92,8 @@ public static int
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7744#issuecomment-125905838
Do we need to change these getter/setter interfaces?
ISTM that `setDecimal(i: ordinal, value: Decimal)` serializes the value
with the precision/scale that the input decimal
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7676#discussion_r36275502
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SortOrder.scala
---
@@ -76,6 +78,7 @@ case class SortPrefix(child: SortOrder
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7676#discussion_r36275517
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/PrefixComparators.java
---
@@ -52,6 +59,38 @@ public int compare(long bPrefix, long
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7676#issuecomment-127898753
@davies @rxin ok, all the comments applied.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7676#discussion_r36260211
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/PrefixComparators.java
---
@@ -52,6 +59,38 @@ public int compare(long bPrefix, long
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7676#issuecomment-127815853
@rxin The tests failed, though ISTM the ``hive-thriftserver`` failures are
not related to this PR.
Is this failure expected, or not?
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7641#discussion_r35841474
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringOperations.scala
---
@@ -699,8 +732,12 @@ case class Substring(str
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/8098
[SPARK-9816][SQL] Support BinaryType in Concat
Support BinaryType in catalyst Concat according to Hive behaviours.
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
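For reference, Hive's concat() on BINARY inputs amounts to byte-array concatenation; a minimal sketch of that semantics (an illustration, not the actual Catalyst Concat code):

```java
import java.util.Arrays;

// Sketch (assumption): concat over BinaryType as plain byte-array
// concatenation, mirroring what Hive's concat() does on BINARY inputs.
// This is an illustration, not the actual Catalyst Concat code.
public class BinaryConcat {
    public static byte[] concat(byte[]... inputs) {
        int total = 0;
        for (byte[] in : inputs) {
            total += in.length;
        }
        byte[] out = new byte[total];
        int offset = 0;
        for (byte[] in : inputs) {
            System.arraycopy(in, 0, out, offset, in.length);
            offset += in.length;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] r = concat(new byte[]{1, 2}, new byte[]{3});
        System.out.println(Arrays.toString(r)); // prints [1, 2, 3]
    }
}
```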
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/8099
[SPARK-9816][SQL] Support BinaryType in Concat
Support BinaryType in catalyst Concat according to Hive behaviours.
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/8098
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/8068#discussion_r36711245
--- Diff: core/src/main/java/org/apache/spark/util/collection/TimSort.java
---
@@ -914,7 +915,7 @@ private void mergeHi(int base1, int len1, int base2,
int
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/8122#discussion_r36993560
--- Diff: unsafe/src/main/java/org/apache/spark/unsafe/types/ByteArray.java
---
@@ -29,4 +31,45 @@
public static void writeToMemory(byte[] src, Object
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/8122#issuecomment-131117757
If no problem, could you merge this? cc: @rxin
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/8099
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/8122
[SPARK-9867][SQL] Move utilities for binary data into ByteArray
The utilities such as Substring#substringBinarySQL and
BinaryPrefixComparator#computePrefix for binary data are put together
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/8122#issuecomment-130251509
retest this please
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/8122#issuecomment-130300499
@rxin Could you review this?
The unit tests failed though, ISTM these failures also happen in master.
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7305#issuecomment-124343855
retest this please
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/7676
[SPARK-9360][SQL] Support BinaryType in PrefixComparators for
UnsafeExternalSort
The current implementation of UnsafeExternalSort uses NoOpPrefixComparator
for binary-typed data.
So, we need
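The idea behind replacing NoOpPrefixComparator is to derive an 8-byte prefix from the leading bytes of each value and compare prefixes as unsigned longs; a standalone sketch (not the actual PrefixComparators code):

```java
// Sketch (assumption) of a binary sort prefix: pack the leading 8 bytes of
// each array into a long and compare prefixes as unsigned values, so the
// sorter can order most records without reading the full byte arrays.
// This is an illustration, not the actual PrefixComparators code.
public class BinaryPrefix {
    public static long computePrefix(byte[] bytes) {
        long prefix = 0L;
        for (int i = 0; i < 8; i++) {
            long b = (i < bytes.length) ? (bytes[i] & 0xffL) : 0L;
            prefix = (prefix << 8) | b;
        }
        return prefix;
    }

    public static int compare(long aPrefix, long bPrefix) {
        // Bytes are unsigned, so the packed longs must be compared unsigned.
        return Long.compareUnsigned(aPrefix, bPrefix);
    }

    public static void main(String[] args) {
        long small = computePrefix(new byte[]{0x01});
        long large = computePrefix(new byte[]{(byte) 0xff});
        System.out.println(compare(small, large) < 0); // prints true
    }
}
```

Equal prefixes still need a full comparison of the underlying arrays; the prefix only serves as a cheap first pass.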
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7259#issuecomment-124571393
This patch's very useful for users :))
I left some comments;
I think we can add a function name together in ExpressionDescription
according to Hive `Description
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7259#issuecomment-125067194
Ok, thanks.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7606#discussion_r35290606
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1720,4 +1720,11 @@ object functions {
UnresolvedFunction(udfName
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7392#discussion_r34590505
--- Diff: core/src/main/scala/org/apache/spark/TaskContext.scala ---
@@ -32,7 +32,13 @@ object TaskContext {
*/
def get(): TaskContext
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7392#discussion_r34591393
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -56,6 +56,16 @@ class CodeGenContext
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7324#discussion_r34524246
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -501,7 +501,15 @@ private[hive] case class HiveGenericUDTF(
protected
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7324#issuecomment-121097682
@marmbrus Could you check and merge this?
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4650#issuecomment-121105412
@emir-munoz @blankdots This PR is totally stale, so it'd be better to refactor
this if you're interested.
Also, ISTM this kind of loader extension should
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/5549
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/5067#issuecomment-123231061
Ok, I'll close it.
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/5067
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/4244
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7305#issuecomment-121659767
I understand that Hive ``explode`` only takes a single expression; should we
apply the same limitation to ``UserDefinedGenerator`` used in the
DataFrame#explode
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7324#issuecomment-121606700
@marmbrus This is just a reminder.
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7418#issuecomment-121654917
This patch seems to duplicate #7076.
Why did you make a new PR?
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7418#discussion_r34692458
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/GenerateMutableProjection.scala
---
@@ -45,7 +45,28 @@ object
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7324#issuecomment-121645978
Ok, I'll close this.
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/7324
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4399#issuecomment-152342718
Ok, I'll fix it.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/9137#discussion_r43347874
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -171,21 +187,9 @@ object JdbcUtils extends Logging
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/9137#discussion_r43347707
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -121,6 +122,21 @@ object JdbcUtils extends Logging
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/9137#issuecomment-152073314
Great work! I left some review comments.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/9137#discussion_r43348162
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -207,6 +225,25 @@ case object PostgresDialect extends JdbcDialect
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/9137#discussion_r43347951
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -72,7 +72,7 @@ abstract class JdbcDialect {
* or null
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/9350
[SPARK-11394][SQL] Throw IllegalArgumentException for unsupported types in
postgresql
If a DataFrame has BYTE types, it throws an exception:
org.postgresql.util.PSQLException: ERROR: type "byte&
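The fix moves the failure to the dialect: resolve the JDBC column type eagerly and throw IllegalArgumentException for unmappable types. A minimal sketch of that fail-fast shape (the type names and mappings here are illustrative, not the actual JdbcDialects code):

```java
// Sketch (assumption) of the fail-fast shape: resolve the PostgreSQL column
// type up front and throw IllegalArgumentException for types PostgreSQL
// cannot represent, instead of letting the driver fail later with a
// PSQLException. Type names and mappings are illustrative.
public class PostgresTypeMap {
    public static String getJDBCType(String catalystType) {
        switch (catalystType) {
            case "StringType":  return "TEXT";
            case "IntegerType": return "INTEGER";
            case "DoubleType":  return "FLOAT8";
            default:
                throw new IllegalArgumentException(
                        "Unsupported type in postgresql: " + catalystType);
        }
    }

    public static void main(String[] args) {
        System.out.println(getJDBCType("StringType")); // prints TEXT
    }
}
```

Failing at analysis time gives the user the offending type name, rather than a server-side error after the write has started.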
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/8374#discussion_r43359470
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -278,3 +285,59 @@ case object MsSqlServerDialect extends JdbcDialect
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/8374#issuecomment-152111324
Great work! I left some trivial comments.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/9137#discussion_r43472350
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -207,6 +225,25 @@ case object PostgresDialect extends JdbcDialect
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/9478
[SPARK-6521][Core] Bypass unnecessary network access if block managers
share an identical host
Refactored #5178 and added unit tests.
You can merge this pull request into a Git repository
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/8374#discussion_r43833399
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -82,6 +82,14 @@ abstract class JdbcDialect {
def getJDBCType(dt
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/9478#issuecomment-154231492
retest this please
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/9478#issuecomment-154231203
retest this please
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/9478#issuecomment-154270938
@andrewor14 Could you review this and give some suggestions?
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4399#issuecomment-152381853
@ankurdave @andrewor14 Fixed and could you merge this?
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/7324
[SPARK-8955][SQL] Replace a duplicated initialize() in HiveGenericUDTF with
new one
HiveGenericUDTF#initialize(ObjectInspector[] argOIs) in v0.13.1 is
duplicated, so it needs to be replaced
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7076#issuecomment-120247243
This fix seems kind of hacky to me.
It'd be better to check the code size and, if it is over 64KB (the Janino
limitation), throw an exception
to fall back
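The fallback idea sketched in the comment, with source length standing in as a crude proxy for the generated method's bytecode size (an assumption; a real check would need to measure compiled code, not source text):

```java
// Sketch (assumption): gate codegen on the size of the generated source,
// using source length as a crude proxy for the JVM's 64KB-per-method
// bytecode limit that Janino surfaces at compile time. The real check
// would have to measure compiled code, not source text.
public class CodegenFallback {
    static final int METHOD_LIMIT_BYTES = 64 * 1024;

    public static String choosePath(String generatedSource) {
        return generatedSource.length() > METHOD_LIMIT_BYTES ? "interpreted" : "codegen";
    }

    public static void main(String[] args) {
        StringBuilder big = new StringBuilder();
        for (int i = 0; i < 70000; i++) {
            big.append('x');
        }
        System.out.println(choosePath(big.toString())); // prints interpreted
        System.out.println(choosePath("a + b"));        // prints codegen
    }
}
```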
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7076#issuecomment-120623834
Yes, and falling back to normal expressions turns off the unsafe optimization.
I'm concerned that this fix is less meaningful for most users
because
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/7305
[SPARK-8930][SQL] Support a star '*' in generator function arguments
The current implementation throws an exception if generators contain a star
'*' like the code below;
val df = Seq((1, 1,2), (2, 4
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7305#issuecomment-120815118
@chenghao-intel So I'll fix this PR to throw an AnalysisException if
Generate has '*'.
Is it ok?
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/7305#issuecomment-120816055
@chenghao-intel Ok, thanks.
Is the limitation only applied to ``Explode``? I mean, can other
generator functions have multiple expressions?
Anyway, I'll
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/8534#issuecomment-161548934
test this please
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/10346
[SPARK-12392] Optimize a location order of broadcast blocks by considering
preferred local hosts
When multiple workers exist in a host, we can bypass unnecessary remote
access for broadcasts
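A minimal sketch of the ordering idea (host names and the helper are illustrative, not the actual BlockManager code): replicas on the reader's own host are listed first, so fetches try same-host sources before remote ones.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch (assumption) of the location ordering: list block replicas on the
// reader's own host before remote ones, so a broadcast fetch tries cheap
// same-host sources first. Host names and the helper are illustrative,
// not the actual BlockManager code.
public class LocalFirstOrdering {
    public static List<String> sortByLocality(List<String> hosts, String localHost) {
        List<String> ordered = new ArrayList<>();
        for (String h : hosts) {
            if (h.equals(localHost)) ordered.add(h);
        }
        for (String h : hosts) {
            if (!h.equals(localHost)) ordered.add(h);
        }
        return ordered;
    }

    public static void main(String[] args) {
        List<String> hosts = Arrays.asList("hostB", "hostA", "hostB");
        System.out.println(sortByLocality(hosts, "hostA")); // prints [hostA, hostB, hostB]
    }
}
```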
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4399#issuecomment-165357980
test this please
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/10346#issuecomment-165349453
I did quick benchmarks for large broadcasts;
- aws m4.x4large x 4, 4 workers in a host
- elapsed time:
-- w/opt.: 6.887943434s, w/o opt.: 11.738593435s
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4674#issuecomment-165358131
@andrewor14 @ankurdave Fixed. Also, could you merge #4399?
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4674#issuecomment-165354761
@andrewor14 okay
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/10346#issuecomment-165705964
@andrewor14 Could you review this?
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-168651319
@yuhai Oh... my bad :(( Thanks!
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-168618638
@yuhai Could you review it?
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/10521#issuecomment-168618986
retest this please