Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r60558201
--- Diff:
yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala ---
@@ -85,6 +87,7 @@ class YarnAllocatorSuite extends SparkFunSuite with
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r60545824
--- Diff:
yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala ---
@@ -85,6 +87,7 @@ class YarnAllocatorSuite extends SparkFunSuite with
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12437#discussion_r60261647
--- Diff:
core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---
@@ -72,6 +72,22 @@ private[ui] class RDDOperationCluster(val id: String
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-211904423
@srowen Thanks. All good
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-211803125
```
[error] (streaming-flume-sink/*:mimaFindBinaryIssues)
java.lang.ArrayIndexOutOfBoundsException: 1497
```
Jenkins retest this please
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-211617734
Jenkins retest this please
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12401#issuecomment-211487546
Cheers @andrewor14 for the clarification. I will do a PR for that.
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r60095152
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/CustomShuffledRDD.scala ---
@@ -53,9 +53,12 @@ class CoalescedPartitioner(val parent: Partitioner
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r60095025
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/Split.scala ---
@@ -112,12 +114,15 @@ final class CategoricalSplit private[ml
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r60094956
--- Diff:
core/src/test/scala/org/apache/spark/util/collection/FixedHashObject.scala ---
@@ -22,4 +22,8 @@ package org.apache.spark.util.collection
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-211468109
Are we good with that?
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12401#issuecomment-211466903
@andrewor14 Sorry I've been confused by the `::DeveloperApi::` in comments.
Should we remove `::DeveloperApi::` from the comment then?
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12437#discussion_r60091207
--- Diff:
core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---
@@ -72,6 +72,22 @@ private[ui] class RDDOperationCluster(val id: String
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-210817071
@srowen Do you see what makes the MiMa tests fail?
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r59789301
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala ---
@@ -115,7 +119,9 @@ private[sql] class PythonUserDefinedType
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-210127946
Damn, since I opened the PR we've had 12 more implementations of
hashCode/equals that don't come in pairs!
GitHub user joan38 opened a pull request:
https://github.com/apache/spark/pull/12401
[SPARK-14640] Add @DeveloperApi on PythonUserDefinedType
## What changes were proposed in this pull request?
Add @DeveloperApi on PythonUserDefinedType
You can merge this pull request into
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r59771079
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -169,6 +170,8 @@ case class Literal protected (value
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r59571678
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -169,6 +169,11 @@ case class Literal protected (value
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-209110346
Sure, I was busy with another PR.
Do you want to give up on all Partition subtypes too, or is this good as
per commit 45e816a ?
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12149#issuecomment-208783120
@marmbrus I think we now have an end-to-end test for both RDDs and Datasets.
Is that all good for you?
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r58728708
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -53,14 +53,22 @@ import org.apache.spark.util.{NextIterator
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r58634718
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -53,14 +53,22 @@ import org.apache.spark.util.{NextIterator
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12149#issuecomment-206038639
The upcoming test *UserDefinedTypeSuite.UDTs with JSON and Dataset* is
going to fail.
Can you confirm that this is due to the lack of support of UDTs in Datasets
or
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12149#issuecomment-206038019
The upcoming test *UserDefinedTypeSuite.UDTs with JSON and Dataset* is
going to fail.
Can you confirm that this is due to the lack of support of UDTs in Datasets
or
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12149#discussion_r58625023
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/ScalaReflectionSuite.scala
---
@@ -81,9 +81,43 @@ case class MultipleConstructorsData(a
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12149#discussion_r58621915
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/ScalaReflectionSuite.scala
---
@@ -81,9 +81,43 @@ case class MultipleConstructorsData(a
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r58542253
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -53,14 +53,22 @@ import org.apache.spark.util.{NextIterator
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r58514424
--- Diff: core/src/main/scala/org/apache/spark/Partition.scala ---
@@ -26,6 +26,11 @@ trait Partition extends Serializable {
*/
def index: Int
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12157#issuecomment-205737385
Not yet. I wanted to get some thoughts first before I bothered implementing
it the wrong way everywhere.
I will push a new version soon with your comments and more (if
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r58511910
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/Matrices.scala
---
@@ -590,6 +590,11 @@ class SparseMatrix @Since("1.3.0") (
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r58511925
--- Diff: core/src/main/scala/org/apache/spark/rdd/CoGroupedRDD.scala ---
@@ -58,10 +58,20 @@ private[spark] case class NarrowCoGroupSplitDep
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12157#discussion_r58511863
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -53,14 +53,22 @@ import org.apache.spark.util.{NextIterator
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/12149#issuecomment-205529891
Makes sense. Let's see what @cloud-fan and @marmbrus think then.
GitHub user joan38 opened a pull request:
https://github.com/apache/spark/pull/12157
[SPARK-6429] Implement hashCode and equals together
## What changes were proposed in this pull request?
Implement some hashCode and equals together in order to enable the
scalastyle
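The rule this PR enforces (equals and hashCode must be overridden together so that equal values hash identically, as hash-based collections require) can be sketched in a minimal standalone example. The class and field names below are hypothetical, not taken from the PR:

```scala
// Hypothetical illustration of the SPARK-6429 rule: a class that overrides
// equals must also override hashCode, so two equal instances always produce
// the same hash (otherwise HashMap/HashSet lookups silently break).
class SimplePartition(val index: Int, val parentIds: Seq[Int]) extends Serializable {
  override def equals(other: Any): Boolean = other match {
    case p: SimplePartition => index == p.index && parentIds == p.parentIds
    case _                  => false
  }
  // Defined together with equals, using the same fields in the same way.
  override def hashCode: Int = 31 * index + parentIds.hashCode
}
```

With both methods in place, a `HashSet` deduplicates equal instances correctly; with only `equals` overridden, it would keep both.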
Github user joan38 closed the pull request at:
https://github.com/apache/spark/pull/11772
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11772#issuecomment-205322558
> I don't think this is a great fix, since it's just catching a general
exception (which could be triggered by more than just the case in question)
GitHub user joan38 opened a pull request:
https://github.com/apache/spark/pull/12149
[SPARK-13929] Use Scala reflection for UDFs
## What changes were proposed in this pull request?
Enable ScalaReflection and User Defined Types for plain Scala classes.
This involves
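The idea of deriving structure from plain Scala classes via reflection can be shown in a simplified, standalone form. This is not Spark's actual `ScalaReflection` implementation, only a sketch of the raw material it works from: a case class's accessor names and types, obtained through `scala.reflect.runtime`:

```scala
import scala.reflect.runtime.universe._

// Hypothetical case class standing in for a user's plain Scala type.
case class Point(x: Double, y: Double, label: String)

// Simplified sketch: list a case class's field names and types, the kind of
// information schema inference via Scala reflection starts from.
def fieldNames[T: TypeTag]: Set[String] =
  typeOf[T].decls.collect {
    case m: MethodSymbol if m.isCaseAccessor => m.name.toString
  }.toSet
```

A real encoder would go on to map each field's type to a SQL data type; this sketch stops at enumerating the fields.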
Github user joan38 closed the pull request at:
https://github.com/apache/spark/pull/11937
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11937#issuecomment-202596918
I've done some benchmarking, and the `Thread.sleep` I added in
`SortShuffleManager.getBlockData` actually doesn't affect the `fetchWaitTime`.
So this cha
GitHub user joan38 opened a pull request:
https://github.com/apache/spark/pull/11937
[SPARK-2208] Fix for local metrics tests can fail on fast machines
## What changes were proposed in this pull request?
A fix for local metrics tests that can fail on fast machines.
This
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-200933433
Sorry for the mess I introduced.
I will make a new PR with 100ms once it's reverted, and this time we might
want to run Jenkins multiple times to be sure th
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-200916091
Yes, it looks like our change did not resolve this issue.
Maybe we should go for more than just 10ms. 100ms?
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-199567055
All good, this is ready to merge I guess.
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-199238328
Ahah, I knew it!
I will tidy this up and push again with only 10ms.
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-199181894
I'm thinking that maybe this:
```scala
// just to make sure some of the tasks take a noticeable amount of time
val w = { i
```
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-199073837
Failed for 10ms, 25ms, 50ms. So trying with 75ms as it passes for 100ms.
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11747#discussion_r56767638
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala ---
@@ -255,21 +262,19 @@ class SparkListenerSuite extends SparkFunSuite
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-198816120
Damn, it failed again! Should we try with 100ms?
I just pushed the 100ms version.
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11747#discussion_r56757462
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala ---
@@ -255,21 +262,19 @@ class SparkListenerSuite extends SparkFunSuite
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11772#discussion_r56422387
--- Diff: mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala ---
@@ -282,7 +282,10 @@ private[ml] object DefaultParamsReader
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11772#discussion_r56422280
--- Diff: mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala ---
@@ -282,7 +282,10 @@ private[ml] object DefaultParamsReader
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-197803004
Yeah I'm investigating but any ideas are welcome.
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-198747886
Can we try to run Jenkins? I did the change for 10ms.
It always passes on my PC anyway, as it's a slow machine.
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11747#discussion_r56406777
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala ---
@@ -476,3 +480,19 @@ private class ListenerThatAcceptsSparkConf(conf
Github user joan38 commented on the pull request:
https://github.com/apache/spark/pull/11747#issuecomment-198686538
I tried with 1000ms and the test runs forever.
Should we try 10ms? 100ms works too but the test runs for a minute...
GitHub user joan38 opened a pull request:
https://github.com/apache/spark/pull/11772
[SPARK-13815][MLlib] Provide better Exception messages in Pipeline load
methods
## What changes were proposed in this pull request?
Provides better Exception messages in Pipeline load
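The shape of such a fix can be sketched as follows. The function and parameter names are hypothetical, not Spark's `DefaultParamsReader` API: the point is to catch the low-level failure at the load site and rethrow it with a message naming what was missing, rather than letting a bare `NoSuchElementException` escape:

```scala
import scala.util.control.NonFatal

// Hedged sketch (names hypothetical): wrap a low-level lookup failure during
// a load with a message that says which parameter was missing and what keys
// were available, keeping the original exception as the cause.
def loadStageParam(params: Map[String, String], key: String): String =
  try {
    params(key)
  } catch {
    case NonFatal(e) =>
      throw new IllegalArgumentException(
        s"Pipeline load failed: required parameter '$key' was not found. " +
          s"Available keys: ${params.keys.mkString(", ")}", e)
  }
```

As the review comment quoted above notes, catching too broad an exception class has its own cost; scoping the catch to the specific lookup keeps the wrapper from masking unrelated failures.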
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11747#discussion_r56311617
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala ---
@@ -476,3 +480,19 @@ private class ListenerThatAcceptsSparkConf(conf
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11747#discussion_r56311161
--- Diff: core/src/test/scala/org/apache/spark/LocalSparkContext.scala ---
@@ -27,12 +27,12 @@ trait LocalSparkContext extends BeforeAndAfterEach with
Github user joan38 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11747#discussion_r56311022
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala ---
@@ -19,14 +19,17 @@ package org.apache.spark.scheduler
GitHub user joan38 opened a pull request:
https://github.com/apache/spark/pull/11747
[SPARK-2208] Fix for local metrics tests can fail on fast machines
## What changes were proposed in this pull request?
A fix for local metrics tests that can fail on fast machines
This
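The approach debated throughout this thread (10ms, 50ms, 100ms…) boils down to padding each simulated task so its measured duration is never zero on a fast machine. A minimal standalone sketch of that idea, with hypothetical names and constants, not the suite's actual code:

```scala
// Hedged sketch: run a task body, then sleep a small fixed amount so the
// measured run time is noticeably nonzero even on a fast machine. The
// trade-off discussed in the PR: too small and metrics can still read zero,
// too large (e.g. 1000ms) and the whole suite slows to a crawl.
def timedTask(padMs: Long)(work: => Unit): Long = {
  val start = System.nanoTime()
  work
  Thread.sleep(padMs) // guarantees a measurable duration
  (System.nanoTime() - start) / 1000000 // elapsed milliseconds
}
```

Any listener asserting `executorRunTime > 0` then passes regardless of how fast the real work completes.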