Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43711769
I don't know; tests pass, and the expected behavior is observed locally for me.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43711659
Yeah, in other PRs as well. What happened to the port incrementing?!
On Tue, May 20, 2014 at 9:40 PM, andrewor14 wrote:
> Hm looks like this test is consistently failing
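The "port incrementing" tdas asks about here (retrying a failed bind on successive ports instead of aborting) can be sketched in plain Python. This is an illustration of the general technique only, not Spark's actual retry code; `bind_with_retry` and its parameters are hypothetical names:

```python
import socket

def bind_with_retry(start_port, max_retries=16):
    """Try start_port, then start_port + 1, ... until a bind succeeds."""
    for offset in range(max_retries):
        port = start_port + offset
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
            return sock, port
        except OSError:
            sock.close()  # port busy; fall through to the next one
    raise OSError("no free port in [%d, %d)" % (start_port, start_port + max_retries))

# Occupy an OS-assigned free port to simulate the conflicting service...
blocker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
blocker.bind(("127.0.0.1", 0))
taken = blocker.getsockname()[1]

# ...then request that same port: the helper steps past it instead of failing.
sock, port = bind_with_retry(taken)
blocker.close()
sock.close()
```

Without such a retry, two test suites binding the same fixed port on one Jenkins machine collide, which is one common cause of the flakiness discussed above.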
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43711603
Hm looks like this test is consistently failing
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43711404
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43711405
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15113/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43709012
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43709004
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/843#issuecomment-43708843
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15112/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/843#issuecomment-43708842
Merged build finished.
---
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43708811
you forgot the magic word
---
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43708803
Jenkins, test this please
---
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/395#issuecomment-43707969
Just mention it here, I have submitted another solution as #837
---
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/791#issuecomment-43706083
@mridulm Thanks very much for your comment! I think a big difference is:
the earlier code called BlockManager#dropFromMemory within putLock, but now we
call it in parallel, we
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/843#issuecomment-43706027
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/843#issuecomment-43706033
Merged build started.
---
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/843#issuecomment-43705871
Jenkins, test this please.
---
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43705813
Jenkins, test this again.
---
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12878644
--- Diff: docs/streaming-programming-guide.md ---
@@ -306,12 +304,16 @@ need to know to write your streaming applications.
## Linking
To write your
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12878638
--- Diff: docs/streaming-programming-guide.md ---
@@ -83,21 +82,21 @@ import org.apache.spark.streaming.api._
val ssc = new StreamingContext("local", "Network
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12878553
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -243,10 +250,13 @@ private class MemoryStore(blockManager: BlockManager,
maxM
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12877361
--- Diff: docs/streaming-programming-guide.md ---
@@ -355,21 +358,21 @@ object has to be created, which is the main entry
point of all Spark Streaming f
A `J
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12877362
--- Diff: docs/streaming-programming-guide.md ---
@@ -579,7 +582,7 @@ This is applied on a DStream containing words (say, the
`pairs` DStream containi
1)` pa
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12877300
--- Diff: docs/streaming-programming-guide.md ---
@@ -306,12 +305,16 @@ need to know to write your streaming applications.
## Linking
To write your
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694156
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694158
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15111/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694027
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694015
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43693056
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15110/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43693054
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43692891
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43692883
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/843#issuecomment-43689807
Can one of the admins verify this patch?
---
GitHub user smungee opened a pull request:
https://github.com/apache/spark/pull/843
[SPARK-1250] Fixed misleading comments in bin/pyspark, bin/spark-class
Fixed a couple of misleading comments in bin/pyspark and bin/spark-class.
The comments make it seem like the script is looking f
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43689533
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15109/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43689531
Merged build finished.
---
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12870325
--- Diff: docs/streaming-programming-guide.md ---
@@ -306,12 +304,16 @@ need to know to write your streaming applications.
## Linking
To
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12870184
--- Diff: docs/streaming-programming-guide.md ---
@@ -105,23 +104,22 @@ generating multiple new records from each record in
the source DStream. In this
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12870111
--- Diff: docs/streaming-programming-guide.md ---
@@ -83,21 +82,21 @@ import org.apache.spark.streaming.api._
val ssc = new StreamingContext("local"
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12869294
--- Diff: docs/streaming-programming-guide.md ---
@@ -83,21 +82,21 @@ import org.apache.spark.streaming.api._
val ssc = new StreamingContext("local"
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43685020
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43685011
Merged build triggered.
---
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/842
[Minor] Correct example of creating a new SparkConf
The example code on the configuration page currently does not compile.
You can merge this pull request into a Git repository by running:
$
Github user xiaocai00 commented on a diff in the pull request:
https://github.com/apache/spark/pull/734#discussion_r12868794
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins.scala ---
@@ -142,6 +136,68 @@ case class HashJoin(
/**
* :: Develope
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43680692
IMHO, slicing a sequence shouldn't change its element values
(floating-point representations); the same goes for ```take``` and ```drop```.
---
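The invariant kanzhang describes can be checked directly in plain Python (a toy illustration, not the PySpark code under review): selection operations must hand back bit-identical floats.

```python
import struct

def bits(x):
    """Raw IEEE-754 bytes of a double, for a bit-level identity check."""
    return struct.pack("<d", x)

xs = [0.1, 0.2, 0.30000000000000004]
take2 = xs[:2]   # analogous to take(2)
drop1 = xs[1:]   # analogous to drop(1)

# Slicing is pure selection: surviving elements are bit-identical
# to the originals, with no re-rounding or re-encoding.
assert [bits(v) for v in take2] == [bits(v) for v in xs[:2]]
assert [bits(v) for v in drop1] == [bits(v) for v in xs[1:]]
```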
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43675504
@ash211 In Python 2.X, it does promote an Int to Long when overflowing (it
still matters in doctests, where you have to be explicit about whether the
result value is 3 or 3L).
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43674365
Thanks for the contribution! Could use it in my own workflows.
Python ints are signed 32-bit numbers, right? Should make that a long
explicitly unless Python does
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43673656
@marmbrus I tried to implement the formula you gave on the mailing list.
Not sure if I missed anything. Please take a look. Note I changed Count() to
return Long to match RD
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43672003
Jenkins, retest this please.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43671863
Can one of the admins verify this patch?
---
GitHub user kanzhang opened a pull request:
https://github.com/apache/spark/pull/841
[SPARK-1822] SchemaRDD.count() should use optimizer
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/kanzhang/spark SPARK-1822
Alternatively you
Github user codedeft commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43667582
I'll try to get David to publish the latest breeze and change the project
file to reference the latest breeze.
---
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43667509
Jenkins, test this again.
---
Github user codedeft commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43666271
To clarify - it requires the latest breeze. The OWL-QN in breeze had bugs,
which I fixed. I'm not sure if David's published an official release yet but
it's in the latest
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665410
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15107/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43665383
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665407
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665406
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43665411
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15106/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43665408
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665409
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15108/
---
Github user codedeft commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43665097
JIRA link:
https://issues.apache.org/jira/browse/SPARK-1892
---
GitHub user codedeft opened a pull request:
https://github.com/apache/spark/pull/840
Adding OWL-QN optimizer for L1 regularizations. It can also handle L2 re...
Adding OWL-QN optimizer for L1 regularizations. It can also handle L2 and
L1 regularizations together (balanced with alpha
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-43663175
To throw another wrench into the Union analogy, there is also the
little-used SparkContext#union, which has signatures for both Seq[RDD[T]] and
varargs RDD[T].
---
Github user douglaz commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-43659430
It isn't just about lines of code; it is about polluting the code with
`asInstanceOf`, and about runtime errors caused by this and by wrong pattern
matching on Sequences.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43658056
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43658039
Merged build triggered.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/838
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43656940
Another [solution](https://github.com/witgo/spark/compare/cachePoint).
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43656301
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43656290
Merged build triggered.
---
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/839
[Minor] Move JdbcRDDSuite to the correct package
It was in the wrong package
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/andrewor14/spark j
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43654562
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43654576
Merged build started.
---
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43654286
@marmbrus
---
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/838
[Hotfix] Blacklisted flaky HiveCompatibility test
`lateral_view_outer` query sometimes returns a different set of 10 rows.
You can merge this pull request into a Git repository by running:
$ git pu
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12841208
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -166,45 +166,51 @@ private class MemoryStore(blockManager: BlockManager,
maxMem
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12840885
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -243,10 +250,13 @@ private class MemoryStore(blockManager: BlockManager,
maxMem
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12840780
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -166,45 +166,51 @@ private class MemoryStore(blockManager: BlockManager,
maxMem
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12840665
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -166,45 +166,51 @@ private class MemoryStore(blockManager: BlockManager,
maxMem
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/791#issuecomment-43618603
It is not MT-safe because the PR is checking/modifying shared state (like the
dropping variable) in an unsafe manner.
I will comment in detail on the patch later today sin
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/791#issuecomment-43611475
As we know, the memory store is used to add, read, and remove blocks. Reading
and removing are quite simple, so let's focus on adding.
Adding may trigger a dropping action, as
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43608897
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15105/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43608896
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/836#issuecomment-43608898
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15104/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/836#issuecomment-43608895
Merged build finished. All automated tests passed.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43608181
@mateiz @mengxr
I added a new RDD operation, `cachePoint`.
---
Github user ghidi closed the pull request at:
https://github.com/apache/spark/pull/821
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/837#issuecomment-43605156
Can one of the admins verify this patch?
---
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/837#issuecomment-43605087
This is a solution with #418 from @marmbrus .
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/837
add support for left semi join
Just submit another solution for #395
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/adrian-wang/spark left-se
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43604360
Hey @ghidi
Sorry I should have mentioned. In order to speed up the process (so that I
can cut another RC for Spark 1.0), I cloned your branch and made the fix myself
Github user ghidi commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43603009
I changed Thread.getContextClassLoader with
Utils.getContextOrSparkClassLoader.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43602468
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/836#issuecomment-43602449
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/836#issuecomment-43602467
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43602450
Merged build triggered.
---
GitHub user ueshin opened a pull request:
https://github.com/apache/spark/pull/836
[SPARK-1889] [SQL] Apply splitConjunctivePredicates to join condition while
finding join keys.
When tables are equi-joined by multiple keys, `HashJoin` should be used, but
`CartesianPr
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43601623
Build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43601624
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15101/
---
Github user ueshin commented on the pull request:
https://github.com/apache/spark/pull/825#issuecomment-43599450
@rxin Thank you for your comment.
I checked the code #734, not deeply yet, though.
It seems like broadcast hash join is used only for `Inner` join so
broadcast neste