Github user velvia commented on the issue:
https://github.com/apache/spark/pull/15219
Hi, in my experience, interfaces which are implemented by final methods are
inlined just fine by the JVM JIT… but you might know more about this.
The point is more about simplification, and that the
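The inlining claim above can be sketched in plain Java. All names here are hypothetical, not Spark's actual API; the point is that a monomorphic call site through an interface whose sole loaded implementation is final is an easy devirtualization and inlining target for the JIT.

```java
// Hypothetical sketch: names are illustrative, not Spark's API.
interface Accessor {
    int getInt(int row);
}

// A final class makes every method effectively final; once this is the
// only implementation the JIT has seen, calls through Accessor can
// usually be devirtualized and inlined.
final class IntAccessor implements Accessor {
    private final int[] data;
    IntAccessor(int[] data) { this.data = data; }
    @Override public int getInt(int row) { return data[row]; }
}

public class InlineSketch {
    // Hot loop through the interface; with one implementation loaded,
    // the JIT typically inlines getInt here.
    public static int sum(Accessor a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += a.getInt(i);
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(new IntAccessor(new int[]{1, 2, 3, 4}), 4)); // prints 10
    }
}
```

Whether inlining actually happens still depends on which implementations the JVM has loaded at the call site, which is presumably what the caveat above is about.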
Github user velvia commented on the issue:
https://github.com/apache/spark/pull/15219
@kiszk I had a quick look at your `ColumnVector` and had some thoughts.
Right now, it seems `ColumnVector` as an abstract class mixes many
different responsibilities into one type.
- Methods
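A minimal Java sketch of the kind of split being suggested, separating read-only and writable concerns into narrower interfaces. All names below are hypothetical and deliberately simplified, not Spark's actual `ColumnVector` API.

```java
// Hypothetical sketch: split a monolithic abstract class into narrower
// interfaces so readers, writers, and storage can evolve independently.
interface ReadableColumn {
    int getInt(int row);
    boolean isNullAt(int row);
}

interface WritableColumn extends ReadableColumn {
    void putInt(int row, int value);
    void putNull(int row);
}

// One concrete storage strategy; an off-heap variant could implement the
// same interfaces without inheriting unrelated machinery.
final class OnHeapColumn implements WritableColumn {
    private final int[] values;
    private final boolean[] nulls;

    OnHeapColumn(int capacity) {
        values = new int[capacity];
        nulls = new boolean[capacity];
    }

    @Override public int getInt(int row) { return values[row]; }
    @Override public boolean isNullAt(int row) { return nulls[row]; }
    @Override public void putInt(int row, int v) { values[row] = v; nulls[row] = false; }
    @Override public void putNull(int row) { nulls[row] = true; }

    public static void main(String[] args) {
        OnHeapColumn c = new OnHeapColumn(2);
        c.putInt(0, 7);
        System.out.println(c.getInt(0)); // prints 7
    }
}
```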
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/10593#issuecomment-169412883
@nongli this is awesome. I suppose changes to the logical/physical query
planner are coming also? Is that going to be in a separate PR?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/10593#discussion_r48991290
--- Diff: core/src/test/scala/org/apache/spark/Benchmark.scala ---
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/2219#issuecomment-3790
@pwendell Ok, modified it, it's now just a tiny addition of a menu item to
point to the wiki page.
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/2219#issuecomment-2303
Let me modify this PR to point to the wiki page, sounds good? I think
having that menu item is still helpful
On Sun, Sep 14, 2014 at 9:20 PM, Patrick
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/2219#issuecomment-5471
Hi Patrick,
I was going to create a new community projects page, but if you think they
can stay on the powered by page but on top, that's fin
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/2219#issuecomment-54251980
@mateiz Sure, my username is "velvia". I can create a new wiki page with
the contents of the page in this PR. Thanks!
GitHub user velvia opened a pull request:
https://github.com/apache/spark/pull/2219
Add a Community Projects page
This adds a new page to the docs listing community projects -- those
created outside of Apache Spark that are of interest to the community of Spark
users. Anybody
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/2077#issuecomment-53846867
Ah yes, the job group.
On Fri, Aug 29, 2014 at 12:47 AM, Reynold Xin
wrote:
> Couple different alternatives:
>
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/2077#issuecomment-53846528
@rxin what characteristic of each stage allows you to group relevant stages
together into "jobs"? For example the job server itself can run multiple
"jo
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/800#issuecomment-43302348
@aarondav @pwendell I agree we don't want to clean up currently running
apps, but also that this should default to on once it's fixed. Maybe it's as
simple as che
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/215#issuecomment-40295287
Ok.
Github user velvia closed the pull request at:
https://github.com/apache/spark/pull/215
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-39800233
@pwendell New PR has been opened:
https://github.com/apache/spark/pull/351
GitHub user velvia opened a pull request:
https://github.com/apache/spark/pull/351
Spark-1230: Enable SparkContext.addJars() to load jars absent from CLASSPATH
This is an up-merge of @pwendell's PR,
https://github.com/apache/spark/pull/119, which was itself an up-merge
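The class-loading idea behind loading jars absent from the CLASSPATH can be sketched with a plain `URLClassLoader`. This is only an illustration of the JVM mechanism, not Spark's actual implementation, and the file names are made up for the demo.

```java
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustration only: resources (or classes) outside the JVM's original
// classpath become loadable once their location is added as a URL.
public class AddJarSketch {
    public static String readViaExtraLoader() throws Exception {
        Path dir = Files.createTempDirectory("extra-cp");
        Files.write(dir.resolve("hello.txt"),
                "from outside the classpath".getBytes(StandardCharsets.UTF_8));

        // (A lookup via AddJarSketch.class.getResource("/hello.txt")
        //  would return null here: the file is not on the classpath.)

        // Visible through a loader that includes the extra location,
        // which is roughly what addJar-style APIs arrange on executors.
        try (URLClassLoader extra =
                 new URLClassLoader(new URL[]{dir.toUri().toURL()})) {
            try (InputStream in = extra.getResourceAsStream("hello.txt")) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readViaExtraLoader());
    }
}
```

A directory URL is used here for simplicity; pointing the loader at a jar file works the same way.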
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/288#issuecomment-39691647
Thanks Patrick!
-Evan
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/288#issuecomment-39623424
Hey guys, I think everything is addressed. Jenkins claims the build passes,
but for some reason there is a "Failed" message at the bottom of the PR?
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r11307479
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/288#issuecomment-39505488
@markhamstra Thanks for the tips. The `git push -f` worked.
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/288#issuecomment-39489767
Mark, thanks. Yeah, I realize instead of doing a merge I should just do a
pull now, but it's probably too late for this PR. Instead I'll close it
and
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/288#issuecomment-39421885
Sorry, but how do I do a rebase safely, now that I've pushed? I've never
had good luck with rebases (other than `rebase -i` before I push, sorry
s
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/288#issuecomment-39399074
@pwendell I did a `git merge master` and for some reason it didn't display
as a merge. :( Must have gotten confused about multiple remotes.
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/288#issuecomment-39398967
Most comments addressed.
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/288#discussion_r11233834
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -331,6 +350,7 @@ private[spark] class Worker(
}
private[spark
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/288#discussion_r11231197
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -154,5 +154,18 @@ class UtilsSuite extends FunSuite {
val iterator
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/288#discussion_r11231217
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -179,12 +184,26 @@ private[spark] class Worker(
registered = true
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/288#discussion_r11231160
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -537,6 +537,21 @@ private[spark] object Utils extends Logging
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/288#discussion_r11230586
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -64,6 +64,11 @@ private[spark] class Worker(
val REGISTRATION_TIMEOUT
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r11217320
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,18 @@ class SparkContext(
val isLocal = (master == "
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r11216243
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user velvia commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r11216177
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -767,6 +781,20 @@ class SparkContext(
case _ =>
p
GitHub user velvia opened a pull request:
https://github.com/apache/spark/pull/288
SPARK-1154: Clean up app folders in worker nodes
This is a fix for
[SPARK-1154](https://issues.apache.org/jira/browse/SPARK-1154). The issue is
that worker nodes fill up with a huge number of app
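A hedged sketch of the general shape such a fix takes: periodically delete per-application work directories whose last-modified time exceeds a TTL. This is an illustration of the idea only, not the actual Worker code; the names and the recursive-delete helper are invented for the demo.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Illustration of TTL-based cleanup of per-app work directories, in the
// spirit of SPARK-1154. Not the actual Worker implementation.
public class AppDirCleaner {
    // Delete direct children of workDir untouched for longer than ttlMillis;
    // returns how many were removed.
    public static int cleanOldAppDirs(Path workDir, long ttlMillis) throws IOException {
        long cutoff = System.currentTimeMillis() - ttlMillis;
        int removed = 0;
        try (Stream<Path> children = Files.list(workDir)) {
            for (Path dir : (Iterable<Path>) children::iterator) {
                if (Files.getLastModifiedTime(dir).toMillis() < cutoff) {
                    deleteRecursively(dir);
                    removed++;
                }
            }
        }
        return removed;
    }

    private static void deleteRecursively(Path p) throws IOException {
        if (Files.isDirectory(p)) {
            try (Stream<Path> s = Files.list(p)) {
                for (Path c : (Iterable<Path>) s::iterator) deleteRecursively(c);
            }
        }
        Files.delete(p);
    }
}
```

As the comments in this thread note, a real version also has to avoid deleting directories belonging to currently running applications, which a pure mtime check does not guarantee.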
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/215#issuecomment-38775813
So I was just looking at this. fastutil is used in about 15 different
files, and mostly what is used is:
- A few of the hashmaps
- import
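For context on why fastutil's hash maps show up in those files at all: primitive-specialized maps like `Int2IntOpenHashMap` avoid the boxing that `java.util.HashMap<Integer, Integer>` forces on every key and value. A stdlib-only toy analogy of that design (this is not fastutil's API; the real library also handles resizing, removal, iteration, and more):

```java
// Toy analogy for a primitive-specialized int -> int map: flat arrays
// with open addressing, no boxed Integers anywhere. Illustration only;
// it never resizes, so the table must not fill completely.
public class ToyIntIntMap {
    private final int[] keys;
    private final int[] values;
    private final boolean[] used;

    public ToyIntIntMap(int capacity) {
        keys = new int[capacity];
        values = new int[capacity];
        used = new boolean[capacity];
    }

    // Linear probing from the key's home slot.
    private int slot(int key) {
        int i = Math.floorMod(key, keys.length);
        while (used[i] && keys[i] != key) i = (i + 1) % keys.length;
        return i;
    }

    public void put(int key, int value) {
        int i = slot(key);
        keys[i] = key;
        values[i] = value;
        used[i] = true;
    }

    public int getOrDefault(int key, int dflt) {
        int i = slot(key);
        return used[i] ? values[i] : dflt;
    }

    public static void main(String[] args) {
        ToyIntIntMap m = new ToyIntIntMap(8);
        m.put(1, 100);
        System.out.println(m.getOrDefault(1, -1)); // prints 100
    }
}
```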
Github user velvia commented on the pull request:
https://github.com/apache/spark/pull/215#issuecomment-38514709
This is the changelog. One thing not noted here is that one of the APIs in
the hash maps, `add()`, has also been deprecated and is not available in
newer versions
GitHub user velvia opened a pull request:
https://github.com/apache/spark/pull/215
SPARK-1057: Upgrade fastutil to 6.5.11
A really simple PR to upgrade the fastutil library to one of the latest
versions, 6.5.11; a couple deprecated APIs are removed and a couple nice fixes
to