Ah - we should update it to suggest mailing the dev@ list (and if
there is enough traffic maybe do something else).
I'm happy to add you if you can give an organization name, URL, a list
of which Spark components you are using, and a short description of
your use case.
On Mon, Feb 9, 2015 at
Hi Judy,
If you have added source files in the sink/ source folder, they should
appear in the assembly jar when you build. One thing I noticed is that you
are looking inside the /dist folder. That only gets populated if you run
make-distribution. The normal development process is just to do mvn
Hello,
Working on SPARK-5708 (https://issues.apache.org/jira/browse/SPARK-5708) - Add
Slf4jSink to Spark Metrics Sink.
Wrote a new Slf4jSink class (see patch attached), but the new class is not
packaged as part of spark-assembly jar.
Do I need to update build config somewhere to have this
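For reference, a rough sketch of what such a sink might look like (this is not the attached patch; the constructor arguments and the start/stop/report methods are assumed to mirror the existing sinks such as ConsoleSink):

```scala
package org.apache.spark.metrics.sink

import java.util.Properties
import java.util.concurrent.TimeUnit

import com.codahale.metrics.{MetricRegistry, Slf4jReporter}

// Sketch only: the Sink trait shape and the constructor signature are
// assumed to match the other sinks in this package.
private[spark] class Slf4jSink(
    val property: Properties,
    val registry: MetricRegistry) extends Sink {

  // Poll period configured the same way as ConsoleSink (assumption).
  val pollPeriod: Int =
    Option(property.getProperty("period")).map(_.toInt).getOrElse(10)

  val reporter: Slf4jReporter = Slf4jReporter.forRegistry(registry)
    .convertDurationsTo(TimeUnit.MILLISECONDS)
    .convertRatesTo(TimeUnit.SECONDS)
    .build()

  override def start(): Unit = reporter.start(pollPeriod, TimeUnit.SECONDS)
  override def stop(): Unit = reporter.stop()
  override def report(): Unit = reporter.report()
}
```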
Thanks Patrick! That was the issue.
Built the jars on a Windows env with mvn and forgot to run make-distributions.ps1
afterward, so was looking at old jars.
From: Patrick Wendell [mailto:pwend...@gmail.com]
Sent: Monday, February 9, 2015 10:43 PM
To: Judy Nash
Cc: dev@spark.apache.org
Subject:
Great - perhaps we can move this discussion off-list and onto a JIRA
ticket? (Here's one: https://issues.apache.org/jira/browse/SPARK-5705)
It seems like this is going to be somewhat exploratory for a while (and
there's probably only a handful of us who really care about fast linear
algebra!)
-
Thanks Denny; added you.
Matei
On Feb 9, 2015, at 10:11 PM, Denny Lee denny.g@gmail.com wrote:
Forgot to add Concur to the Powered by Spark wiki:
Concur
https://www.concur.com
Spark SQL, MLlib
Using Spark for travel and expenses analytics and personalization
Thanks!
Denny
Thanks Matei - much appreciated!
On Mon Feb 09 2015 at 10:23:57 PM Matei Zaharia matei.zaha...@gmail.com
wrote:
Thanks Denny; added you.
Matei
On Feb 9, 2015, at 10:11 PM, Denny Lee denny.g@gmail.com wrote:
Forgot to add Concur to the Powered by Spark wiki:
Concur
Maybe you can ask Prof. John Canny himself :-) as I invited him to give a talk
at Alpine Data Labs at the March meetup (SF Big Analytics & SF Machine Learning
joint meetup), 3/11. To be announced in the next day or so.
Chester
Sent from my iPhone
On Feb 9, 2015, at 4:48 PM, Ulanov, Alexander
it sounds like nobody intends these to be used to actually deploy Spark
I wouldn't go quite that far. What we have now can serve as useful input
to a deployment tool like Chef, but the user is then going to need to add
some customization or configuration within the context of that tooling to
I like the `/* .. */` style more, because it is easier for IDEs to
recognize it as a block comment. If you press enter in the comment
block with the `//` style, IDEs won't add `//` for you. -Xiangrui
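For illustration (my example, not from the thread), the two styles side by side:

```scala
// The `//` style: each line carries its own marker, so pressing enter
// mid-comment leaves you typing the `//` yourself in some IDEs.

/*
 * The block style: the IDE sees a single comment and continues the
 * leading `*` automatically when you press enter.
 */
object CommentStyleExample
```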
On Wed, Feb 4, 2015 at 2:15 PM, Reynold Xin r...@databricks.com wrote:
We should update the
Hi,
I checked the Powered by wiki too, and Agile Labs should be Agile Lab. The link
is wrong too; it should be www.agilelab.it.
The description is correct.
Thanks a lot
Paolo
Sent from my Windows Phone
From: Denny Lee (mailto:denny.g@gmail.com)
Sent:
Cool, thanks! Let me know if there are any more core numerical libraries
that you'd like to see supporting Spark with optimised natives, using a
packaging model similar to netlib-java.
I'm interested in fast random number generation next, and I keep wondering
if anybody would be interested in
Hi Iulian,
I think the AkkaUtilsSuite failure that you observed has been fixed in
https://issues.apache.org/jira/browse/SPARK-5548 /
https://github.com/apache/spark/pull/4343
On February 9, 2015 at 5:47:59 AM, Iulian Dragoș (iulian.dra...@typesafe.com)
wrote:
Hi Patrick,
Thanks for the
I have wondered whether we should sort of deprecate it more
officially, since otherwise I think people have the reasonable
expectation based on the current code that Spark intends to support
complete Debian packaging as part of the upstream build. Having
something that's sort-of maintained but no
+1 to an official deprecation + redirecting users to some other project
that will or already is taking this on.
Nate?
On Mon Feb 09 2015 at 10:08:27 AM Patrick Wendell pwend...@gmail.com
wrote:
I have wondered whether we should sort of deprecate it more
officially, since otherwise I think
Hi Evan,
Thank you for explanation and useful link. I am going to build OpenBLAS, link
it with Netlib-java and perform benchmark again.
Do I understand correctly that the BIDMat binaries contain statically linked Intel
MKL BLAS? That might be why I am able to run BIDMat without having MKL
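As a side note, one quick way to confirm which BLAS implementation netlib-java actually loaded at runtime (a sketch using the `com.github.fommil.netlib` API):

```scala
import com.github.fommil.netlib.BLAS

// Prints e.g. NativeSystemBLAS when an optimised native library
// (OpenBLAS, MKL, ...) was picked up, or F2jBLAS when netlib-java
// fell back to its pure-Java implementation.
object BlasCheck extends App {
  println(BLAS.getInstance().getClass.getName)
}
```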
...to help w/ the build backlog. Let's all welcome
amp-jenkins-slave-{01..03} back to the fray!
I've noticed a couple of oddities with the pyspark.daemons which are causing
us some memory problems within some of our heavy Spark jobs, especially
when they run at the same time...
It seems that there is typically a 1-to-1 ratio of pyspark.daemons to cores
per executor during aggregations. By
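One knob that may help bound this (a sketch; the property names are taken from the Spark configuration docs, and the values below are only examples, not recommendations):

```scala
import org.apache.spark.SparkConf

// Cap the memory each Python worker uses during aggregation before it
// spills to disk, and limit cores (and hence workers) per executor.
val conf = new SparkConf()
  .set("spark.python.worker.memory", "512m") // spill threshold per worker
  .set("spark.executor.cores", "2")          // fewer daemons per executor
```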
Btw, I think allowing `/* ... */` without the leading `*` in lines is
also useful. Check this line:
https://github.com/apache/spark/pull/4259/files#diff-e9dcb3b5f3de77fc31b3aff7831110eaR55,
where we put the R commands that can reproduce the test result. It is
easier if we write in the following
Old releases can't be changed, but new ones can. This was merged into
the 1.3 branch for the upcoming 1.3.0 release.
If you really had to, you could do some surgery on existing
distributions to swap in/out Jackson.
On Mon, Feb 9, 2015 at 11:22 AM, Gil Vernik g...@il.ibm.com wrote:
Hi All,
I
Hi All,
I understand that https://github.com/apache/spark/pull/3938 was closed and
merged into Spark? And this is supposed to fix the Jackson issue.
If so, is there any way to update binary distributions of Spark so that it
will contain this fix? Current binary versions of Spark available for
This is a straw poll to assess whether there is support to keep and
fix, or remove, the Debian packaging-related config in Spark.
I see several oldish outstanding JIRAs relating to problems in the packaging:
https://issues.apache.org/jira/browse/SPARK-1799
Clearly there isn't a strictly optimal commenting format (there are pros and
cons for both '//' and '/*'). My thought is that, for consistency, we should
just choose one and put it in the style guide.
On Mon, Feb 9, 2015 at 12:25 PM, Xiangrui Meng men...@gmail.com wrote:
Btw, I think allowing `/* ... */` without
Hi All,
I've just posted the 1.2.1 maintenance release of Apache Spark. We
recommend all 1.2.0 users upgrade to this release, as this release
includes stability fixes across all components of Spark.
- Download this release: http://spark.apache.org/downloads.html
- View the release notes:
Why don't we just pick // as the default (by encouraging it in the style
guide), since it is the most widely used, and then not disallow /* */? I don't
think it is that big of a deal to have slight deviations here, since it is
dead simple to understand what's going on.
On Mon, Feb 9, 2015 at 1:33 PM,
+1 to what Andrew said, I think both make sense in different situations and
trusting developer discretion here is reasonable.
On Mon, Feb 9, 2015 at 1:48 PM, Andrew Or and...@databricks.com wrote:
In my experience I find it much more natural to use // for short multi-line
comments (2 or 3
In my experience I find it much more natural to use // for short multi-line
comments (2 or 3 lines), and /* */ for long multi-line comments involving
one or more paragraphs. For short multi-line comments, there is no reason
not to use // if it just so happens that your first line exceeded 100
Hi Patrick,
Thanks for the heads up. I was trying to set up our own infrastructure for
testing Spark (essentially, running `run-tests` every night) on EC2. I
stumbled upon a number of flaky tests, but none of them look similar to
anything in Jira with the flaky-test tag. I wonder if there's
What about this straw man proposal: deprecate in 1.3 with some kind of
message in the build, and remove for 1.4? And add a pointer to any
third-party packaging that might provide similar functionality?
On Mon, Feb 9, 2015 at 6:47 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
+1 to an
+spark dev list
Yes, we should add an Apache license to it -- Feel free to open a PR for
it. BTW though it is a part of the mesos github account, it is almost
exclusively used by the Spark Project AFAIK.
Longer term it may make sense to move it to a more appropriate github
account (we could move
If the Spark community wanted to not maintain debs/rpms directly via the
project, it could direct interested efforts towards Apache Bigtop. Right now,
debs/rpms of Bigtop components, as well as related tests, are a focus.
Something that would be great is if at least one Spark
Hi,
The email address given in
https://cwiki.apache.org/confluence/display/SPARK/Powered+By+Spark seems
to be failing. Can anyone tell me how to get added to the Powered By Spark list?
--
Regards,
*Meethu*