I don't think it's useful to combine them, since they are different projects. But a lot of work went into Flink's paged memory system built on byte buffers, and if collaboration can take place to extract that into a standalone memory-subsystem library that both Spark and Flink can use
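To illustrate the concept being discussed, here is a minimal sketch of what a paged memory pool over byte buffers looks like: a fixed set of equal-size pages handed out and returned by a manager. The class and method names (`PagePool`, `acquire`, `release`) are hypothetical, not Flink's actual API.

```scala
import java.nio.ByteBuffer
import scala.collection.mutable

// Hypothetical sketch of a paged memory pool: a fixed number of
// equal-size ByteBuffer pages, acquired and released explicitly so
// memory use stays bounded and predictable.
class PagePool(numPages: Int, pageSize: Int) {
  private val free = mutable.Queue.empty[ByteBuffer]
  (1 to numPages).foreach(_ => free.enqueue(ByteBuffer.allocate(pageSize)))

  // Hand out a page, or None if the pool is exhausted.
  def acquire(): Option[ByteBuffer] =
    if (free.nonEmpty) Some(free.dequeue()) else None

  // Return a page to the pool, resetting position/limit for reuse.
  def release(page: ByteBuffer): Unit = {
    page.clear()
    free.enqueue(page)
  }

  def freePages: Int = free.size
}
```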
I think as long as the two frameworks follow the same paradigm for how their
interfaces work, it's fine to have two competing frameworks. This way the
frameworks have some motivation to be the best at what they do, rather than
being the only choice whether you like it or not. They also seem to have
+1 for this, I think it's high time.
We should of course do it with enough warning for users. 1.4 may be too early
(not for me though!). Perhaps we specify that 1.5 will officially move to JDK7?
—
Sent from Mailbox
On Fri, May 1, 2015 at 12:16 AM, Ram Sriharsha
Following what Ted said, if you leverage the `mvn` from within the
`build/` directory of Spark you'll get zinc for free, which should help
speed up build times.
On 5/1/15, 9:45 AM, Ted Yu yuzhih...@gmail.com wrote:
Pramod:
Please remember to run Zinc so that the build is faster.
Cheers
On Fri,
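The advice above, sketched as a shell snippet. The invocation is guarded so the snippet is safe to run outside a Spark checkout; run it from the Spark source root.

```shell
# build/mvn downloads and launches a zinc incremental-compile server
# automatically, so repeated builds compile much faster than plain mvn.
if [ -x build/mvn ]; then
  ./build/mvn -DskipTests package
else
  echo "not in a Spark checkout"
fi
```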
Hi Pramod,
For cluster-like tests you might want to use the same code as in mllib's
LocalClusterSparkContext. You can rebuild only the package that you change and
then run this main class.
Best regards, Alexander
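A minimal sketch of such a main class. Only the `local-cluster[numWorkers,coresPerWorker,memoryPerWorkerMB]` master URL format comes from Spark; the helper name is illustrative, and the `SparkContext` lines are left as comments so the snippet stands alone without a Spark build on the classpath.

```scala
// Sketch of driving the same kind of "local-cluster" master URL that
// mllib's LocalClusterSparkContext uses in its tests.
object LocalClusterMain {
  // Build a local-cluster master URL: workers, cores per worker, MB per worker.
  def localClusterMaster(workers: Int, cores: Int, memMb: Int): String =
    s"local-cluster[$workers,$cores,$memMb]"

  def main(args: Array[String]): Unit = {
    val master = localClusterMaster(2, 1, 512)
    // With Spark on the classpath you would continue with, e.g.:
    // val conf = new SparkConf().setMaster(master).setAppName("local-cluster-test")
    // val sc = new SparkContext(conf)
    println(master)
  }
}
```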
Hi Reynold,
Please find the PR here [1]
[1] https://github.com/apache/spark/pull/5832
On Thu, Apr 30, 2015 at 11:34 AM, Reynold Xin r...@databricks.com wrote:
We should change the trait to an abstract class, and then your problem will
go away.
Do you want to submit a pull request?
On Wed, Apr
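The thread doesn't show which trait is involved, but the suggested refactor looks generically like this (`Transformer` is a hypothetical name): pre-Scala-2.12, a trait with concrete methods compiles to an interface plus a synthetic impl class, which complicates Java interop and binary compatibility, while an abstract class compiles to a single ordinary JVM class.

```scala
// Before (hypothetical) -- a trait mixing abstract and concrete methods:
// trait Transformer {
//   def transform(x: Int): Int
//   def twice(x: Int): Int = transform(transform(x))
// }

// After -- the same contract as an abstract class; the concrete method
// is plain bytecode in one class, so Java subclasses and binary
// compatibility are straightforward.
abstract class Transformer {
  def transform(x: Int): Int
  def twice(x: Int): Int = transform(transform(x))
}

class AddOne extends Transformer {
  def transform(x: Int): Int = x + 1
}
```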
On 30 Apr 2015, at 21:40, Marcelo Vanzin van...@cloudera.com wrote:
As for the idea, I'm +1. Spark is the only reason I still have jdk6
around - exactly because I don't want to cause the issue that started
this discussion (inadvertently using JDK7 APIs). And as has been
pointed out, even
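As an example of the kind of inadvertent JDK7 usage meant here: `java.nio.file.Files` and `StandardCharsets` only exist since JDK 7, so code like the sketch below compiles fine on JDK 7 but breaks a JDK 6 build, which is exactly what building against JDK 6 catches.

```scala
import java.nio.file.Files
import java.nio.charset.StandardCharsets // also JDK 7+

object Jdk7ApiDemo {
  // Write a string to a temp file and read it back using JDK 7-only APIs.
  def roundTrip(s: String): String = {
    val tmp = Files.createTempFile("jdk7-api", ".txt")
    try {
      Files.write(tmp, s.getBytes(StandardCharsets.UTF_8))
      new String(Files.readAllBytes(tmp), StandardCharsets.UTF_8)
    } finally Files.delete(tmp)
  }
}
```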
Hi,
I'm making some small changes to the Spark codebase and trying it out on a
cluster. I was wondering if there's a faster way to build than running the
package target each time.
Currently I'm using: mvn -DskipTests package
All the nodes have the same filesystem mounted at the same mount point.
I have the real DEBS taxi data in a CSV file. In order to operate over it,
how can I simulate a Spout-like event generator using the
timestamps in the CSV file?
--
Thanks & Regards,
Anshu Shukla
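One way to do this without Storm: replay the rows at their recorded pace by sleeping for the gap between consecutive timestamps, emitting each row to a callback. The timestamp column index and format below are assumptions; adjust both to the actual DEBS taxi schema.

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import java.time.temporal.ChronoUnit

// Sketch of a spout-like replayer for timestamped CSV rows.
// Assumptions: timestamp is the first column, formatted "yyyy-MM-dd HH:mm:ss".
object CsvReplay {
  private val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

  // Millisecond gaps between consecutive rows (0 for the first row;
  // out-of-order timestamps clamp to 0 rather than sleeping backwards).
  def gapsMillis(rows: Seq[String], tsCol: Int = 0): Seq[Long] = {
    val ts = rows.map(r => LocalDateTime.parse(r.split(",")(tsCol), fmt))
    0L +: ts.zip(ts.drop(1)).map { case (a, b) =>
      math.max(ChronoUnit.MILLIS.between(a, b), 0L)
    }
  }

  // Emit each row after sleeping for its inter-arrival gap.
  def replay(rows: Seq[String])(emit: String => Unit): Unit =
    rows.zip(gapsMillis(rows)).foreach { case (row, gap) =>
      if (gap > 0) Thread.sleep(gap)
      emit(row)
    }
}
```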
Hi Pramod,
If you are using sbt as your build, then you need to do sbt assembly once
and then use sbt ~compile. Also export SPARK_PREPEND_CLASSES=1 in your
shell on all nodes.
Maybe you can try this out?
Thanks,
Prashant Sharma
On Fri, May 1, 2015 at 2:16 PM, Pramod Biligiri
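Prashant's steps, sketched as shell. The `sbt` invocations are guarded so the snippet is harmless outside a Spark checkout; on Spark itself you would typically invoke the bundled `build/sbt`.

```shell
# Make launched Spark jobs pick up freshly compiled classes instead of
# the packaged assembly jar.
export SPARK_PREPEND_CLASSES=1

if command -v sbt >/dev/null 2>&1 && [ -d project ]; then
  sbt assembly        # one-time full assembly build
  sbt "~compile"      # then recompile incrementally on every file change
fi
```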
On 1 May 2015 at 21:26, Dean Wampler deanwamp...@gmail.com wrote:
FWIW, another reason to start planning for deprecation of Java 7, too, is
that Scala 2.12 will require Java 8. Scala 2.12 will be released early next
year.
Will 2.12 be the release that is based on Dotty?
FWIW, another reason to start planning for deprecation of Java 7, too, is
that Scala 2.12 will require Java 8. Scala 2.12 will be released early next
year.
Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
http://shop.oreilly.com/product/0636920033073.do (O'Reilly)
Typesafe
No. That will be 3.0 some day.
Sent from my rotary phone.
On May 1, 2015, at 9:04 AM, Steven Shaw ste...@steshaw.org wrote:
It seems Spark is happy to upgrade Scala, drop older Java versions, and upgrade
incompatible library versions (Akka), all of this within Spark 1.x.
Does the 1.x mean anything in terms of compatibility of dependencies? Or is
that limited to its own API? What are the rules?
On May 1, 2015 9:04 AM,