I think we are near the end of Scala 2.9.3 development, and will merge the
Scala 2.10 branch into master and make it the future very soon (maybe next
week). This problem will go away.
In the meantime, we are relying on periodically merging master into the
Scala 2.10 branch.
On Mon, Nov 4, 2013 at
I plan to do the work on the scala-2.10 branch, which has already moved to
Akka 2.2.3. I hope that moving to Akka 2.3-M1 (which supports protobuf 2.5.x)
will not cause many problems; it can serve as a test for further issues while
we wait for the formal release of Akka 2.3.x.
While the issue is that I c
Adding in a few guys so they can chime in.
On Mon, Nov 4, 2013 at 4:33 PM, Reynold Xin wrote:
> I chatted with Matt Massie about this, and here are some options:
>
> 1. Use dependency injection with Google Guice to make Akka use one version
> of protobuf, and YARN use the other version.
>
> 2. Look into OSGi to accomplish the same goal.
Hi Community,
I checked out the Spark v0.7.3 tag from GitHub and I am using it in
standalone mode on a small cluster with 2 nodes. My Spark application
reads files from HDFS and writes the results back to HDFS.
Everything seems to be working fine except that at the very end of
execution I get th
I chatted with Matt Massie about this, and here are some options:
1. Use dependency injection with Google Guice to make Akka use one version of
protobuf, and YARN use the other version.
2. Look into OSGi to accomplish the same goal.
3. Rewrite the messaging part of Spark to use a simple, custom RPC
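For a sense of what option 3 could look like, here is a minimal, hypothetical sketch of hand-rolled, length-prefixed message framing in Scala — not Spark's actual messaging code, just an illustration of an RPC wire format that avoids protobuf entirely (all names are made up for this example):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, DataInputStream, DataOutputStream}

// Frame one message as a 4-byte big-endian length followed by the payload.
def writeMessage(out: DataOutputStream, payload: Array[Byte]): Unit = {
  out.writeInt(payload.length)
  out.write(payload)
}

// Read back one message framed by writeMessage.
def readMessage(in: DataInputStream): Array[Byte] = {
  val len = in.readInt()
  val buf = new Array[Byte](len)
  in.readFully(buf)
  buf
}

// Round-trip a payload through an in-memory stream.
val sink = new ByteArrayOutputStream()
writeMessage(new DataOutputStream(sink), "hello".getBytes("UTF-8"))
val echoed = readMessage(new DataInputStream(new ByteArrayInputStream(sink.toByteArray)))
println(new String(echoed, "UTF-8")) // prints hello
```

A real replacement would of course need request/response matching and a serialization scheme for each message type, but the framing layer itself stays this small.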
I guess it's defined in the Hadoop library. You can try downloading the Hadoop
source code, or use an IDE to resolve the dependency automatically; I am
using the IntelliJ IDEA Community edition.
On Mon, Nov 4, 2013 at 12:09 PM, Umar Javed wrote:
> In 'SparkHadoopUtil.scala'
> in /core/src/main/sc
Rebasing changes the SHAs, which isn't a good idea in a public and
heavily-used repository.
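The SHA rewriting is easy to demonstrate; a self-contained shell sketch (builds a throwaway repo under mktemp, assumes git >= 2.28 for `init -b`):

```shell
# Show that rebasing recreates commits with new SHAs.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master .
git config user.email demo@example.com
git config user.name demo
echo base > file; git add file; git commit -qm base
git checkout -qb topic
echo work > work.txt; git add work.txt; git commit -qm work
before=$(git rev-parse topic)          # SHA of the topic commit before rebase
git checkout -q master
echo more > more.txt; git add more.txt; git commit -qm more
git checkout -q topic
git rebase -q master                   # recreates the topic commit on the new base
after=$(git rev-parse topic)           # same change, different SHA
echo "before=$before"
echo "after=$after"
if [ "$before" != "$after" ]; then echo "rebase rewrote the commit SHA"; fi
```

Anyone who had already fetched the old SHA now has history that no longer exists upstream, which is why merging is preferred for a public branch.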
On Mon, Nov 4, 2013 at 1:04 AM, Liu, Raymond wrote:
> Hi
> It seems to me that dev branches are kept in sync with master by
> merging trunk code, e.g. the scala-2.10 branch continuously merges the latest
In 'SparkHadoopUtil.scala'
in /core/src/main/scala/org/apache/spark/deploy/, there is a method:
def newConfiguration(): Configuration = new Configuration()
There is an import at the top of the file for Configuration:
import org.apache.hadoop.conf.Configuration
But I'm unable to find the definition of Configuration
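For the IDE to resolve org.apache.hadoop.conf.Configuration, the Hadoop artifact has to be on the project classpath; a hedged sbt fragment (artifact and version are illustrative — match your cluster's Hadoop, which in the Spark 0.7.x era was typically Hadoop 1.x):

```scala
// build.sbt fragment (hypothetical version)
libraryDependencies += "org.apache.hadoop" % "hadoop-core" % "1.0.4"
```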
Hi
It seems to me that dev branches are kept in sync with master by
merging trunk code, e.g. the scala-2.10 branch continuously merges the latest
master code into itself to stay up to date.
While I am wondering: what's the general guideline on doing this? It
seems to me that not every code in m