Re: What does Blockchain technology mean for Big Data? And how Hadoop/Spark will play role with it?

2017-12-19 Thread Ryan C. Kleck
IMO blockchain won’t be doing anything for big data anytime soon. It is not distributed (it’s “decentralized”): all blockchain data is replicated to ALL nodes in the network. At the moment, a game involving breeding cats is already clogging up the Ethereum blockchain. Blockchain might

Re: What does Blockchain technology mean for Big Data? And how Hadoop/Spark will play role with it?

2017-12-19 Thread Vadim Semenov
I think it means that we can replace HDFS with a blockchain-based FS, and then offload some processing to smart contracts. On Mon, Dec 18, 2017 at 11:59 PM, KhajaAsmath Mohammed < mdkhajaasm...@gmail.com> wrote: > I am looking for same answer too .. will wait for response from other > people > >

Re: What does Blockchain technology mean for Big Data? And how Hadoop/Spark will play role with it?

2017-12-18 Thread KhajaAsmath Mohammed
I am looking for the same answer too .. will wait for responses from other people Sent from my iPhone > On Dec 18, 2017, at 10:56 PM, Gaurav1809 wrote: > > Hi All, > > Will Bigdata tools & technology work with Blockchain in future? Any possible > use cases that anyone is

What does Blockchain technology mean for Big Data? And how Hadoop/Spark will play role with it?

2017-12-18 Thread Gaurav1809
Hi All, Will Big Data tools & technologies work with Blockchain in the future? If anyone foresees possible use cases, please share. Thanks Gaurav -- Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/ -

RE: Spark and Play

2014-11-13 Thread Mohammed Guller
Hi Patrick, Although we are able to use Spark 1.1.0 with Play 2.2.x, as you mentioned, Akka incompatibility prevents us from using Spark with the current stable releases of Play (2.3.6) and Akka (2.3.7). Are there any plans to address this issue in Spark 1.2? Thanks, Mohammed From: John

Re: Spark and Play

2014-11-12 Thread Donald Szeto
/develop/core/src/main/scala/workflow/CreateServer.scala Regards, Donald On Tue, Nov 11, 2014 at 11:35 PM, John Meehan jnmee...@gmail.com wrote: You can also build a Play 2.2.x + Spark 1.1.0 fat jar with sbt-assembly for, e.g. yarn-client support or using with spark-shell for debugging

Spark and Play

2014-11-11 Thread Akshat Aranya
Hi, Sorry if this has been asked before; I didn't find a satisfactory answer when searching. How can I integrate a Play application with Spark? I'm running into issues with akka-actor versions: Play 2.2.x uses akka-actor 2.0, whereas Play 2.3.x uses akka-actor 2.3.4, neither of which works fine

Re: Spark and Play

2014-11-11 Thread Patrick Wendell
Hi There, Because Akka versions are not binary compatible with one another, it might not be possible to integrate Play with Spark 1.1.0. - Patrick On Tue, Nov 11, 2014 at 8:21 AM, Akshat Aranya aara...@gmail.com wrote: Hi, Sorry if this has been asked before; I didn't find a satisfactory
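Patrick's point about binary incompatibility is the crux: sbt can be made to resolve a single Akka version, but the classes may still not line up at runtime. As a sketch of the workaround the later replies converge on, one can pin the resolved version with sbt's `dependencyOverrides` (sbt 0.13 syntax; pinning to 2.2.3 is an assumption drawn from Mohammed's build file below, and it only helps when both libraries actually tolerate that version):

```scala
// build.sbt fragment -- a sketch, not a guaranteed fix.
// Forces every transitive request for akka-actor to resolve to 2.2.3,
// the version family that Play 2.2.x uses and that Spark 1.1.0 can
// reportedly coexist with. If either library calls an API that changed
// between Akka releases, this still fails at runtime (NoSuchMethodError).
dependencyOverrides ++= Set(
  "com.typesafe.akka" %% "akka-actor" % "2.2.3"
)
```

This only changes which jar ends up on the classpath; it does not make incompatible bytecode compatible, which is why Patrick hedges on whether integration is possible at all.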

RE: Spark and Play

2014-11-11 Thread Mohammed Guller
Actually, it is possible to integrate Spark 1.1.0 with Play 2.2.x. Here is a sample build.sbt file:

name := "xyz"

version := "0.1"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache,
  "org.apache.spark" %% "spark-core" % "1.1.0",
  "com.typesafe.akka" %% "akka-actor" % "2.2.3"

Re: Spark and Play

2014-11-11 Thread John Meehan
You can also build a Play 2.2.x + Spark 1.1.0 fat jar with sbt-assembly for, e.g., yarn-client support or use with spark-shell for debugging:

play.Project.playScalaSettings

libraryDependencies ~= { _ map {
  case m if m.organization == "com.typesafe.play" =>
    m.exclude("commons-logging", "commons
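John's build fragment is cut off in the archive. A self-contained reconstruction of the pattern he describes might look like the following; the second argument to exclude and the trailing case are my assumptions, since the usual reason for this exclusion is to keep duplicate commons-logging classes out of the sbt-assembly fat jar (Spark pulls in jcl-over-slf4j, which ships the same class names):

```scala
// Reconstructed sketch of the fragment from John's message (sbt 0.13,
// Play 2.2.x era). Applies Play's standard project settings, then rewrites
// every Play dependency to exclude commons-logging so the fat jar does not
// hit duplicate-class merge conflicts during sbt-assembly.
play.Project.playScalaSettings

libraryDependencies ~= { _ map {
  case m if m.organization == "com.typesafe.play" =>
    m.exclude("commons-logging", "commons-logging")
  case m => m
}}
```

The `~=` operator transforms the existing `libraryDependencies` sequence in place, which is why the second case must pass non-Play modules through unchanged.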