Gino,
I can confirm that your solution of assembling with spark-streaming-kafka
but excluding spark-core and spark-streaming has me working with basic
spark-submit. As mentioned, you must specify the assembly jar in the
SparkConf as well as passing it to spark-submit.
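For reference, the build and driver settings being described can be sketched like this. This is only a sketch of the approach in the thread; the version numbers, app name, and assembly jar path are assumptions, not details taken from the messages above.

```scala
// build.sbt sketch: keep spark-core and spark-streaming out of the assembly
// by marking them "provided", while bundling spark-streaming-kafka.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "1.0.0" % "provided",
  "org.apache.spark" %% "spark-streaming"       % "1.0.0" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.0.0"
)
```

and on the driver side, pointing SparkConf at the same jar that goes to spark-submit:

```scala
// Driver-side sketch: register the assembly jar on the SparkConf so the
// executors can fetch it (the jar path here is hypothetical).
val conf = new org.apache.spark.SparkConf()
  .setAppName("KafkaWordCount")
  .setJars(Seq("target/scala-2.10/myapp-assembly-0.1.0.jar"))
```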
I am using Spark 1.0.0 compiled with Hadoop 1.2.1.
I have a toy spark-streaming-kafka program. It reads from a kafka queue and
does
stream
  .map { case (k, v) => (v, 1) }
  .reduceByKey(_ + _)
  .print()
using a 1 second interval on the stream.
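For anyone following along, the per-batch effect of that pipeline can be sketched in plain Scala without a Spark cluster. The sample batch below is made up, and groupBy-plus-sum stands in for what reduceByKey does within one interval:

```scala
// A made-up micro-batch of (kafkaKey, message) pairs, as the Kafka
// receiver would hand them to the DStream in one 1-second interval.
val batch = Seq(("k1", "apple"), ("k2", "banana"), ("k3", "apple"))

// .map { case (k, v) => (v, 1) }: drop the Kafka key, pair each message with 1.
val mapped = batch.map { case (_, v) => (v, 1) }

// .reduceByKey(_ + _): sum the 1s per distinct message, here via groupBy.
val counts = mapped.groupBy(_._1).map { case (v, ones) => (v, ones.map(_._2).sum) }

// counts("apple") == 2, counts("banana") == 1
```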
The docs say to mark Spark and Hadoop as provided dependencies when
building the assembly.
On Thu, Mar 27, 2014 at 10:04 AM, lannyripple [hidden email] wrote:
Hi all,
I've got something which I think should be straightforward, but it's not,
so I'm not getting it.
I have an 8 node spark 0.9.0 cluster also running HDFS. Workers have 16g of
memory using 8 cores.
In HDFS I have a CSV file of 110M lines of 9 columns (e.g., [key,a,b,c...]).
I have another