case?
Have you thought of doing batching in the workers?
Cheers
On Sat, Mar 7, 2015 at 10:54 PM, A.K.M. Ashrafuzzaman
ashrafuzzaman...@gmail.com wrote:
While processing DStream in the Spark Programming Guide, the suggested
usage of connection is the following,
dstream.foreachRDD(rdd
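The snippet above is cut off; for reference, the connection pattern the Spark Programming Guide recommends (create the connection once per partition rather than once per record) looks roughly like this — `createNewConnection` is the guide's placeholder, not a real API:

```scala
dstream.foreachRDD { rdd =>
  rdd.foreachPartition { partitionOfRecords =>
    // One connection per partition, not per record, so the
    // connection setup cost is amortized across the whole partition.
    val connection = createNewConnection() // placeholder from the guide
    partitionOfRecords.foreach(record => connection.send(record))
    connection.close()
  }
}
```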
A.K.M. Ashrafuzzaman
Lead Software Engineer
NewsCred
(M) 880-175-5592433
Twitter | Blog | Facebook
Check out The Academy, your #1 source
for free content marketing resources
Thanks Chris,
That is what I wanted to know :)
On Mar 2, 2015, at 2:04 AM, Chris Fregly ch...@fregly.com wrote:
hey
Sorry guys, my bad.
Here is a high-level code sample:
val unionStreams = ssc.union(kinesisStreams)
unionStreams.foreachRDD(rdd => {
  rdd.foreach(tweet => {
    val strTweet = new String(tweet, "UTF-8")
    val interaction = InteractionParser.parser(strTweet)
    interactionDAL.insert(interaction)
  })
})
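On the batching question raised earlier in the thread: one possible way to batch inserts in the workers is to switch to `foreachPartition` and group records before writing, so each database round trip carries many rows instead of one. This is a sketch, not the author's code — `interactionDAL.insertBatch` is a hypothetical bulk-write method; substitute whatever your DAL actually supports:

```scala
unionStreams.foreachRDD(rdd => {
  rdd.foreachPartition { partition =>
    // Parse each record, then group the partition's records into
    // fixed-size batches before hitting the database.
    partition.map { tweet =>
      InteractionParser.parser(new String(tweet, "UTF-8"))
    }.grouped(100).foreach { batch =>
      interactionDAL.insertBatch(batch) // hypothetical bulk-insert method
    }
  }
})
```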
and Spark Streaming.
have fewer workers than the number
of shards. Makes sense?
On Sun Dec 14 2014 at 10:06:36 A.K.M. Ashrafuzzaman
ashrafuzzaman...@gmail.com wrote:
Thanks Aniket,
The trick is to have #workers = #shards + 1, but I don’t know why that is.
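A likely explanation, going by the Spark Streaming guide: each receiver (one per Kinesis shard in this setup) permanently occupies one core, so the application needs more cores than receivers, otherwise no cores are left to process the received data. A minimal sketch of sizing the master accordingly — `numShards` is assumed known here:

```scala
import org.apache.spark.SparkConf

// Assumption: one receiver per Kinesis shard. Receivers each pin a
// core, so allocate at least numShards + 1 cores for the application.
val numShards = 2
val conf = new SparkConf()
  .setAppName("kinesis-consumer")
  .setMaster(s"local[${numShards + 1}]")
```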
http://spark.apache.org/docs/latest/streaming
the issue. I will do a memory leak test. But this
is a simple and small application; I don’t see a leak there with the naked eye.
Can anyone help me with how I should investigate?
On Nov 26, 2014, at 6:23 PM, A.K.M. Ashrafuzzaman ashrafuzzaman...@gmail.com
wrote:
Hi guys,
When we are using Kinesis
from EC2 and now the kinesis
is getting consumed.
4 cores, single machine - works
2 cores, single machine - does not work
2 cores, 2 workers - does not work
So my question is: do we need a cluster of (#KinesisShards + 1) workers to
be able to consume from Kinesis?
Using:
scala: 2.10.4
java version: 1.8.0_25
Spark: 1.1.0
spark-streaming-kinesis-asl: 1.1.0