Hi All,
I am trying to run the Kafka Word Count program. Please find the link below:
https://github.com/apache/spark/blob/master/examples/scala-2.10/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java
I have set the Spark master to setMaster("local[*]")
and I have appended "spark" to my log.text file.
The Spark program gives the output
spark 1
but it should be spark 3.
How do I handle this in the Spark code?
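In Spark Streaming, each batch is counted independently unless the previous counts are carried forward as state (the usual approach is updateStateByKey). Below is a minimal plain-Java sketch of that update logic, with no Spark dependency; the class and method names here are illustrative, not Spark's API:

```java
import java.util.Arrays;
import java.util.List;

public final class RunningCount {
    // Mirrors the update function typically passed to updateStateByKey:
    // combine the counts seen in the current batch with the previous state.
    static int update(List<Integer> newValues, int previousCount) {
        int sum = previousCount;
        for (Integer v : newValues) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Batch 1: "spark" appears once -> running count 1
        int afterBatch1 = update(Arrays.asList(1), 0);
        // Batch 2: "spark" appears twice more -> running count 3
        int afterBatch2 = update(Arrays.asList(1, 1), afterBatch1);
        System.out.println("spark " + afterBatch2); // prints "spark 3"
    }
}
```

Without this state-carrying step, each batch starts from zero, which is why the output stays at "spark 1" per batch instead of accumulating to 3.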
Thanks and regards
Shweta Jadhav
----- Sean Owen <so...@cloudera.com> wrote: -----
To: Jadhav Shweta <jadhav.shw...@tcs.com>
From: Sean Owen <so...@cloudera.com>
Date: 02
What exactly should I assign in
Integer newSum = ... // add the new values with the previous running count to get the new count
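One way to fill in that line, assuming values holds the counts for the key in the current batch and state holds the previous running count (this is a self-contained sketch using java.util.Optional; Spark's Java API has historically used its own Optional type, so treat the signature as illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public final class NewSumExample {
    // newSum = previous running count plus the values from the current batch.
    static Integer newSum(List<Integer> values, Optional<Integer> state) {
        Integer sum = state.orElse(0); // start from the previous count, or 0 if none
        for (Integer v : values) {
            sum += v; // add the new values to get the new count
        }
        return sum;
    }

    public static void main(String[] args) {
        // previous count 1, two new occurrences -> 3
        System.out.println(newSum(Arrays.asList(1, 1), Optional.of(1))); // prints 3
    }
}
```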
Thanks and regards
Shweta Jadhav
----- VISHNU SUBRAMANIAN <johnfedrickena...@gmail.com> wrote: -----
To: Jadhav Shweta <jadhav.shw...@tcs.com>
From: VISHNU SUBRAMANIAN
Hi,
I am running a streaming word count program in Spark standalone mode on a cluster
of four machines.
public final class JavaKafkaStreamingWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");
    static transient Configuration conf;
    private
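The snippet above is truncated, but the SPACE pattern is the piece that tokenizes each incoming line into words before counting. A self-contained sketch of that step (class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public final class SplitExample {
    // Same pattern as in the word count: split on a single space.
    private static final Pattern SPACE = Pattern.compile(" ");

    static String[] splitLine(String line) {
        return SPACE.split(line);
    }

    public static void main(String[] args) {
        String[] words = splitLine("hello spark streaming");
        System.out.println(Arrays.toString(words)); // prints [hello, spark, streaming]
    }
}
```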
Hi,
I am trying one transformation by calling a Scala method.
This Scala method returns a MutableList[AvroObject]:
def processRecords(id: String, list1: Iterable[(String, GenericRecord)]):
    scala.collection.mutable.MutableList[AvroObject]
Hence, the output of the transformation is