Like this?

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

val add_msgs = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("add"))

val delete_msgs = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("delete"))

val update_msgs = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("update"))

val merge_msgs = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("merge"))
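
To answer the packaging question: all four streams hang off the same
StreamingContext, so they run inside a single application built into a
single jar. A minimal sketch of wiring them up, with println as a
stand-in for your real per-operation handlers:

// All four streams are driven by one StreamingContext, i.e. one job.
// Replace the println calls with your actual Add/Delete/Update/Merge logic.
add_msgs.foreachRDD(rdd => rdd.foreach { case (_, v) => println(s"add: $v") })
delete_msgs.foreachRDD(rdd => rdd.foreach { case (_, v) => println(s"delete: $v") })
update_msgs.foreachRDD(rdd => rdd.foreach { case (_, v) => println(s"update: $v") })
merge_msgs.foreachRDD(rdd => rdd.foreach { case (_, v) => println(s"merge: $v") })

ssc.start()            // a single start() runs all four streams
ssc.awaitTermination()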


It should be fine as long as the batch duration is the same for all of
them. Now, if you want a single stream with all the data in it, you can do
it like this:

val all_msgs = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("delete", "add", "update", "merge"))
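
One caveat with the merged stream: the (key, value) tuples it yields don't
carry the topic name, so you can't tell the operations apart directly. You
can recover the topic from the RDD's offset ranges (the usual
HasOffsetRanges pattern for the direct stream). A sketch, again with
println standing in for your real handlers:

import org.apache.spark.TaskContext
import org.apache.spark.streaming.kafka.HasOffsetRanges

all_msgs.foreachRDD { rdd =>
  // For a direct stream, RDD partition i corresponds to offsetRanges(i),
  // which records the Kafka topic and partition it was read from.
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  rdd.foreachPartition { iter =>
    val topic = offsetRanges(TaskContext.get.partitionId).topic
    iter.foreach { case (_, value) =>
      topic match {
        case "add"    => println(s"add: $value")
        case "delete" => println(s"delete: $value")
        case "update" => println(s"update: $value")
        case "merge"  => println(s"merge: $value")
        case _        => () // no other topics are subscribed
      }
    }
  }
}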

Thanks
Best Regards

On Fri, Jun 19, 2015 at 2:26 PM, Manohar753 <manohar.re...@happiestminds.com> wrote:

> Hi Everybody,
>
> I have four Kafka topics, one for each separate operation (Add, Delete,
> Update, Merge), so Spark will also have four consumed streams. How should
> I run my Spark job here?
>
> Should I run four Spark jobs separately?
> Is there any way to bundle all the streams into a single jar and run them
> as a single job?
>
> Thanks in advance.
