Using spark.streaming.concurrentJobs for this probably isn't a good idea, as it allows the next batch to start processing before the current one is finished, which may have unintended consequences.
Why can't you use a single stream with all the topics you care about, or multiple streams if you're e.g. joining them? There's a rough sketch of the single-stream approach below the quoted message.

On Wed, Dec 16, 2015 at 3:00 PM, jpocalan <jpoca...@gmail.com> wrote:
> Nevermind, I found the answer to my question.
> The following Spark configuration property will allow you to process
> multiple KafkaDirectStreams in parallel:
> --conf spark.streaming.concurrentJobs=<something greater than 1>
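
For reference, a minimal sketch of the single-stream approach with the Spark 1.x direct stream API; the broker addresses, topic names, and batch interval here are just placeholders:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("multi-topic-direct-stream")
val ssc = new StreamingContext(conf, Seconds(10))

// Placeholder broker list and topics -- substitute your own.
val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
val topics = Set("topicA", "topicB", "topicC")

// One direct stream subscribing to every topic; each Kafka partition
// across all topics becomes a partition of the same batch RDD, so the
// batch is already processed in parallel without concurrentJobs.
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)

stream.foreachRDD { rdd =>
  // (key, value) pairs from all subscribed topics arrive in the same RDD
  rdd.foreach { case (_, value) => println(value) }
}

ssc.start()
ssc.awaitTermination()

If you do need per-topic handling, you can still recover which topic a partition came from via the RDD's offset ranges (HasOffsetRanges) instead of running separate streams.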