From Kafka's point of view, Spark can act as both a consumer and a producer.

You can create a Kafka consumer in Spark that subscribes to a topic and reads its 
feed, and after processing the data in Spark you can use a producer to write the 
results back to another topic.
So Spark sits alongside Kafka, and you can use Kafka as the channel for collecting 
and delivering the data.

That’s what I am doing, at least.
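
For reference, here is a rough sketch of that pattern using Structured Streaming 
(it needs the spark-sql-kafka-0-10 package on the classpath). The broker address 
and the topic names "events" and "processed-events" are just placeholders for 
whatever you use in your own setup:

    import org.apache.spark.sql.SparkSession

    object KafkaInOut {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-in-out")
          .getOrCreate()

        // Consumer side: subscribe to a Kafka topic and read it as a stream.
        val input = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load()

        // Process the data in Spark (here: just cast the payload to text).
        val processed = input.selectExpr(
          "CAST(key AS STRING)", "CAST(value AS STRING)")

        // Producer side: write the processed records back to another topic.
        val query = processed.writeStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("topic", "processed-events")
          .option("checkpointLocation", "/tmp/kafka-in-out-checkpoint")
          .start()

        query.awaitTermination()
      }
    }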

> On 22 Mar 2017, at 08:08, Adaryl Wakefield <adaryl.wakefi...@hotmail.com> 
> wrote:
> 
> I’m a little confused on how to use Kafka and Spark together. Where exactly 
> does Spark lie in the architecture? Does it sit on the other side of the 
> Kafka producer? Does it feed the consumer? Does it pull from the consumer?
> 
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.massstreet.net
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
