This would be Michael and Guozhang's job to answer, but I'd look at two
options if I were you:

1) If the connector you need already exists (
http://www.confluent.io/product/connectors), then you just need to run it.
Kafka Connect is driven by a simple REST API (submit a job, check job
status, etc.), so I wouldn't count that as "learning a framework".
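For example, submitting and monitoring a sink job is just a couple of HTTP
calls. A rough sketch (the connector name, topic, and connection URL below
are made up for illustration; it assumes a Connect worker on
localhost:8083 with the Confluent JDBC sink connector installed):

```shell
# Write out a hypothetical JDBC sink job configuration.
cat > /tmp/jdbc-sink.json <<'EOF'
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "events",
    "connection.url": "jdbc:postgresql://localhost/mydb"
  }
}
EOF

# Submit the job to the Connect worker (commented out here, since it
# needs a running worker):
# curl -X POST -H "Content-Type: application/json" \
#      --data @/tmp/jdbc-sink.json http://localhost:8083/connectors

# Check the job's status:
# curl http://localhost:8083/connectors/jdbc-sink/status
```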

2) I believe I've seen people implement "writes to a database" in a
low-level KafkaProcessor. Maybe try Google or search the mailing list?
Guozhang and Michael can probably add details.
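A minimal sketch of what that could look like with the low-level Processor
API (the JDBC URL and table are placeholders; error handling, retries, and
batching are all omitted):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Sketch: a Processor that writes each record to a relational
// database over JDBC, acting as a "sink" at the end of a topology.
public class DbWriterProcessor implements Processor<String, String> {

    private Connection conn;
    private PreparedStatement insert;

    @Override
    public void init(ProcessorContext context) {
        try {
            conn = DriverManager.getConnection("jdbc:postgresql://localhost/mydb");
            insert = conn.prepareStatement(
                "INSERT INTO events (record_key, record_value) VALUES (?, ?)");
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void process(String key, String value) {
        try {
            insert.setString(1, key);
            insert.setString(2, value);
            insert.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void punctuate(long timestamp) {
        // No periodic work needed for this sink.
    }

    @Override
    public void close() {
        try {
            conn.close();
        } catch (SQLException e) {
            // Ignore errors on shutdown.
        }
    }
}
```

And since KStream has a process() method, a processor like this can hang
off a topology built with the DSL, e.g.
builder.stream("events").process(DbWriterProcessor::new);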

Gwen

On Wed, Jun 1, 2016 at 11:31 PM, Avi Flax <avi.f...@parkassist.com> wrote:

> On 6/1/16, 11:53, "Gwen Shapira" <g...@confluent.io> wrote:
>
> > Currently this is not part of the DSL and needs to be done separately
> > through KafkaConnect. Here's an example:
> > http://www.confluent.io/blog/hello-world-kafka-connect-kafka-streams
>
> Ah, I see! Thank you! And thanks for the super-fast reply!
>
> > In the future we want to integrate Connect and Streams better, so you
> could
> > do something like "builder.stream(..).fromCassandra()" and
> > "stream.toElastic()" and a connector will be added that does the "last
> > mile" of data integration.
>
> That sounds great!
>
> In the meantime, while I understand that Connect is the recommended
> approach for this sort of thing, I’d like to avoid learning two frameworks
> simultaneously if possible — and not just for me, but also for my team; is
> it possible to use the low-level Streams API to implement the sources and
> sinks I’ve described? And if so, is it possible to integrate sources and
> sinks defined with that lower-level API with a topology defined with the
> DSL?
>
> Thanks so much!
> Avi
>
> Software Architect @ Park Assist » We’re hiring!
>
>
