Hi!

If there is a Cassandra InputFormat for Hadoop, you can also use it with
Flink's Hadoop InputFormat wrapper.
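
For example, something along these lines should work (a rough sketch,
assuming Cassandra's Hadoop CqlInputFormat, which emits (Long, Row) pairs
in recent Cassandra versions, is on the classpath; the host, keyspace, and
table names are placeholders and you will probably need a few more
ConfigHelper settings for your cluster):

import com.datastax.driver.core.Row;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlInputFormat;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.mapreduce.Job;

public class CassandraHadoopRead {

  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    Job job = Job.getInstance();
    // Cassandra connection settings (placeholder values)
    ConfigHelper.setInputInitialAddress(job.getConfiguration(), "127.0.0.1");
    ConfigHelper.setInputColumnFamily(job.getConfiguration(), "my_keyspace", "my_table");
    ConfigHelper.setInputPartitioner(job.getConfiguration(), "Murmur3Partitioner");

    // Wrap Cassandra's Hadoop InputFormat so Flink can read from it
    DataSet<Tuple2<Long, Row>> rows = env.createInput(
        new HadoopInputFormat<Long, Row>(new CqlInputFormat(), Long.class, Row.class, job));

    // count() triggers the execution of the job
    System.out.println("Read " + rows.count() + " rows");
  }
}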

If you want to implement a Flink-specific source, extending InputFormat is
the right thing to do. A user has started to implement a Cassandra sink in
this fork (you may be able to reuse some code or testing infrastructure):
https://github.com/rzvoncek/flink/tree/zvo/cassandraSink
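
As a rough starting point for such a Flink-specific source, a simple
non-parallel InputFormat built directly on the DataStax Java driver could
look like the sketch below (the class name, host, and query are made up
for illustration; splitting the token range across parallel instances,
error handling, etc. are left out):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import org.apache.flink.api.common.io.GenericInputFormat;
import org.apache.flink.api.common.io.NonParallelInput;
import org.apache.flink.core.io.GenericInputSplit;

import java.io.IOException;
import java.util.Iterator;

// NonParallelInput restricts the source to a single parallel instance
public class CassandraRowInputFormat extends GenericInputFormat<Row> implements NonParallelInput {

  private final String host;
  private final String query;

  // the driver objects are not serializable, so they are created in open()
  private transient Cluster cluster;
  private transient Session session;
  private transient Iterator<Row> rows;

  public CassandraRowInputFormat(String host, String query) {
    this.host = host;
    this.query = query;
  }

  @Override
  public void open(GenericInputSplit split) throws IOException {
    super.open(split);
    cluster = Cluster.builder().addContactPoint(host).build();
    session = cluster.connect();
    rows = session.execute(query).iterator();
  }

  @Override
  public boolean reachedEnd() {
    return !rows.hasNext();
  }

  @Override
  public Row nextRecord(Row reuse) {
    return rows.next();
  }

  @Override
  public void close() throws IOException {
    if (cluster != null) {
      cluster.close();
    }
  }
}

You would then create the DataSet via
env.createInput(new CassandraRowInputFormat("127.0.0.1", "SELECT * FROM my_keyspace.my_table")).
A parallel version would implement InputFormat directly and assign a token
range to each input split.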

Greetings,
Stephan

On Thu, Jul 2, 2015 at 11:34 AM, tambunanw <if05...@gmail.com> wrote:

> Hi All,
>
> I want to know if there's a custom data source available for Cassandra?
>
> From my observation it seems that we need to implement one by extending
> InputFormat. Is there any guide on how to do this robustly?
>
>
> Cheers
>
>
>
