[ https://issues.apache.org/jira/browse/FLINK-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15666756#comment-15666756 ]
Jakub Nowacki commented on FLINK-4498:
--------------------------------------

Adding to the above, the Scala example in the documentation is in fact Java code and does not work. The Scala code to create a sink looks as follows:

{code:scala}
CassandraSink.addSink(input.javaStream)
  .setClusterBuilder(new ClusterBuilder() {
    override def buildCluster(builder: Cluster.Builder): Cluster = {
      builder.addContactPoint("127.0.0.1").build()
    }
  })
  .build()
{code}

> Better Cassandra sink documentation
> -----------------------------------
>
>                 Key: FLINK-4498
>                 URL: https://issues.apache.org/jira/browse/FLINK-4498
>             Project: Flink
>          Issue Type: Improvement
>          Components: Cassandra Connector, Documentation
>    Affects Versions: 1.1.0
>            Reporter: Elias Levy
>
> The Cassandra sink documentation is somewhat muddled and could be improved.
>
> For instance, the fact that it only supports tuples and POJOs that use
> DataStax Mapper annotations is only mentioned in passing, and it is not clear
> that the reference to tuples only applies to Flink Java tuples and not Scala
> tuples.
>
> The documentation also does not mention that setQuery() is only necessary for
> tuple streams.
>
> The explanation of the write-ahead log could use some cleaning up to clarify
> when it is appropriate to use, ideally with an example. Maybe this would be
> best as a blog post to expand on the type of non-deterministic streams this
> applies to.
>
> It would also be useful to mention that tuple elements will be mapped to
> Cassandra columns using the DataStax Java driver's default encoders, which
> are somewhat limited (e.g. to write to a blob column the type in the tuple
> must be a java.nio.ByteBuffer and not just a byte[]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
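For reference, a slightly fuller sketch of the tuple-stream case from Scala, assuming the connector API referenced above (CassandraSink, ClusterBuilder, setQuery) and a hypothetical example.wordcount keyspace/table. The stream has to carry Flink Java tuples rather than Scala tuples, since the sink picks its tuple or POJO variant from the stream's type information:

{code:scala}
import com.datastax.driver.core.Cluster
import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
import org.apache.flink.streaming.api.scala.DataStream
import org.apache.flink.streaming.connectors.cassandra.{CassandraSink, ClusterBuilder}

// Sketch only: `words` must already carry Flink *Java* tuples
// (org.apache.flink.api.java.tuple.Tuple2), not Scala tuples.
// The keyspace/table "example.wordcount" is made up for illustration.
def addWordCountSink(words: DataStream[JTuple2[String, java.lang.Long]]): Unit = {
  CassandraSink.addSink(words.javaStream)
    // setQuery() is only needed for tuple streams; each tuple field is bound
    // to a '?' placeholder using the DataStax driver's default codecs.
    .setQuery("INSERT INTO example.wordcount (word, count) VALUES (?, ?);")
    .setClusterBuilder(new ClusterBuilder() {
      override def buildCluster(builder: Cluster.Builder): Cluster =
        builder.addContactPoint("127.0.0.1").build()
    })
    .build()
}
{code}

For a POJO stream the setQuery() call is omitted and the class carries the DataStax Mapper annotations (e.g. @Table) instead; the builder also exposes enableWriteAheadLog() for the non-deterministic-stream case mentioned in the description, which would deserve its own example.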