hequn8128 commented on a change in pull request #7418: FLINK-11053 Documentation - update scala sample code for bucketing sink according
URL: https://github.com/apache/flink/pull/7418#discussion_r246237842
 
 

 ##########
 File path: docs/dev/connectors/filesystem_sink.md
 ##########
 @@ -117,11 +117,11 @@ input.addSink(sink);
 </div>
 <div data-lang="scala" markdown="1">
 {% highlight scala %}
-val input: DataStream[Tuple2[IntWritable, Text]] = ...
+val input: DataStream[(IntWritable, Text)] = ???
 
-val sink = new BucketingSink[String]("/base/path")
-sink.setBucketer(new DateTimeBucketer[String]("yyyy-MM-dd--HHmm", ZoneId.of("America/Los_Angeles")))
-sink.setWriter(new SequenceFileWriter[IntWritable, Text]())
+val sink = new BucketingSink[(IntWritable, Text)]("/base/path")
+sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HHmm"))
+sink.setWriter(new StringWriter[(IntWritable, Text)]())
 
 Review comment:
  @123avi What I mean is: don't use the Scala tuple; use the Java tuple even in the Scala example.
  `val input: DataStream[Tuple2[A, B]]` is different from `val input: DataStream[(A, B)]`: `org.apache.flink.api.java.tuple.Tuple2` is a Flink class, not Scala's built-in tuple.
  
  I wrote sample code for you. Take a look at the code [here](https://github.com/hequn8128/flink/blob/bucketSinkTest/flink-connectors/flink-connector-filesystem/src/test/scala/org/apache/flink/streaming/connectors/fs/bucketing/ScalaBucketingSinkTest.scala#L62). You can also try running the test; it works well.
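The distinction the reviewer is pointing at can be shown in plain Scala. Below is a minimal sketch using a hypothetical `JavaTuple2` class as a stand-in for `org.apache.flink.api.java.tuple.Tuple2` (so the snippet runs without Flink on the classpath): a Scala tuple literal `(A, B)` desugars to `scala.Tuple2[A, B]`, which is an unrelated type.

```scala
// Hypothetical stand-in for org.apache.flink.api.java.tuple.Tuple2,
// used here only so the snippet is self-contained without Flink.
// The real Flink class likewise exposes public fields f0 and f1.
class JavaTuple2[A, B](val f0: A, val f1: B)

// The Scala tuple literal (1, "a") is sugar for scala.Tuple2(1, "a").
val scalaTuple: (Int, String) = (1, "a")

// The Java-style tuple is a completely separate class.
val javaTuple = new JavaTuple2(1, "a")

// Because the two types are unrelated, a DataStream[(A, B)] cannot be
// used where a DataStream[Tuple2[A, B]] (Flink's Java tuple) is expected.
assert(scalaTuple.isInstanceOf[scala.Tuple2[_, _]])
assert(!javaTuple.isInstanceOf[scala.Tuple2[_, _]])
```

This is why the review asks for the Java `Tuple2` in the documentation's Scala snippet: `SequenceFileWriter[IntWritable, Text]` operates on the Flink tuple type, not on Scala's.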

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
