[ https://issues.apache.org/jira/browse/BEAM-6207?focusedWorklogId=189535&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-189535 ]
ASF GitHub Bot logged work on BEAM-6207:
----------------------------------------
                Author: ASF GitHub Bot
            Created on: 24/Jan/19 15:42
            Start Date: 24/Jan/19 15:42
    Worklog Time Spent: 10m
      Work Description: kkucharc commented on pull request #7612: [BEAM-6207] Added option to publish synthetic data to Kafka topic.
URL: https://github.com/apache/beam/pull/7612#discussion_r250654128

##########
File path: sdks/java/testing/load-tests/src/main/java/org/apache/beam/sdk/loadtests/SyntheticDataPubSubPublisher.java
##########

@@ -73,6 +80,11 @@
     String getInsertionPipelineTopic();

     void setInsertionPipelineTopic(String topic);
+
+    @Description("Kafka server address (optional)")

Review comment:
   I would keep the note about this option being optional only in the documentation; the lack of `@Validation.Required` is enough. Usually, when an option is not required, it has a `@Default.` value. But as long as we treat this pipeline option as the condition for choosing between Kafka and PubSub, we can either leave it as it is now, or change it to the default value `""` and adjust the condition on line 98. WDYT?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------
    Worklog Id: (was: 189535)
    Time Spent: 0.5h  (was: 20m)

> extend "Data insertion Pipeline" with Kafka IO
> ----------------------------------------------
>
>                 Key: BEAM-6207
>                 URL: https://issues.apache.org/jira/browse/BEAM-6207
>             Project: Beam
>          Issue Type: Sub-task
>          Components: io-java-kafka, testing
>            Reporter: Lukasz Gajowy
>            Assignee: Michal Walenia
>            Priority: Trivial
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Since we now have the Data insertion pipeline based on PubSubIO, it can be
> easily extended with KafkaIO if needed. The same data could then be published to
> any of the sinks, leaving the choice open and enabling the data insertion
> pipeline for Flink.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
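The reviewer's suggestion above can be sketched as follows. In Beam, a pipeline option declared without `@Validation.Required` may be annotated with `@Default.String("")`, so the getter returns the empty string instead of `null` when the user does not set it; the sink-selection condition then branches on emptiness. The snippet below is a minimal, self-contained illustration of that condition (the class and method names are hypothetical, not the actual `SyntheticDataPubSubPublisher` code):

```java
// Hypothetical sketch of the reviewer's proposal: assume the Kafka server
// address option carries @Default.String(""), so it is never null and the
// Kafka-vs-PubSub decision reduces to an emptiness check.
public class SinkChoiceSketch {

  // Stand-in for: options.getKafkaBootstrapServerAddress() after
  // @Default.String("") has been applied by the options factory.
  static boolean shouldUseKafka(String kafkaServerAddress) {
    // The null check is redundant once a default is set, but kept for safety.
    return kafkaServerAddress != null && !kafkaServerAddress.isEmpty();
  }

  public static void main(String[] args) {
    // Default value "" -> publish to PubSub; explicit address -> Kafka.
    System.out.println(shouldUseKafka(""));
    System.out.println(shouldUseKafka("localhost:9092"));
  }
}
```

With this shape, the option's optionality is visible in the interface itself (via the default value) rather than only in the `@Description` text.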