[ https://issues.apache.org/jira/browse/BEAM-6207?focusedWorklogId=192211&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-192211 ]

ASF GitHub Bot logged work on BEAM-6207:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Jan/19 11:41
            Start Date: 30/Jan/19 11:41
    Worklog Time Spent: 10m 
      Work Description: lgajowy commented on pull request #7612: [BEAM-6207] Added option to publish synthetic data to Kafka topic.
URL: https://github.com/apache/beam/pull/7612#discussion_r252219843
 
 

 ##########
 File path: sdks/java/testing/load-tests/src/main/java/org/apache/beam/sdk/loadtests/SyntheticDataPublisher.java
 ##########
 @@ -50,15 +54,22 @@
  * <pre>
  *  ./gradlew :beam-sdks-java-load-tests:run -PloadTest.args='
  *    --insertionPipelineTopic=TOPIC_NAME
+ *    --kafkaBootstrapServerAddress=SERVER_ADDRESS
+ *    --kafkaTopic=KAFKA_TOPIC_NAME
  *    --sourceOptions={"numRecords":1000,...}'
- *    -PloadTest.mainClass="org.apache.beam.sdk.loadtests.SyntheticDataPubSubPublisher"
+ *    -PloadTest.mainClass="org.apache.beam.sdk.loadtests.SyntheticDataPublisher"
  *  </pre>
+ *
+ * If parameters related to Kafka are provided, the publisher writes to Kafka. If both pubsub topic
+ * and Kafka params are present, records will be written to both sinks.
 
 Review comment:
   It would be even clearer if the sentence gave instructions for both Kafka and PubSub, so I suggest:
   
   ```
   If parameters related to a specific sink are provided (Kafka or PubSub), the pipeline writes to that sink. Writing to both sinks is also acceptable.
   ```
   (Or something similar, but covering both sinks)
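
For context, below is a minimal sketch of the dual-sink behaviour the javadoc above describes: write to whichever sink has its parameters set, and to both when both are configured. The option names (`getKafkaTopic`, `getInsertionPipelineTopic`, etc.) and the stand-in `Create` source are assumptions for illustration only, not the actual SyntheticDataPublisher code.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.StringSerializer;

public class DualSinkPublisherSketch {

  /** Hypothetical options mirroring the flags shown in the javadoc above. */
  public interface Options extends PipelineOptions {
    String getKafkaBootstrapServerAddress();
    void setKafkaBootstrapServerAddress(String value);

    String getKafkaTopic();
    void setKafkaTopic(String value);

    String getInsertionPipelineTopic();
    void setInsertionPipelineTopic(String value);
  }

  public static void main(String[] args) {
    Options options = PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);
    Pipeline pipeline = Pipeline.create(options);

    // Stand-in for the synthetic source; the real publisher generates records
    // from --sourceOptions rather than a hard-coded list.
    PCollection<String> records = pipeline.apply("Generate", Create.of("record-1", "record-2"));

    // Kafka sink is used only when both Kafka parameters were supplied.
    if (options.getKafkaBootstrapServerAddress() != null && options.getKafkaTopic() != null) {
      records.apply(
          "Write to Kafka",
          KafkaIO.<Void, String>write()
              .withBootstrapServers(options.getKafkaBootstrapServerAddress())
              .withTopic(options.getKafkaTopic())
              .withValueSerializer(StringSerializer.class)
              .values());
    }

    // Pub/Sub sink is used only when a topic was supplied; both sinks may be active at once.
    if (options.getInsertionPipelineTopic() != null) {
      records.apply(
          "Write to PubSub", PubsubIO.writeStrings().to(options.getInsertionPipelineTopic()));
    }

    pipeline.run().waitUntilFinish();
  }
}
```

Because both conditional writes are applied to the same PCollection, the two sinks receive identical records, which matches the intent stated in the issue description below.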
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 192211)

> extend "Data insertion Pipeline" with Kafka IO
> ----------------------------------------------
>
>                 Key: BEAM-6207
>                 URL: https://issues.apache.org/jira/browse/BEAM-6207
>             Project: Beam
>          Issue Type: Sub-task
>          Components: io-java-kafka, testing
>            Reporter: Lukasz Gajowy
>            Assignee: Michal Walenia
>            Priority: Trivial
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Now that we have the data insertion pipeline based on PubSubIO, it can easily be 
> extended with KafkaIO if needed. The same data could then be published to either 
> sink, leaving the choice open and enabling the data insertion pipeline for Flink.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
