Naireen commented on code in PR #31347:
URL: https://github.com/apache/beam/pull/31347#discussion_r1617897202


##########
sdks/java/io/kafka/src/test/java/org/apache/beam/sdk/io/kafka/KafkaIOTest.java:
##########
@@ -616,6 +624,58 @@ public void testRiskyConfigurationWarnsProperly() {
     p.run();
   }
 
+  @Test
+  public void testRiskyConfigurationWarnsProperlyWithNumShardsNotSet() {
+    int numElements = 1000;
+
+    PCollection<Long> input =
+        p.apply(
+                mkKafkaReadTransform(numElements, numElements, new ValueAsTimestampFn(), true, 0)
+                    .withConsumerConfigUpdates(
+                        ImmutableMap.of(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true))
+                    .withoutMetadata())
+            .apply(Values.create());
+
+    addCountingAsserts(input, numElements);
+
+    kafkaIOExpectedLogs.verifyWarn(
+        "This will redistribute the load across the same number of shards as the Kafka source.");

Review Comment:
   You're right; I've removed that comment and updated the warning message to:
   
   "This will create a key per record, which is sub-optimal for most use cases." This is still an implementation detail, but it is a Beam implementation detail rather than one coming from the underlying runner.
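   The trade-off behind the updated warning can be illustrated without Beam at all. Below is a minimal plain-Java sketch (a hypothetical illustration, not code from this PR) contrasting a bounded key space, where records hash into a fixed number of groups, against a key per record, where every element becomes its own single-element group:
   
   ```java
   import java.util.List;
   import java.util.Map;
   import java.util.function.Function;
   import java.util.stream.Collectors;
   import java.util.stream.LongStream;
   
   public class KeyPerRecordSketch {
     public static void main(String[] args) {
       List<Long> records = LongStream.range(0, 1000).boxed().collect(Collectors.toList());
   
       // Bounded key space: records hash into numKeys groups, so the
       // downstream shuffle fans out to at most numKeys shards.
       int numKeys = 4;
       Map<Long, List<Long>> bounded =
           records.stream().collect(Collectors.groupingBy(r -> r % numKeys));
   
       // Key per record: 1000 records become 1000 single-element groups,
       // paying shuffle overhead per element with no batching benefit.
       Map<Long, List<Long>> perRecord =
           records.stream().collect(Collectors.groupingBy(Function.identity()));
   
       System.out.println(bounded.size());   // 4
       System.out.println(perRecord.size()); // 1000
     }
   }
   ```
   
   This is why a per-record key is flagged as sub-optimal for most use cases, even though it remains a valid default when no shard count is configured.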



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@beam.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
