gemini-code-assist[bot] commented on code in PR #37377:
URL: https://github.com/apache/beam/pull/37377#discussion_r3133415730


##########
runners/google-cloud-dataflow-java/src/test/java/org/apache/beam/runners/dataflow/options/DataflowPipelineOptionsTest.java:
##########
@@ -322,4 +322,13 @@ public void destroy() {
          TimeoutException.class, () -> DefaultGcpRegionFactory.getRegionFromGcloudCli(1L));
     }
   }
+
+  @Test
+  public void testDiskProvisionedIopsAndThroughput() {
+    DataflowPipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
+    options.setDiskProvisionedIops(1000);
+    options.setDiskProvisionedThroughputMibps(100);
+    assertEquals(Integer.valueOf(1000), options.getDiskProvisionedIops());
+    assertEquals(Integer.valueOf(100), options.getDiskProvisionedThroughputMibps());

Review Comment:
   ![high](https://www.gstatic.com/codereviewagent/high-priority.svg)
   
   These assertions will fail because `options.getDiskProvisionedIops()` 
returns a `Long`, while `Integer.valueOf(1000)` creates an `Integer`. In Java, 
an `Integer` object is never equal to a `Long` object via `.equals()`. You 
should use `Long.valueOf()` or a long literal (e.g., `1000L`) to match the 
return type of the getter.
   
   ```suggestion
       assertEquals(Long.valueOf(1000), options.getDiskProvisionedIops());
       assertEquals(Long.valueOf(100), options.getDiskProvisionedThroughputMibps());
   ```
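
   For context, a minimal standalone snippet (hypothetical class name, not part of the PR) showing why a boxed `Integer` never equals a boxed `Long`, even for the same numeric value:

   ```java
   public class BoxedEqualityDemo {
       public static void main(String[] args) {
           Integer boxedInt = Integer.valueOf(1000);
           Long boxedLong = Long.valueOf(1000L);

           // Integer.equals() checks the argument's runtime type first, so it
           // returns false for any Long, even when the numeric values match.
           System.out.println(boxedInt.equals(boxedLong)); // prints false
           System.out.println(boxedLong.equals(boxedInt)); // prints false

           // Comparing the unboxed primitive values behaves as expected.
           System.out.println(boxedInt.longValue() == boxedLong.longValue()); // prints true
       }
   }
   ```

   This is why `assertEquals` (which delegates to `.equals()`) fails when the expected and actual arguments have different boxed types.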



##########
runners/google-cloud-dataflow-java/src/main/java/org/apache/beam/runners/dataflow/DataflowPipelineTranslator.java:
##########
@@ -489,6 +489,13 @@ public Job translate(List<DataflowPackage> packages) {
       if (options.getDiskSizeGb() > 0) {
         workerPool.setDiskSizeGb(options.getDiskSizeGb());
       }
+      if (options.getDiskProvisionedIops() != null && options.getDiskProvisionedIops() > 0) {
+        workerPool.setDiskProvisionedIops((long) options.getDiskProvisionedIops());
+      }
+      if (options.getDiskProvisionedThroughputMibps() != null && options.getDiskProvisionedThroughputMibps() > 0) {
+        workerPool.setDiskProvisionedThroughputMibps(
+            (long) options.getDiskProvisionedThroughputMibps());
+      }

Review Comment:
   ![high](https://www.gstatic.com/codereviewagent/high-priority.svg)
   
   The cast to `(long)` is likely incorrect and may cause a compilation error: unboxing the `Long` to a `long` is legal on its own, but the generated client setter will not accept a `long` if it expects an `Integer`. Based on the Dataflow API definition (and the Python SDK changes in this PR using `INT32`), these fields are typically 32-bit integers. Since `options.getDiskProvisionedIops()` returns a `Long`, convert it to an `Integer` (e.g., via `.intValue()`) before passing it to the `workerPool` setter, assuming the generated client library expects an `Integer` for these fields. Alternatively, consider changing the option type to `Integer` in `DataflowPipelineWorkerPoolOptions` for consistency with other fields such as `diskSizeGb`.
   
   ```suggestion
         if (options.getDiskProvisionedIops() != null && options.getDiskProvisionedIops() > 0) {
           workerPool.setDiskProvisionedIops(options.getDiskProvisionedIops().intValue());
         }
         if (options.getDiskProvisionedThroughputMibps() != null && options.getDiskProvisionedThroughputMibps() > 0) {
           workerPool.setDiskProvisionedThroughputMibps(
               options.getDiskProvisionedThroughputMibps().intValue());
         }
   ```
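
   As an aside (illustrative only, not part of the PR): if the `.intValue()` route is taken, note that it truncates silently when the `Long` exceeds the `int` range, whereas the JDK's `Math.toIntExact` fails fast on overflow:

   ```java
   public class LongToIntDemo {
       public static void main(String[] args) {
           // .intValue() keeps only the low 32 bits: 2^32 + 1000 wraps to 1000.
           Long tooBig = 4_294_968_296L;
           System.out.println(tooBig.intValue()); // prints 1000 -- silent wraparound

           // Math.toIntExact throws ArithmeticException instead of truncating.
           System.out.println(Math.toIntExact(1000L)); // prints 1000
           try {
               Math.toIntExact(tooBig);
           } catch (ArithmeticException e) {
               System.out.println("overflow detected");
           }
       }
   }
   ```

   For values validated to be within range (as provisioned IOPS and throughput plausibly are), `.intValue()` is fine; `Math.toIntExact` simply makes the assumption explicit.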



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
