guan404ming commented on code in PR #1025:
URL: https://github.com/apache/mahout/pull/1025#discussion_r2812554496


##########
qdp/qdp-python/src/lib.rs:
##########


Review Comment:
   The empty-tensor check (`input_len == 0`) and null-pointer check (`data_ptr_u64 == 0`) duplicate validation already performed by `validate_cuda_tensor_for_encoding`, which checks `numel == 0` and is called at the top of the function. I think we could remove the redundant checks, or, if they are intentional defense-in-depth, add a brief comment saying so (see the sketch below).
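   If the checks stay, something like this is the kind of comment I mean; a rough sketch only, and the error type/messages are placeholders (I'm assuming `PyValueError` since this is a PyO3 module):

   ```rust
   // Already validated by validate_cuda_tensor_for_encoding (numel == 0) at
   // the top of this function; kept as cheap defense-in-depth so a future
   // refactor that reorders the validation cannot silently drop it.
   if input_len == 0 {
       return Err(PyValueError::new_err("empty tensor"));
   }
   if data_ptr_u64 == 0 {
       return Err(PyValueError::new_err("null CUDA data pointer"));
   }
   ```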



##########
qdp/qdp-python/src/lib.rs:
##########
@@ -1149,6 +1089,143 @@ impl QdpEngine {
             })?;
         Ok(PyQuantumLoader::new(Some(iter)))
     }
+
+    /// Encode directly from a PyTorch CUDA tensor. Internal helper.
+    ///
+    /// Dispatches to the core f32 GPU pointer API for 1D float32 amplitude encoding,
+    /// or to the float64/basis GPU pointer APIs for other dtypes and batch encoding.
+    ///
+    /// Args:
+    ///     data: PyTorch CUDA tensor
+    ///     num_qubits: Number of qubits
+    ///     encoding_method: Encoding strategy (currently only "amplitude")
+    fn _encode_from_cuda_tensor(

Review Comment:
   If I'm not misunderstanding, the f32 path bypasses the shape extraction and any future validation added to `extract_cuda_tensor_info`. Maybe we could extract a lighter shared helper (rough sketch below), or at least note this divergence in a comment.
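   Something along these lines; `CudaTensorInfo`, its fields, and the helper name are hypothetical, just to show the shape of the idea, not the actual types in this PR:

   ```rust
   use pyo3::prelude::*;

   /// Hypothetical lighter helper: both the f32 fast path and the general
   /// path could call this, so validation added later to the extraction
   /// logic has a single home.
   struct CudaTensorInfo {
       data_ptr: u64,
       numel: usize,
       shape: Vec<usize>,
   }

   fn extract_cuda_tensor_info_basic(data: &Bound<'_, PyAny>) -> PyResult<CudaTensorInfo> {
       // data_ptr()/numel() are torch.Tensor methods; shape extracts as a tuple.
       let data_ptr: u64 = data.call_method0("data_ptr")?.extract()?;
       let numel: usize = data.call_method0("numel")?.extract()?;
       let shape: Vec<usize> = data.getattr("shape")?.extract()?;
       Ok(CudaTensorInfo { data_ptr, numel, shape })
   }
   ```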



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
