andygrove commented on a change in pull request #8173:
URL: https://github.com/apache/arrow/pull/8173#discussion_r487438276

##########
File path: rust/arrow/src/compute/kernels/aggregate.rs
##########
@@ -121,7 +121,7 @@ mod tests {
     #[test]
     fn test_primitive_array_float_sum() {
         let a = Float64Array::from(vec![1.1, 2.2, 3.3, 4.4, 5.5]);
-        assert_eq!(16.5, sum(&a).unwrap());
+        assert!(16.5 - sum(&a).unwrap() < f64::EPSILON);

Review comment:
   The results could vary depending on the hardware we are running on. For now we assume everything runs on the CPU, but there is no reason we couldn't have GPU versions of these kernels in the future, and precision could differ in that case.

   https://docs.nvidia.com/cuda/floating-point/index.html

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

