mbutrovich commented on code in PR #2010:
URL: https://github.com/apache/datafusion-comet/pull/2010#discussion_r2208611091


##########
spark/src/test/scala/org/apache/comet/CometExpressionSuite.scala:
##########
@@ -2765,6 +2765,26 @@ class CometExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {
     }
   }
 
+  test("randn expression with random parameters") {
+    val partitionsNumber = Random.nextInt(10) + 1

Review Comment:
   Why use random values if we only run the test once? I see that `"rand expression with random parameters"` already did this, but it feels like we could get lucky with a simple test case that only runs one time. I should double-check whether we seed this RNG (I know we modified some other tests to use fixed seeds).
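   On the seeding question: XorShift-style generators are fully determined by their seed, so fixing the seed makes such a test reproducible while an unseeded draw makes failures hard to replay. A minimal standalone sketch (an illustrative xorshift64, not Spark's actual `XorShiftRandom`; the type name is hypothetical):

```rust
// Minimal xorshift64: the whole sequence is a pure function of the seed,
// so two generators built from the same seed emit identical streams.
struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    fn new(seed: u64) -> Self {
        // xorshift state must be non-zero.
        Self { state: if seed == 0 { 1 } else { seed } }
    }

    fn next(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

fn main() {
    let mut r1 = XorShift64::new(42);
    let mut r2 = XorShift64::new(42);
    let a: Vec<u64> = (0..5).map(|_| r1.next()).collect();
    let b: Vec<u64> = (0..5).map(|_| r2.next()).collect();
    // Same fixed seed -> same sequence, hence a reproducible test run.
    assert_eq!(a, b);
    println!("deterministic: {:?}", &a[..2]);
}
```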



##########
native/spark-expr/src/nondetermenistic_funcs/randn.rs:
##########
@@ -0,0 +1,265 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use crate::nondetermenistic_funcs::rand::XorShiftRandom;
+
+use crate::internal::{evaluate_batch_for_rand, StatefulSeedValueGenerator};
+use arrow::array::RecordBatch;
+use arrow::datatypes::{DataType, Schema};
+use datafusion::common::DataFusionError;
+use datafusion::logical_expr::ColumnarValue;
+use datafusion::physical_expr::PhysicalExpr;
+use std::any::Any;
+use std::fmt::{Display, Formatter};
+use std::hash::{Hash, Hasher};
+use std::sync::{Arc, Mutex};
+
+/// Stateful extension of the Marsaglia polar method
+/// (https://en.wikipedia.org/wiki/Marsaglia_polar_method), which converts a uniform
+/// distribution into the standard normal one used by Apache Spark.
+/// To process batches with an odd number of elements correctly, the generated value
+/// that has not been consumed yet is kept as part of the state.
+/// Note on Comet <-> Spark equivalence:
+/// Under the hood, Spark's algorithm delegates to java.util.Random, which relies on
+/// StrictMath. The latter uses native implementations of floating-point operations
+/// (ln, exp, sin, cos) and guarantees they are stable across platforms.
+/// See: https://github.com/openjdk/jdk/blob/07c9f7138affdf0d42ecdc30adcb854515569985/src/java.base/share/classes/java/util/Random.java#L745
+/// The Rust standard library does not give this guarantee
+/// (https://doc.rust-lang.org/std/primitive.f64.html#method.ln), and using an external
+/// library such as rug (https://docs.rs/rug/latest/rug/) does not help either, since
+/// there is still no guarantee it matches the JVM's StrictMath implementation.
+/// So we can only ensure equivalence between Rust and Spark (JVM) within some error
+/// tolerance.
+
+#[derive(Debug, Clone)]
+struct XorShiftRandomForGaussian {
+    base_generator: XorShiftRandom,
+    next_gaussian: Option<f64>,
+}
+
+impl XorShiftRandomForGaussian {
+    pub fn next_gaussian(&mut self) -> f64 {
+        if let Some(stored_value) = self.next_gaussian {
+            self.next_gaussian = None;
+            return stored_value;
+        }
+        let mut v1: f64;
+        let mut v2: f64;
+        let mut s: f64;
+        loop {
+            v1 = 2f64 * self.base_generator.next_f64() - 1f64;
+            v2 = 2f64 * self.base_generator.next_f64() - 1f64;
+            s = v1 * v1 + v2 * v2;
+            if s < 1f64 && s != 0f64 {
+                break;
+            }
+        }
+        let multiplier = (-2f64 * s.ln() / s).sqrt();
+        self.next_gaussian = Some(v2 * multiplier);
+        v1 * multiplier
+    }
+}
+
+type RandomGaussianState = (i64, Option<f64>);
+
+impl StatefulSeedValueGenerator<RandomGaussianState, f64> for XorShiftRandomForGaussian {
+    fn from_init_seed(init_value: i64) -> Self {
+        XorShiftRandomForGaussian {
+            base_generator: XorShiftRandom::from_init_seed(init_value),
+            next_gaussian: None,
+        }
+    }
+
+    fn from_stored_state(stored_state: RandomGaussianState) -> Self {
+        XorShiftRandomForGaussian {
+            base_generator: XorShiftRandom::from_stored_state(stored_state.0),
+            next_gaussian: stored_state.1,
+        }
+    }
+
+    fn next_value(&mut self) -> f64 {
+        self.next_gaussian()
+    }
+
+    fn get_current_state(&self) -> RandomGaussianState {
+        (self.base_generator.seed, self.next_gaussian)
+    }
+}
+
+#[derive(Debug, Clone)]
+pub struct RandnExpr {
+    seed: Arc<dyn PhysicalExpr>,
+    init_seed_shift: i32,
+    state_holder: Arc<Mutex<Option<RandomGaussianState>>>,
+}
+
+impl RandnExpr {
+    pub fn new(seed: Arc<dyn PhysicalExpr>, init_seed_shift: i32) -> Self {
+        Self {
+            seed,
+            init_seed_shift,
+            state_holder: Arc::new(Mutex::new(None)),
+        }
+    }
+}
+
+impl Display for RandnExpr {
+    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
+        write!(f, "RANDN({})", self.seed)
+    }
+}
+
+impl PartialEq for RandnExpr {
+    fn eq(&self, other: &Self) -> bool {
+        self.seed.eq(&other.seed) && self.init_seed_shift == other.init_seed_shift
+    }
+}
+
+impl Eq for RandnExpr {}
+
+impl Hash for RandnExpr {
+    fn hash<H: Hasher>(&self, state: &mut H) {
+        self.children().hash(state);
+    }
+}
+
+impl PhysicalExpr for RandnExpr {

Review Comment:
   Does it make sense to use `ScalarUDFImpl` instead of `PhysicalExpr`? Not sure if the behavior lines up or not.
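   Independent of the trait question, the stateful trick the module docs describe (caching the unused second Gaussian so odd-length batches stay aligned with one continuous stream) can be sketched standalone. This is illustrative only: a plain LCG stands in for Spark's XorShift, and the `PolarGaussian` name is hypothetical, not the PR's type:

```rust
// Sketch of the Marsaglia polar method with carried-over state. Each polar
// step yields two Gaussians; the second is cached so that splitting the
// stream into odd-sized batches reproduces one long uninterrupted draw.
struct PolarGaussian {
    state: u64,          // LCG state (stand-in for the XorShift seed)
    cached: Option<f64>, // unused second value from the last polar step
}

impl PolarGaussian {
    fn new(seed: u64) -> Self {
        Self { state: seed, cached: None }
    }

    fn next_uniform(&mut self) -> f64 {
        // Knuth's MMIX LCG constants; yields a uniform value in [0, 1).
        self.state = self
            .state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.state >> 11) as f64 / (1u64 << 53) as f64
    }

    fn next_gaussian(&mut self) -> f64 {
        if let Some(v) = self.cached.take() {
            return v; // consume the value left over from the previous pair
        }
        loop {
            let v1 = 2.0 * self.next_uniform() - 1.0;
            let v2 = 2.0 * self.next_uniform() - 1.0;
            let s = v1 * v1 + v2 * v2;
            if s > 0.0 && s < 1.0 {
                let m = (-2.0 * s.ln() / s).sqrt();
                self.cached = Some(v2 * m); // keep the second Gaussian
                return v1 * m;
            }
        }
    }
}

fn main() {
    // Drawing 6 values at once must equal drawing two batches of 3,
    // which is exactly why the cached value belongs in the stored state.
    let mut whole = PolarGaussian::new(7);
    let long: Vec<f64> = (0..6).map(|_| whole.next_gaussian()).collect();

    let mut chunked = PolarGaussian::new(7);
    let mut pieces: Vec<f64> = (0..3).map(|_| chunked.next_gaussian()).collect();
    pieces.extend((0..3).map(|_| chunked.next_gaussian()));
    assert_eq!(long, pieces);
    println!("odd-batch split matches: {:?}", &long[..2]);
}
```

   If the cached value were dropped between batches, the chunked stream would diverge from the continuous one after the first odd-length batch.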



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
