colin fang created SPARK-27759:
----------------------------------

             Summary: Do not auto cast array<double> to np.array in vectorized udf
                 Key: SPARK-27759
                 URL: https://issues.apache.org/jira/browse/SPARK-27759
             Project: Spark
          Issue Type: Improvement
          Components: PySpark, SQL
    Affects Versions: 2.4.3
            Reporter: colin fang
{code:python}
pd_df = pd.DataFrame({'x': np.random.rand(11, 3, 5).tolist()})
df = spark.createDataFrame(pd_df).cache()
{code}

Each element in x is a list of lists, as expected.

{code:python}
df.toPandas()['x']
# 0 [[0.08669612955959993, 0.32624430522634495, 0....
# 1 [[0.29838166086156914, 0.008550172904516762, 0...
# 2 [[0.641304534802928, 0.2392047548381877, 0.555...
{code}

{code:python}
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

def my_udf(x):
    # Hack to see what's inside a udf
    raise Exception(x.values.shape, x.values[0].shape, x.values[0][0].shape,
                    np.stack(x.values).shape)
    return pd.Series(x.values)

my_udf = pandas_udf(my_udf, returnType=DoubleType())
df.withColumn('y', my_udf('x')).show()

# Exception: ((2,), (3,), (5,), (2, 3))
{code}

A batch (here of size 2) of `x` is converted to a pd.Series; however, each element of that Series is now a numpy 1-d array of numpy 1-d arrays. Nested 1-d numpy arrays are inconvenient to work with inside a udf. For example, I need an ndarray of shape (2, 3, 5) in the udf so that I can make use of numpy's vectorized operations. If I were given the lists of lists intact, I could simply do `np.stack(x.values)`. However, that doesn't work here, because what I receive is a nested numpy 1-d array: as the exception shows, `np.stack(x.values)` yields shape (2, 3), an array of objects rather than a (2, 3, 5) float array.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
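As a workaround under the current behavior, the nested 1-d arrays can be re-stacked level by level into a single contiguous ndarray. This is a minimal sketch outside Spark: the `nested_row` helper is hypothetical scaffolding that simulates what the udf appears to receive (a Series whose elements are object arrays of float arrays, mirroring the (2, 3, 5) example above), not Spark's actual conversion code.

```python
import numpy as np
import pandas as pd

def nested_row(n_inner=3, length=5):
    # Hypothetical stand-in for one batch element as seen inside the udf:
    # a 1-d object array whose elements are 1-d float arrays.
    out = np.empty(n_inner, dtype=object)
    for i in range(n_inner):
        out[i] = np.random.rand(length)
    return out

# Simulated batch of 2 rows, matching the shapes in the exception above.
batch = pd.Series([nested_row() for _ in range(2)])

# np.stack(batch.values) alone stops at shape (2, 3), so stack each
# row's inner arrays first, then stack the rows.
arr = np.stack([np.stack(row) for row in batch.values])
print(arr.shape)  # (2, 3, 5)
```

This recovers a float ndarray usable with vectorized numpy operations, at the cost of an extra copy per row; if the udf were handed the lists of lists intact, a single `np.stack` would suffice.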