[jira] [Updated] (SPARK-41833) DataFrame.collect() output parity with pyspark

2023-01-02 Thread Sandeep Singh (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-41833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Singh updated SPARK-41833:
----------------------------------
Description: 
{code:java}
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1117, in pyspark.sql.connect.functions.array
Failed example:
    df.select(array('age', 'age').alias("arr")).collect()
Expected:
    [Row(arr=[2, 2]), Row(arr=[5, 5])]
Got:
    [Row(arr=array([2, 2])), Row(arr=array([5, 5]))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1119, in pyspark.sql.connect.functions.array
Failed example:
    df.select(array([df.age, df.age]).alias("arr")).collect()
Expected:
    [Row(arr=[2, 2]), Row(arr=[5, 5])]
Got:
    [Row(arr=array([2, 2])), Row(arr=array([5, 5]))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1124, in pyspark.sql.connect.functions.array_distinct
Failed example:
    df.select(array_distinct(df.data)).collect()
Expected:
    [Row(array_distinct(data)=[1, 2, 3]), Row(array_distinct(data)=[4, 5])]
Got:
    [Row(array_distinct(data)=array([1, 2, 3])), Row(array_distinct(data)=array([4, 5]))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1135, in pyspark.sql.connect.functions.array_except
Failed example:
    df.select(array_except(df.c1, df.c2)).collect()
Expected:
    [Row(array_except(c1, c2)=['b'])]
Got:
    [Row(array_except(c1, c2)=array(['b'], dtype=object))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1142, in pyspark.sql.connect.functions.array_intersect
Failed example:
    df.select(array_intersect(df.c1, df.c2)).collect()
Expected:
    [Row(array_intersect(c1, c2)=['a', 'c'])]
Got:
    [Row(array_intersect(c1, c2)=array(['a', 'c'], dtype=object))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1180, in pyspark.sql.connect.functions.array_remove
Failed example:
    df.select(array_remove(df.data, 1)).collect()
Expected:
    [Row(array_remove(data, 1)=[2, 3]), Row(array_remove(data, 1)=[])]
Got:
    [Row(array_remove(data, 1)=array([2, 3])), Row(array_remove(data, 1)=array([], dtype=int64))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1187, in pyspark.sql.connect.functions.array_repeat
Failed example:
    df.select(array_repeat(df.data, 3).alias('r')).collect()
Expected:
    [Row(r=['ab', 'ab', 'ab'])]
Got:
    [Row(r=array(['ab', 'ab', 'ab'], dtype=object))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1204, in pyspark.sql.connect.functions.array_sort
Failed example:
    df.select(array_sort(df.data).alias('r')).collect()
Expected:
    [Row(r=[1, 2, 3, None]), Row(r=[1]), Row(r=[])]
Got:
    [Row(r=array([ 1.,  2.,  3., nan])), Row(r=array([1])), Row(r=array([], dtype=int64))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1207, in pyspark.sql.connect.functions.array_sort
Failed example:
    df.select(array_sort(
        "data",
        lambda x, y: when(x.isNull() | y.isNull(), lit(0)).otherwise(length(y) - length(x))
    ).alias("r")).collect()
Expected:
    [Row(r=['foobar', 'foo', None, 'bar']), Row(r=['foo']), Row(r=[])]
Got:
    [Row(r=array(['foobar', 'foo', None, 'bar'], dtype=object)), Row(r=array(['foo'], dtype=object)), Row(r=array([], dtype=object))]
**********************************************************************
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1209, in pyspark.sql.connect.functions.array_union
Failed example:
    df.select(array_union(df.c1, df.c2)).collect()
Expected:
    [Row(array_union(c1, c2)=['b', 'a', 'c', 'd', 'f'])]
Got:
    [Row(array_union(c1, c2)=array(['b', 'a', 'c', 'd', 'f'], dtype=object))]{code}
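
All of the failures above share one symptom: Spark Connect materializes array columns as numpy ndarrays (with NaN standing in for NULL after int-to-float widening, as in the array_sort case), while classic PySpark returns plain Python lists with None. A minimal sketch of the value mapping the doctests expect, using a hypothetical post-processing helper rather than the actual Spark Connect conversion path:

{code:python}
import math
import numpy as np

def normalize_value(v):
    # numpy arrays -> plain Python lists, recursing so nested
    # arrays and numpy scalars inside are converted as well
    if isinstance(v, np.ndarray):
        return [normalize_value(x) for x in v.tolist()]
    # numpy scalars (np.int64, np.float64, ...) -> Python scalars
    if isinstance(v, np.generic):
        v = v.item()
    # NaN stands in for SQL NULL after int -> float widening;
    # map it back to None (illustrative only: this would also
    # clobber legitimate NaN values in real float columns)
    if isinstance(v, float) and math.isnan(v):
        return None
    return v

# e.g. normalize_value(np.array([1., 2., 3., float("nan")]))
# -> [1.0, 2.0, 3.0, None], matching the expected [1, 2, 3, None]
# up to the int -> float widening, which a real fix would have to
# prevent at conversion time rather than undo afterwards
{code}

A real fix presumably belongs in the Arrow-to-Python result conversion so the values never become ndarrays at all; the sketch only pins down the mapping that would make Row(arr=array([2, 2])) compare equal to Row(arr=[2, 2]).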

  was:
{code:java}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1117, in pyspark.sql.connect.functions.array
Failed example:
    df.select(array('age', 'age').alias("arr")).collect()
Expected:
    [Row(arr=[2, 2]), Row(arr=[5, 5])]
Got:
    [Row(arr=array([2, 2])), Row(arr=array([5, 5]))]{code}



[jira] [Updated] (SPARK-41833) DataFrame.collect() output parity with pyspark

2023-01-02 Thread Sandeep Singh (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-41833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Singh updated SPARK-41833:
----------------------------------
Description: 
{code:java}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1117, in pyspark.sql.connect.functions.array
Failed example:
    df.select(array('age', 'age').alias("arr")).collect()
Expected:
    [Row(arr=[2, 2]), Row(arr=[5, 5])]
Got:
    [Row(arr=array([2, 2])), Row(arr=array([5, 5]))]{code}

  was:
{code:java}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 584, in pyspark.sql.connect.dataframe.DataFrame.unionByName
Failed example:
    df1.unionByName(df2).show()
Expected:
    +----+----+----+
    |col0|col1|col2|
    +----+----+----+
    |   1|   2|   3|
    |   6|   4|   5|
    +----+----+----+
Got:
    +----+----+----+
    |col0|col1|col2|
    +----+----+----+
    |   1|   2|   3|
    |   4|   5|   6|
    +----+----+----+
    {code}


> DataFrame.collect() output parity with pyspark
> ----------------------------------------------
>
> Key: SPARK-41833
> URL: https://issues.apache.org/jira/browse/SPARK-41833
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
>
> {code:java}
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1117, in pyspark.sql.connect.functions.array
> Failed example:
>     df.select(array('age', 'age').alias("arr")).collect()
> Expected:
>     [Row(arr=[2, 2]), Row(arr=[5, 5])]
> Got:
>     [Row(arr=array([2, 2])), Row(arr=array([5, 5]))]{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org