[jira] [Created] (SPARK-42394) Fix the usage information of bin/spark-sql --help

2023-02-10 Thread Kent Yao (Jira)
Kent Yao created SPARK-42394:


 Summary: Fix the usage information of bin/spark-sql --help
 Key: SPARK-42394
 URL: https://issues.apache.org/jira/browse/SPARK-42394
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.4.0
Reporter: Kent Yao









[jira] [Updated] (SPARK-42394) Fix the usage information of bin/spark-sql --help

2023-02-10 Thread Kent Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kent Yao updated SPARK-42394:
-
Description: Running bin/spark-sql --help tries to connect to HMS and fails with noisy errors

> Fix the usage information of bin/spark-sql --help
> -
>
> Key: SPARK-42394
> URL: https://issues.apache.org/jira/browse/SPARK-42394
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Kent Yao
>Priority: Major
>
> Running bin/spark-sql --help tries to connect to HMS and fails with noisy errors






[jira] [Assigned] (SPARK-42394) Fix the usage information of bin/spark-sql --help

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42394:


Assignee: (was: Apache Spark)

> Fix the usage information of bin/spark-sql --help
> -
>
> Key: SPARK-42394
> URL: https://issues.apache.org/jira/browse/SPARK-42394
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Kent Yao
>Priority: Major
>
> Running bin/spark-sql --help tries to connect to HMS and fails with noisy errors






[jira] [Assigned] (SPARK-42394) Fix the usage information of bin/spark-sql --help

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42394:


Assignee: Apache Spark

> Fix the usage information of bin/spark-sql --help
> -
>
> Key: SPARK-42394
> URL: https://issues.apache.org/jira/browse/SPARK-42394
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Kent Yao
>Assignee: Apache Spark
>Priority: Major
>
> Running bin/spark-sql --help tries to connect to HMS and fails with noisy errors






[jira] [Commented] (SPARK-42394) Fix the usage information of bin/spark-sql --help

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17686969#comment-17686969
 ] 

Apache Spark commented on SPARK-42394:
--

User 'yaooqinn' has created a pull request for this issue:
https://github.com/apache/spark/pull/39966

> Fix the usage information of bin/spark-sql --help
> -
>
> Key: SPARK-42394
> URL: https://issues.apache.org/jira/browse/SPARK-42394
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Kent Yao
>Priority: Major
>
> Running bin/spark-sql --help tries to connect to HMS and fails with noisy errors






[jira] [Created] (SPARK-42395) The code logic of the configmap max size validation lacks extra content

2023-02-10 Thread Wei Yan (Jira)
Wei Yan created SPARK-42395:
---

 Summary: The code logic of the configmap max size validation lacks 
extra content
 Key: SPARK-42395
 URL: https://issues.apache.org/jira/browse/SPARK-42395
 Project: Spark
  Issue Type: Bug
  Components: Kubernetes
Affects Versions: 3.5.0
Reporter: Wei Yan
 Fix For: 3.3.1


In each configmap, Spark adds extra content in a fixed format; this extra
content of the configmap is as below:
  spark.kubernetes.namespace: default
  spark.properties: |
    #Java properties built from Kubernetes config map with name: spark-exec-b47b438630eec12d-conf-map
    #Wed Feb 08 20:10:19 CST 2023
    spark.kubernetes.namespace=default

But the max size validation code logic does not take this extra content into account.
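
A minimal Scala sketch of the point above (illustrative only, not Spark's actual implementation; the object name, method names, and the 1 MiB limit constant are assumptions): any size validation should add the bytes of the auto-generated block before comparing against the limit.

```
// Illustrative sketch only. Kubernetes objects are capped at roughly 1 MiB,
// so the generated spark.properties block must be counted toward the limit too.
object ConfigMapSizeCheck {
  val MaxSizeBytes: Long = 1048576L  // assumed limit, for illustration

  // Approximates the fixed-format content Spark appends to every configmap.
  def estimatedExtraBytes(namespace: String, confMapName: String): Long = {
    val generated =
      s"""spark.kubernetes.namespace: $namespace
         |spark.properties: |
         |  #Java properties built from Kubernetes config map with name: $confMapName
         |  #<build timestamp>
         |  spark.kubernetes.namespace=$namespace
         |""".stripMargin
    generated.getBytes("UTF-8").length.toLong
  }

  def validate(userDataBytes: Long, namespace: String, confMapName: String): Unit = {
    val total = userDataBytes + estimatedExtraBytes(namespace, confMapName)
    require(total <= MaxSizeBytes,
      s"ConfigMap would be $total bytes including generated content; limit is $MaxSizeBytes")
  }
}
```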






[jira] [Commented] (SPARK-42395) The code logic of the configmap max size validation lacks extra content

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17686997#comment-17686997
 ] 

Apache Spark commented on SPARK-42395:
--

User 'ninebigbig' has created a pull request for this issue:
https://github.com/apache/spark/pull/39967

> The code logic of the configmap max size validation lacks extra content
> ---
>
> Key: SPARK-42395
> URL: https://issues.apache.org/jira/browse/SPARK-42395
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.5.0
>Reporter: Wei Yan
>Priority: Major
> Fix For: 3.3.1
>
>
> In each configmap, Spark adds extra content in a fixed format; this extra
> content of the configmap is as below:
>   spark.kubernetes.namespace: default
>   spark.properties: |
>     #Java properties built from Kubernetes config map with name: spark-exec-b47b438630eec12d-conf-map
>     #Wed Feb 08 20:10:19 CST 2023
>     spark.kubernetes.namespace=default
> But the max size validation code logic does not take this extra content into account.






[jira] [Assigned] (SPARK-42395) The code logic of the configmap max size validation lacks extra content

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42395:


Assignee: Apache Spark

> The code logic of the configmap max size validation lacks extra content
> ---
>
> Key: SPARK-42395
> URL: https://issues.apache.org/jira/browse/SPARK-42395
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.5.0
>Reporter: Wei Yan
>Assignee: Apache Spark
>Priority: Major
> Fix For: 3.3.1
>
>
> In each configmap, Spark adds extra content in a fixed format; this extra
> content of the configmap is as below:
>   spark.kubernetes.namespace: default
>   spark.properties: |
>     #Java properties built from Kubernetes config map with name: spark-exec-b47b438630eec12d-conf-map
>     #Wed Feb 08 20:10:19 CST 2023
>     spark.kubernetes.namespace=default
> But the max size validation code logic does not take this extra content into account.






[jira] [Assigned] (SPARK-42395) The code logic of the configmap max size validation lacks extra content

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42395:


Assignee: (was: Apache Spark)

> The code logic of the configmap max size validation lacks extra content
> ---
>
> Key: SPARK-42395
> URL: https://issues.apache.org/jira/browse/SPARK-42395
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 3.5.0
>Reporter: Wei Yan
>Priority: Major
> Fix For: 3.3.1
>
>
> In each configmap, Spark adds extra content in a fixed format; this extra
> content of the configmap is as below:
>   spark.kubernetes.namespace: default
>   spark.properties: |
>     #Java properties built from Kubernetes config map with name: spark-exec-b47b438630eec12d-conf-map
>     #Wed Feb 08 20:10:19 CST 2023
>     spark.kubernetes.namespace=default
> But the max size validation code logic does not take this extra content into account.






[jira] [Created] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Jira
Bjørn Jørgensen created SPARK-42396:
---

 Summary: Upgrade Apache Kafka to 3.4.0
 Key: SPARK-42396
 URL: https://issues.apache.org/jira/browse/SPARK-42396
 Project: Spark
  Issue Type: Dependency upgrade
  Components: Build
Affects Versions: 3.5.0
Reporter: Bjørn Jørgensen


[https://www.cve.org/CVERecord?id=CVE-2023-25194|CVE-2023-25194]






[jira] [Updated] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SPARK-42396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjørn Jørgensen updated SPARK-42396:

Description: 
[CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]  (was: 
[https://www.cve.org/CVERecord?id=CVE-2023-25194|CVE-2023-25194])

> Upgrade Apache Kafka to 3.4.0
> -
>
> Key: SPARK-42396
> URL: https://issues.apache.org/jira/browse/SPARK-42396
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Bjørn Jørgensen
>Priority: Major
>
> [CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]






[jira] [Created] (SPARK-42397) Inconsistent data produced by `FlatMapCoGroupsInPandas`

2023-02-10 Thread Ted Chester Jenks (Jira)
Ted Chester Jenks created SPARK-42397:
-

 Summary: Inconsistent data produced by `FlatMapCoGroupsInPandas`
 Key: SPARK-42397
 URL: https://issues.apache.org/jira/browse/SPARK-42397
 Project: Spark
  Issue Type: Bug
  Components: Pandas API on Spark, SQL
Affects Versions: 3.3.1, 3.3.0
Reporter: Ted Chester Jenks


We are seeing inconsistent data returned when using `FlatMapCoGroupsInPandas`. 
In the PySpark example:

```

    import pandas as pd
    from pyspark.sql import functions as F

    test_df = spark.createDataFrame(
        [
            ["1", "23", "abc", "blah", "def", "1"],
            ["1", "23", "abc", "blah", "def", "1"],
            ["1", "23", "abc", "blah", "def", "2"],
            ["1", "23", "abc", "blah", "def", "2"],
        ],
        ["cluster", "partition", "event", "abc", "def", "one_or_two"]
    )

    df1 = test_df.filter(
        F.col("one_or_two") == "1"
    ).select(
        "cluster", "event", "abc"
    )

    df2 = test_df.filter(
        F.col("one_or_two") == "2"
    ).select(
        "cluster", "event", "def"
    )

    def get_schema(l, r):
        # Each side is a pandas DataFrame for one cogroup; report its columns.
        return pd.DataFrame(
            [(str(l.columns), str(r.columns))],
            columns=["left_colms", "right_colms"]
        )

    grouped_df = df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(
        get_schema, "left_colms string, right_colms string"
    )
    grouped_df_1 = grouped_df.withColumn(
        "xyz", F.lit("1234")
    )

```

When we call `grouped_df.collect()` we get:

```

[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] 

```

When we call `grouped_df.show(5, truncate=False)` we get:

```

+-----------------------------------------+--------------------------------------------------+
|left_colms                               |right_colms                                       |
+-----------------------------------------+--------------------------------------------------+
|Index(['cluster', 'abc'], dtype='object')|Index(['cluster', 'event', 'def'], dtype='object')|
+-----------------------------------------+--------------------------------------------------+

```

When we call `grouped_df_1.collect()` we get:

```

[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 

```

 






[jira] [Updated] (SPARK-42397) Inconsistent data produced by `FlatMapCoGroupsInPandas`

2023-02-10 Thread Ted Chester Jenks (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Chester Jenks updated SPARK-42397:
--
Description: 
We are seeing inconsistent data returned when using `FlatMapCoGroupsInPandas`. 
In the PySpark example:

{{    test_df = spark.createDataFrame(}}
{{        [}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{        ],}}
{{        ["cluster", "partition", "event", "abc", "def", "one_or_two"]}}
{{    )}}
{{    df1 = test_df.filter(}}
{{        F.col("one_or_two") == "1"}}
{{    ).select(}}
{{        "cluster", "event", "abc"}}
{{    )}}{{    df2 = test_df.filter(}}
{{        F.col("one_or_two") == "2"}}
{{    ).select(}}
{{        "cluster", "event", "def"}}
{{    )}}
{{    def get_schema(l, r):}}
{{            return pd.DataFrame(}}
{{                [(str(l.columns), str(r.columns))],}}
{{                columns=["left_colms", "right_colms"]}}
{{            )}}{{   grouped_df = 
df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(}}
{{        get_schema, "left_colms string, right_colms string"}}
{{    )}}
{{    grouped_df_1 = grouped_df.withColumn(}}
{{       "xyz", F.lit("1234")}}
{{     )}}

When we call `grouped_df.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] }}

 

When we call `grouped_df.show(5, truncate=False)` we get:

 

+-----------------------------------------+--------------------------------------------------+
|left_colms                               |right_colms                                       |
+-----------------------------------------+--------------------------------------------------+
|Index(['cluster', 'abc'], dtype='object')|Index(['cluster', 'event', 'def'], dtype='object')|
+-----------------------------------------+--------------------------------------------------+

 

When we call `grouped_df_1.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

 

  was:
We are seeing inconsistent data returned when using `FlatMapCoGroupsInPandas`. 
In the PySpark example:

```

    test_df = spark.createDataFrame(
        [
            ["1", "23", "abc", "blah", "def", "1"],
            ["1", "23", "abc", "blah", "def", "1"],
            ["1", "23", "abc", "blah", "def", "2"],
            ["1", "23", "abc", "blah", "def", "2"],
        ],
        ["cluster", "partition", "event", "abc", "def", "one_or_two"]
    )
    df1 = test_df.filter(
        F.col("one_or_two") == "1"
    ).select(
        "cluster", "event", "abc"
    )

    df2 = test_df.filter(
        F.col("one_or_two") == "2"
    ).select(
        "cluster", "event", "def"
    )
    def get_schema(l, r):
            return pd.DataFrame(
                [(str(l.columns), str(r.columns))],
                columns=["left_colms", "right_colms"]
            )


    grouped_df = df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(
        get_schema, "left_colms string, right_colms string"
    )
    grouped_df_1 = grouped_df.withColumn(
       "xyz", F.lit("1234")
     )

```

When we call `grouped_df.collect()` we get:

```

[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] 

```

When we call `grouped_df.show(5, truncate=False)` we get:

```

+-----------------------------------------+--------------------------------------------------+
|left_colms                               |right_colms                                       |
+-----------------------------------------+--------------------------------------------------+
|Index(['cluster', 'abc'], dtype='object')|Index(['cluster', 'event', 'def'], dtype='object')|
+-----------------------------------------+--------------------------------------------------+

```

When we call `grouped_df_1.collect()` we get:

```

[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 

```

 


> Inconsistent data produced by `FlatMapCoGroupsInPandas`
> ---
>
> Key: SPARK-42397
> URL: https://issues.apache.org/jira/browse/SPARK-42397
> Project: Spark
>  Issue Type: Bug
>  Components: Pandas API on Spark, SQL
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Ted Chester Jenks
>Priority: Minor
>
> We are seeing inconsistent data returned when using 

[jira] [Commented] (SPARK-42397) Inconsistent data produced by `FlatMapCoGroupsInPandas`

2023-02-10 Thread Ted Chester Jenks (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687011#comment-17687011
 ] 

Ted Chester Jenks commented on SPARK-42397:
---

{{    test_df = spark.createDataFrame(}}
{{        [}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{        ],}}
{{        ["cluster", "partition", "event", "abc", "def", "one_or_two"]}}
{{    )}}
{{    df1 = test_df.filter(}}
{{        F.col("one_or_two") == "1"}}
{{    ).select(}}
{{        "cluster", "event", "abc"}}
{{    )}}{{    df2 = test_df.filter(}}
{{        F.col("one_or_two") == "2"}}
{{    ).select(}}
{{        "cluster", "event", "def"}}
{{    )}}
{{    def get_schema(l, r):}}
{{            return pd.DataFrame(}}
{{                [(str(l.columns), str(r.columns))],}}
{{                columns=["left_colms", "right_colms"]}}
{{            )}}{{   grouped_df = 
df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(}}
{{        get_schema, "left_colms string, right_colms string"}}
{{    )}}
{{    grouped_df_1 = grouped_df.withColumn(}}
{{       "xyz", F.lit("1234")}}
{{     )}}

> Inconsistent data produced by `FlatMapCoGroupsInPandas`
> ---
>
> Key: SPARK-42397
> URL: https://issues.apache.org/jira/browse/SPARK-42397
> Project: Spark
>  Issue Type: Bug
>  Components: Pandas API on Spark, SQL
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Ted Chester Jenks
>Priority: Minor
>
> We are seeing inconsistent data returned when using 
> `FlatMapCoGroupsInPandas`. In the PySpark example:
> {{    test_df = spark.createDataFrame(}}
> {{        [}}
> {{            ["1", "23", "abc", "blah", "def", "1"],}}
> {{            ["1", "23", "abc", "blah", "def", "1"],}}
> {{            ["1", "23", "abc", "blah", "def", "2"],}}
> {{            ["1", "23", "abc", "blah", "def", "2"],}}
> {{        ],}}
> {{        ["cluster", "partition", "event", "abc", "def", "one_or_two"]}}
> {{    )}}
> {{    df1 = test_df.filter(}}
> {{        F.col("one_or_two") == "1"}}
> {{    ).select(}}
> {{        "cluster", "event", "abc"}}
> {{    )}}{{    df2 = test_df.filter(}}
> {{        F.col("one_or_two") == "2"}}
> {{    ).select(}}
> {{        "cluster", "event", "def"}}
> {{    )}}
> {{    def get_schema(l, r):}}
> {{            return pd.DataFrame(}}
> {{                [(str(l.columns), str(r.columns))],}}
> {{                columns=["left_colms", "right_colms"]}}
> {{            )}}{{   grouped_df = 
> df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(}}
> {{        get_schema, "left_colms string, right_colms string"}}
> {{    )}}
> {{    grouped_df_1 = grouped_df.withColumn(}}
> {{       "xyz", F.lit("1234")}}
> {{     )}}
> When we call `grouped_df.collect()` we get:
>  
> {{[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
> right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] }}
>  
> When we call `grouped_df.show(5, truncate=False)` we get:
>  
> {{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
> right_colms="Index(['cluster', 'event', 'def'], dtype='object')", 
> xyz='1234')] }}
>  
> When we call `grouped_df_1.collect()` we get:
>  
> {{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
> right_colms="Index(['cluster', 'event', 'def'], dtype='object')", 
> xyz='1234')] }}
>  
>  






[jira] [Updated] (SPARK-42397) Inconsistent data produced by `FlatMapCoGroupsInPandas`

2023-02-10 Thread Ted Chester Jenks (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Chester Jenks updated SPARK-42397:
--
Description: 
We are seeing inconsistent data returned when using `FlatMapCoGroupsInPandas`. 
In the PySpark example from the comments, when we call `grouped_df.collect()` 
we get:

 

{{[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] }}

 

When we call `grouped_df.show(5, truncate=False)` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

When we call `grouped_df_1.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

 

  was:
We are seeing inconsistent data returned when using `FlatMapCoGroupsInPandas`. 
In the PySpark example:

{{    test_df = spark.createDataFrame(}}
{{        [}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{        ],}}
{{        ["cluster", "partition", "event", "abc", "def", "one_or_two"]}}
{{    )}}
{{    df1 = test_df.filter(}}
{{        F.col("one_or_two") == "1"}}
{{    ).select(}}
{{        "cluster", "event", "abc"}}
{{    )}}{{    df2 = test_df.filter(}}
{{        F.col("one_or_two") == "2"}}
{{    ).select(}}
{{        "cluster", "event", "def"}}
{{    )}}
{{    def get_schema(l, r):}}
{{            return pd.DataFrame(}}
{{                [(str(l.columns), str(r.columns))],}}
{{                columns=["left_colms", "right_colms"]}}
{{            )}}{{   grouped_df = 
df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(}}
{{        get_schema, "left_colms string, right_colms string"}}
{{    )}}
{{    grouped_df_1 = grouped_df.withColumn(}}
{{       "xyz", F.lit("1234")}}
{{     )}}

When we call `grouped_df.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] }}

 

When we call `grouped_df.show(5, truncate=False)` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

When we call `grouped_df_1.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

 


> Inconsistent data produced by `FlatMapCoGroupsInPandas`
> ---
>
> Key: SPARK-42397
> URL: https://issues.apache.org/jira/browse/SPARK-42397
> Project: Spark
>  Issue Type: Bug
>  Components: Pandas API on Spark, SQL
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Ted Chester Jenks
>Priority: Minor
>
> We are seeing inconsistent data returned when using 
> `FlatMapCoGroupsInPandas`. In the PySpark example from the comments, when we 
> call `grouped_df.collect()` we get:
>  
> {{[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
> right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] }}
>  
> When we call `grouped_df.show(5, truncate=False)` we get:
>  
> {{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
> right_colms="Index(['cluster', 'event', 'def'], dtype='object')", 
> xyz='1234')] }}
>  
> When we call `grouped_df_1.collect()` we get:
>  
> {{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
> right_colms="Index(['cluster', 'event', 'def'], dtype='object')", 
> xyz='1234')] }}
>  
>  






[jira] [Updated] (SPARK-42397) Inconsistent data produced by `FlatMapCoGroupsInPandas`

2023-02-10 Thread Ted Chester Jenks (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Chester Jenks updated SPARK-42397:
--
Description: 
We are seeing inconsistent data returned when using `FlatMapCoGroupsInPandas`. 
In the PySpark example:

{{    test_df = spark.createDataFrame(}}
{{        [}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{        ],}}
{{        ["cluster", "partition", "event", "abc", "def", "one_or_two"]}}
{{    )}}
{{    df1 = test_df.filter(}}
{{        F.col("one_or_two") == "1"}}
{{    ).select(}}
{{        "cluster", "event", "abc"}}
{{    )}}{{    df2 = test_df.filter(}}
{{        F.col("one_or_two") == "2"}}
{{    ).select(}}
{{        "cluster", "event", "def"}}
{{    )}}
{{    def get_schema(l, r):}}
{{            return pd.DataFrame(}}
{{                [(str(l.columns), str(r.columns))],}}
{{                columns=["left_colms", "right_colms"]}}
{{            )}}{{   grouped_df = 
df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(}}
{{        get_schema, "left_colms string, right_colms string"}}
{{    )}}
{{    grouped_df_1 = grouped_df.withColumn(}}
{{       "xyz", F.lit("1234")}}
{{     )}}

When we call `grouped_df.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] }}

 

When we call `grouped_df.show(5, truncate=False)` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

When we call `grouped_df_1.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

 

  was:
We are seeing inconsistent data returned when using `FlatMapCoGroupsInPandas`. 
In the PySpark example:

{{    test_df = spark.createDataFrame(}}
{{        [}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "1"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{            ["1", "23", "abc", "blah", "def", "2"],}}
{{        ],}}
{{        ["cluster", "partition", "event", "abc", "def", "one_or_two"]}}
{{    )}}
{{    df1 = test_df.filter(}}
{{        F.col("one_or_two") == "1"}}
{{    ).select(}}
{{        "cluster", "event", "abc"}}
{{    )}}{{    df2 = test_df.filter(}}
{{        F.col("one_or_two") == "2"}}
{{    ).select(}}
{{        "cluster", "event", "def"}}
{{    )}}
{{    def get_schema(l, r):}}
{{            return pd.DataFrame(}}
{{                [(str(l.columns), str(r.columns))],}}
{{                columns=["left_colms", "right_colms"]}}
{{            )}}{{   grouped_df = 
df1.groupBy("cluster").cogroup(df2.groupBy("cluster")).applyInPandas(}}
{{        get_schema, "left_colms string, right_colms string"}}
{{    )}}
{{    grouped_df_1 = grouped_df.withColumn(}}
{{       "xyz", F.lit("1234")}}
{{     )}}

When we call `grouped_df.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'event', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')")] }}

 

When we call `grouped_df.show(5, truncate=False)` we get:

 

+-----------------------------------------+--------------------------------------------------+
|left_colms                               |right_colms                                       |
+-----------------------------------------+--------------------------------------------------+
|Index(['cluster', 'abc'], dtype='object')|Index(['cluster', 'event', 'def'], dtype='object')|
+-----------------------------------------+--------------------------------------------------+

 

When we call `grouped_df_1.collect()` we get:

 

{{[Row(left_colms="Index(['cluster', 'abc'], dtype='object')", 
right_colms="Index(['cluster', 'event', 'def'], dtype='object')", xyz='1234')] 
}}

 

 


> Inconsistent data produced by `FlatMapCoGroupsInPandas`
> ---
>
> Key: SPARK-42397
> URL: https://issues.apache.org/jira/browse/SPARK-42397
> Project: Spark
>  Issue Type: Bug
>  Components: Pandas API on Spark, SQL
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Ted Chester Jenks
>Priority: Minor
>
> We are seeing inconsistent data returned when using 
> `FlatMapCoGroupsInPandas`. In the PySpark example:
> {{    test_df = spark.createDataFrame(}}
> {{        [}}
> {{            ["1", "23", "abc", "blah", "def", "1"],}}
> {{            ["1", "23", "abc", "blah", "def", "1"],}}

[jira] [Commented] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Jira


[ 
https://issues.apache.org/jira/browse/SPARK-42396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687029#comment-17687029
 ] 

Bjørn Jørgensen commented on SPARK-42396:
-

[error] 
/home/runner/work/spark/spark/connector/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaRDDSuite.scala:106:11:
 type mismatch;
[error]  found   : Int(2147483647)
[error]  required: kafka.log.ProducerStateManagerConfig
[error]   Int.MaxValue,
[error]   ^
[error] one error found
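
The error above indicates that Kafka 3.4 changed this constructor parameter from an Int to a kafka.log.ProducerStateManagerConfig. A hedged sketch of the kind of change the test would need (the wrapped constructor call below is an assumption; the actual Spark patch may differ):

```
// Sketch only, not the actual fix: where KafkaRDDSuite previously passed
// Int.MaxValue (a producer-id expiration in ms), Kafka 3.4 expects the value
// wrapped in a ProducerStateManagerConfig. The exact constructor arguments
// are assumed here.
import kafka.log.ProducerStateManagerConfig

val producerStateManagerConfig = new ProducerStateManagerConfig(Int.MaxValue)
// ... pass producerStateManagerConfig where the raw Int.MaxValue used to go.
```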

> Upgrade Apache Kafka to 3.4.0
> -
>
> Key: SPARK-42396
> URL: https://issues.apache.org/jira/browse/SPARK-42396
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Bjørn Jørgensen
>Priority: Major
>
> [CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]






[jira] [Resolved] (SPARK-42394) Fix the usage information of bin/spark-sql --help

2023-02-10 Thread Kent Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kent Yao resolved SPARK-42394.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 39966
[https://github.com/apache/spark/pull/39966]

> Fix the usage information of bin/spark-sql --help
> -
>
> Key: SPARK-42394
> URL: https://issues.apache.org/jira/browse/SPARK-42394
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
> Fix For: 3.4.0
>
>
> Running bin/spark-sql --help tries to connect to HMS and fails with noisy errors






[jira] [Assigned] (SPARK-42394) Fix the usage information of bin/spark-sql --help

2023-02-10 Thread Kent Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kent Yao reassigned SPARK-42394:


Assignee: Kent Yao

> Fix the usage information of bin/spark-sql --help
> -
>
> Key: SPARK-42394
> URL: https://issues.apache.org/jira/browse/SPARK-42394
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
>
> Running bin/spark-sql --help tries to connect to HMS and fails with noisy errors






[jira] [Created] (SPARK-42398) refine default column value framework

2023-02-10 Thread Wenchen Fan (Jira)
Wenchen Fan created SPARK-42398:
---

 Summary: refine default column value framework
 Key: SPARK-42398
 URL: https://issues.apache.org/jira/browse/SPARK-42398
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.4.0
Reporter: Wenchen Fan









[jira] [Assigned] (SPARK-42398) refine default column value framework

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42398:


Assignee: Apache Spark

> refine default column value framework
> -
>
> Key: SPARK-42398
> URL: https://issues.apache.org/jira/browse/SPARK-42398
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Wenchen Fan
>Assignee: Apache Spark
>Priority: Major
>







[jira] [Assigned] (SPARK-42398) refine default column value framework

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42398:


Assignee: (was: Apache Spark)

> refine default column value framework
> -
>
> Key: SPARK-42398
> URL: https://issues.apache.org/jira/browse/SPARK-42398
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Wenchen Fan
>Priority: Major
>







[jira] [Commented] (SPARK-42398) refine default column value framework

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687124#comment-17687124
 ] 

Apache Spark commented on SPARK-42398:
--

User 'cloud-fan' has created a pull request for this issue:
https://github.com/apache/spark/pull/39942

> refine default column value framework
> -
>
> Key: SPARK-42398
> URL: https://issues.apache.org/jira/browse/SPARK-42398
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Wenchen Fan
>Priority: Major
>







[jira] [Resolved] (SPARK-42162) Memory usage on executors increased drastically for a complex query with large number of addition operations

2023-02-10 Thread Wenchen Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenchen Fan resolved SPARK-42162.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 39722
[https://github.com/apache/spark/pull/39722]

> Memory usage on executors increased drastically for a complex query with 
> large number of addition operations
> 
>
> Key: SPARK-42162
> URL: https://issues.apache.org/jira/browse/SPARK-42162
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Supun Nakandala
>Priority: Major
> Fix For: 3.4.0
>
>
> With the [recent changes|https://github.com/apache/spark/pull/37851]  in the 
> expression canonicalization, a complex query with a large number of Add 
> operations ends up consuming 10x more memory on the executors.
> The reason for this issue is that with the new changes the canonicalization 
> process ends up generating a lot of intermediate objects, especially for 
> complex queries with a large number of commutative operators. In this 
> specific case, a heap histogram analysis shows that a large number of Add 
> objects use the extra memory.
> This issue does not happen before PR 
> [#37851.|https://github.com/apache/spark/pull/37851]
> The high memory usage causes the executors to lose heartbeat signals and 
> results in task failures.
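
A hedged Scala sketch of a query with the shape described in the quoted description (a hypothetical repro, not taken from the ticket): a long chain of `+` over many columns gives canonicalization a large set of commutative Add nodes to reorder.

```
// Hypothetical repro shape only; column counts and names are made up.
// The long addition chain is what canonicalization rewrites, and per this
// ticket that rewrite generated many intermediate Add objects.
val numCols = 500
val wide = spark.range(1000000L).selectExpr(
  (1 to numCols).map(i => s"id + $i AS c$i"): _*
)
val sumExpr = (1 to numCols).map(i => s"c$i").mkString(" + ")
wide.selectExpr(s"$sumExpr AS total")
  .write.format("noop").mode("overwrite").save()
```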






[jira] [Assigned] (SPARK-42162) Memory usage on executors increased drastically for a complex query with large number of addition operations

2023-02-10 Thread Wenchen Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenchen Fan reassigned SPARK-42162:
---

Assignee: Supun Nakandala

> Memory usage on executors increased drastically for a complex query with 
> large number of addition operations
> 
>
> Key: SPARK-42162
> URL: https://issues.apache.org/jira/browse/SPARK-42162
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Supun Nakandala
>Assignee: Supun Nakandala
>Priority: Major
> Fix For: 3.4.0
>
>
> With the [recent changes|https://github.com/apache/spark/pull/37851]  in the 
> expression canonicalization, a complex query with a large number of Add 
> operations ends up consuming 10x more memory on the executors.
> The reason for this issue is that with the new changes the canonicalization 
> process ends up generating a lot of intermediate objects, especially for 
> complex queries with a large number of commutative operators. In this 
> specific case, a heap histogram analysis shows that a large number of Add 
> objects use the extra memory.
> This issue does not happen before PR 
> [#37851.|https://github.com/apache/spark/pull/37851]
> The high memory usage causes the executors to lose heartbeat signals and 
> results in task failures.






[jira] [Commented] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Jira


[ 
https://issues.apache.org/jira/browse/SPARK-42396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687170#comment-17687170
 ] 

Bjørn Jørgensen commented on SPARK-42396:
-

CC [~dongjoon] 

> Upgrade Apache Kafka to 3.4.0
> -
>
> Key: SPARK-42396
> URL: https://issues.apache.org/jira/browse/SPARK-42396
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Bjørn Jørgensen
>Priority: Major
>
> [CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]






[jira] [Commented] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687236#comment-17687236
 ] 

Dongjoon Hyun commented on SPARK-42396:
---

Thank you for working on this.

> Upgrade Apache Kafka to 3.4.0
> -
>
> Key: SPARK-42396
> URL: https://issues.apache.org/jira/browse/SPARK-42396
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Bjørn Jørgensen
>Priority: Major
>
> [CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]






[jira] [Created] (SPARK-42399) CONV() silently overflows returning wrong results

2023-02-10 Thread Serge Rielau (Jira)
Serge Rielau created SPARK-42399:


 Summary: CONV() silently overflows returning wrong results
 Key: SPARK-42399
 URL: https://issues.apache.org/jira/browse/SPARK-42399
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 3.4.0
Reporter: Serge Rielau


spark-sql> SELECT 
CONV(SUBSTRING('0x',
 3), 16, 10);

18446744073709551615

Time taken: 2.114 seconds, Fetched 1 row(s)

spark-sql> set spark.sql.ansi.enabled = true;

spark.sql.ansi.enabled true

Time taken: 0.068 seconds, Fetched 1 row(s)

spark-sql> SELECT 
CONV(SUBSTRING('0x',
 3), 16, 10);

18446744073709551615

Time taken: 0.05 seconds, Fetched 1 row(s)


In ANSI mode we should raise an error for sure.
In non-ANSI mode, either an error or a NULL may be acceptable.

Alternatively, of course, we could consider supporting arbitrary domains, 
since the result is a STRING anyway.
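
For reference, 18446744073709551615 is 2^64 - 1 (the unsigned 64-bit maximum), so the result is being clamped at 64 bits rather than rejected. A small spark-shell sketch of the same behavior with a hypothetical input (the hex literal in the session above is truncated, so the input below is an assumption):

```
// Hypothetical input: any hex string wider than 64 bits. Under ANSI mode one
// would expect an overflow error; the behavior reported here is a silently
// clamped result of 18446744073709551615 (2^64 - 1).
spark.sql("SET spark.sql.ansi.enabled=true")
spark.sql("SELECT CONV('FFFFFFFFFFFFFFFFFF', 16, 10) AS converted").show(false)
```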






[jira] [Commented] (SPARK-42376) Introduce watermark propagation among operators

2023-02-10 Thread Alex Balikov (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687246#comment-17687246
 ] 

Alex Balikov commented on SPARK-42376:
--

Yay! Congrats on getting this out!

> Introduce watermark propagation among operators
> ---
>
> Key: SPARK-42376
> URL: https://issues.apache.org/jira/browse/SPARK-42376
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.5.0
>Reporter: Jungtaek Lim
>Priority: Major
>
> With introduction of SPARK-40925, we enabled workloads containing multiple 
> stateful operators in a single streaming query.
> The JIRA ticket clearly described out-of-scope, "Here we propose fixing the 
> late record filtering in stateful operators to allow chaining of stateful 
> operators {*}which do not produce delayed records (like time-interval join or 
> potentially flatMapGroupsWithState){*}".
> We identified production use case for stream-stream time-interval join 
> followed by stateful operator (e.g. window aggregation), and propose to 
> address such use case via this ticket.
> The design will be described in the PR, but the sketched idea is introducing 
> simulation of watermark propagation among operators. As of now, Spark 
> considers all stateful operators to have the same input and output watermark, 
> which introduced the limitation. With this ticket, we construct the logic to 
> simulate watermark propagation so that each operator can have its own 
> (input watermark, output watermark). Operators introducing delayed records 
> will produce a delayed output watermark, and downstream operators can take 
> the delay into account as their input watermark will be adjusted.
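
A minimal Scala sketch of the simulation idea in the quoted description (an illustration under stated assumptions, not the design in the PR): each operator's input watermark is the minimum of its upstream operators' output watermarks, and an operator that can emit delayed records (e.g. a time-interval join) publishes an output watermark reduced by its maximum delay.

```
// Illustrative model only: operators are identified by name, delays are the
// maximum event-time delay an operator can introduce, and the graph is given
// in topological order. Not Spark's actual implementation.
case class OpWatermarks(inputMs: Long, outputMs: Long)

def simulateWatermarks(
    topoOrder: Seq[String],
    upstreams: Map[String, Seq[String]],
    delayMs: Map[String, Long],
    sourceWatermarkMs: Long): Map[String, OpWatermarks] = {
  topoOrder.foldLeft(Map.empty[String, OpWatermarks]) { (acc, op) =>
    // Input watermark: min of upstream output watermarks (or the source watermark).
    val input = upstreams.getOrElse(op, Seq.empty) match {
      case Seq() => sourceWatermarkMs
      case ups   => ups.map(u => acc(u).outputMs).min
    }
    // Output watermark: delayed by whatever lateness this operator can introduce.
    val output = input - delayMs.getOrElse(op, 0L)
    acc + (op -> OpWatermarks(input, output))
  }
}

// Example: a time-interval join that can delay records by 10 minutes, followed
// by a window aggregation; the aggregation sees an adjusted input watermark.
val wms = simulateWatermarks(
  topoOrder = Seq("join", "agg"),
  upstreams = Map("agg" -> Seq("join")),
  delayMs = Map("join" -> 10L * 60 * 1000),
  sourceWatermarkMs = 1700000000000L)
```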






[jira] [Created] (SPARK-42400) Code clean up in org.apache.spark.storage

2023-02-10 Thread Khalid Mammadov (Jira)
Khalid Mammadov created SPARK-42400:
---

 Summary: Code clean up in org.apache.spark.storage
 Key: SPARK-42400
 URL: https://issues.apache.org/jira/browse/SPARK-42400
 Project: Spark
  Issue Type: Improvement
  Components: Block Manager
Affects Versions: 3.4.0
Reporter: Khalid Mammadov









[jira] [Assigned] (SPARK-42400) Code clean up in org.apache.spark.storage

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42400:


Assignee: (was: Apache Spark)

> Code clean up in org.apache.spark.storage
> -
>
> Key: SPARK-42400
> URL: https://issues.apache.org/jira/browse/SPARK-42400
> Project: Spark
>  Issue Type: Improvement
>  Components: Block Manager
>Affects Versions: 3.4.0
>Reporter: Khalid Mammadov
>Priority: Major
>







[jira] [Commented] (SPARK-42400) Code clean up in org.apache.spark.storage

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687279#comment-17687279
 ] 

Apache Spark commented on SPARK-42400:
--

User 'khalidmammadov' has created a pull request for this issue:
https://github.com/apache/spark/pull/39932

> Code clean up in org.apache.spark.storage
> -
>
> Key: SPARK-42400
> URL: https://issues.apache.org/jira/browse/SPARK-42400
> Project: Spark
>  Issue Type: Improvement
>  Components: Block Manager
>Affects Versions: 3.4.0
>Reporter: Khalid Mammadov
>Priority: Major
>







[jira] [Assigned] (SPARK-42400) Code clean up in org.apache.spark.storage

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42400:


Assignee: Apache Spark

> Code clean up in org.apache.spark.storage
> -
>
> Key: SPARK-42400
> URL: https://issues.apache.org/jira/browse/SPARK-42400
> Project: Spark
>  Issue Type: Improvement
>  Components: Block Manager
>Affects Versions: 3.4.0
>Reporter: Khalid Mammadov
>Assignee: Apache Spark
>Priority: Major
>







[jira] [Assigned] (SPARK-42265) DataFrame.createTempView - SparkConnectGrpcException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42265:


Assignee: Apache Spark

> DataFrame.createTempView - SparkConnectGrpcException: requirement failed
> 
>
> Key: SPARK-42265
> URL: https://issues.apache.org/jira/browse/SPARK-42265
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Assignee: Apache Spark
>Priority: Major
>
> To reproduce,
> ```
> spark.range(1).filter(udf(lambda x: x)("id") >= 0).createTempView("v")
> ```






[jira] [Commented] (SPARK-42265) DataFrame.createTempView - SparkConnectGrpcException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687280#comment-17687280
 ] 

Apache Spark commented on SPARK-42265:
--

User 'ueshin' has created a pull request for this issue:
https://github.com/apache/spark/pull/39968

> DataFrame.createTempView - SparkConnectGrpcException: requirement failed
> 
>
> Key: SPARK-42265
> URL: https://issues.apache.org/jira/browse/SPARK-42265
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Priority: Major
>
> To reproduce,
> ```
> spark.range(1).filter(udf(lambda x: x)("id") >= 0).createTempView("v")
> ```






[jira] [Assigned] (SPARK-42265) DataFrame.createTempView - SparkConnectGrpcException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42265:


Assignee: (was: Apache Spark)

> DataFrame.createTempView - SparkConnectGrpcException: requirement failed
> 
>
> Key: SPARK-42265
> URL: https://issues.apache.org/jira/browse/SPARK-42265
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Priority: Major
>
> To reproduce,
> ```
> spark.range(1).filter(udf(lambda x: x)("id") >= 0).createTempView("v")
> ```






[jira] [Assigned] (SPARK-41820) DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41820:


Assignee: Apache Spark

> DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement 
> failed
> ---
>
> Key: SPARK-41820
> URL: https://issues.apache.org/jira/browse/SPARK-41820
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Assignee: Apache Spark
>Priority: Major
>
> {code:java}
> >>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", 
> >>> "name"])
> >>> df.createOrReplaceGlobalTempView("people") {code}
> {code:java}
> File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1292, in 
> pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView
> Failed example:
>     df2.createOrReplaceGlobalTempView("people")
> Exception raised:
>     Traceback (most recent call last):
>       File 
> "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py",
>  line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File "<doctest pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView[3]>", 
> line 1, in <module>
>         df2.createOrReplaceGlobalTempView("people")
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1192, in createOrReplaceGlobalTempView
>         self._session.client.execute_command(command)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 459, in execute_command
>         self._execute(req)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 547, in _execute
>         self._handle_error(rpc_error)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 625, in _handle_error
>         raise SparkConnectException(status.message) from None
>     pyspark.sql.connect.client.SparkConnectException: requirement failed 
> {code}






[jira] [Commented] (SPARK-41820) DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687281#comment-17687281
 ] 

Apache Spark commented on SPARK-41820:
--

User 'ueshin' has created a pull request for this issue:
https://github.com/apache/spark/pull/39968

> DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement 
> failed
> ---
>
> Key: SPARK-41820
> URL: https://issues.apache.org/jira/browse/SPARK-41820
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
>
> {code:java}
> >>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", 
> >>> "name"])
> >>> df.createOrReplaceGlobalTempView("people") {code}
> {code:java}
> File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1292, in 
> pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView
> Failed example:
>     df2.createOrReplaceGlobalTempView("people")
> Exception raised:
>     Traceback (most recent call last):
>       File 
> "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py",
>  line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File "<doctest pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView[3]>", 
> line 1, in <module>
>         df2.createOrReplaceGlobalTempView("people")
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1192, in createOrReplaceGlobalTempView
>         self._session.client.execute_command(command)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 459, in execute_command
>         self._execute(req)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 547, in _execute
>         self._handle_error(rpc_error)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 625, in _handle_error
>         raise SparkConnectException(status.message) from None
>     pyspark.sql.connect.client.SparkConnectException: requirement failed 
> {code}






[jira] [Assigned] (SPARK-41820) DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41820:


Assignee: (was: Apache Spark)

> DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement 
> failed
> ---
>
> Key: SPARK-41820
> URL: https://issues.apache.org/jira/browse/SPARK-41820
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
>
> {code:java}
> >>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", 
> >>> "name"])
> >>> df.createOrReplaceGlobalTempView("people") {code}
> {code:java}
> File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1292, in 
> pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView
> Failed example:
>     df2.createOrReplaceGlobalTempView("people")
> Exception raised:
>     Traceback (most recent call last):
>       File 
> "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py",
>  line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File "<doctest pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView[3]>", 
> line 1, in <module>
>         df2.createOrReplaceGlobalTempView("people")
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1192, in createOrReplaceGlobalTempView
>         self._session.client.execute_command(command)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 459, in execute_command
>         self._execute(req)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 547, in _execute
>         self._handle_error(rpc_error)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 625, in _handle_error
>         raise SparkConnectException(status.message) from None
>     pyspark.sql.connect.client.SparkConnectException: requirement failed 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-41820) DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement failed

2023-02-10 Thread Takuya Ueshin (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takuya Ueshin updated SPARK-41820:
--
Description: 
{code:java}
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", 
>>> "name"])
>>> df2 = df.filter(df.age > 3)
>>> df2.createOrReplaceGlobalTempView("people") {code}
{code:java}
File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
line 1292, in 
pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView
Failed example:
    df2.createOrReplaceGlobalTempView("people")
Exception raised:
    Traceback (most recent call last):
      File 
"/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py",
 line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "", 
line 1, in 
        df2.createOrReplaceGlobalTempView("people")
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
line 1192, in createOrReplaceGlobalTempView
        self._session.client.execute_command(command)
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 
459, in execute_command
        self._execute(req)
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 
547, in _execute
        self._handle_error(rpc_error)
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 
625, in _handle_error
        raise SparkConnectException(status.message) from None
    pyspark.sql.connect.client.SparkConnectException: requirement failed 

{code}

  was:
{code:java}
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", 
>>> "name"])
>>> df.createOrReplaceGlobalTempView("people") {code}
{code:java}
File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
line 1292, in 
pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView
Failed example:
    df2.createOrReplaceGlobalTempView("people")
Exception raised:
    Traceback (most recent call last):
      File 
"/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py",
 line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "", 
line 1, in 
        df2.createOrReplaceGlobalTempView("people")
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
line 1192, in createOrReplaceGlobalTempView
        self._session.client.execute_command(command)
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 
459, in execute_command
        self._execute(req)
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 
547, in _execute
        self._handle_error(rpc_error)
      File 
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 
625, in _handle_error
        raise SparkConnectException(status.message) from None
    pyspark.sql.connect.client.SparkConnectException: requirement failed 

{code}
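
For context, a minimal sketch of the same sequence against a regular (non-Connect) local PySpark session, where createOrReplaceGlobalTempView is expected to succeed and the view becomes queryable under the reserved global_temp database. The local master setting below is only for illustration:

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").getOrCreate()

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
df2 = df.filter(df.age > 3)
df2.createOrReplaceGlobalTempView("people")

# Global temp views are registered under the reserved global_temp database and
# stay visible to other sessions of the same Spark application.
spark.sql("SELECT * FROM global_temp.people").show()
{code}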


> DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement 
> failed
> ---
>
> Key: SPARK-41820
> URL: https://issues.apache.org/jira/browse/SPARK-41820
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
>
> {code:java}
> >>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", 
> >>> "name"])
> >>> df2 = df.filter(df.age > 3)
> >>> df2.createOrReplaceGlobalTempView("people") {code}
> {code:java}
> File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1292, in 
> pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView
> Failed example:
>     df2.createOrReplaceGlobalTempView("people")
> Exception raised:
>     Traceback (most recent call last):
>       File 
> "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py",
>  line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File " pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView[3]>", 
> line 1, in 
>         df2.createOrReplaceGlobalTempView("people")
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1192, in createOrReplaceGlobalTempView
>         self._session.client.execute_command(command)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 459, in execute_command
>         self._execute(req)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> l

[jira] [Commented] (SPARK-42265) DataFrame.createTempView - SparkConnectGrpcException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687282#comment-17687282
 ] 

Apache Spark commented on SPARK-42265:
--

User 'ueshin' has created a pull request for this issue:
https://github.com/apache/spark/pull/39968

> DataFrame.createTempView - SparkConnectGrpcException: requirement failed
> 
>
> Key: SPARK-42265
> URL: https://issues.apache.org/jira/browse/SPARK-42265
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Priority: Major
>
> To reproduce,
> ```
> spark.range(1).filter(udf(lambda x: x)("id") >= 0).createTempView("v")
> ```
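
For anyone trying the snippet above outside the test suite, a hedged, self-contained sketch of the reproduction is below. The Connect endpoint address is a placeholder and must point at a running Spark Connect server:

{code:python}
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf

# Hypothetical endpoint; any reachable Spark Connect server will do.
spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

# Creating a temp view over a plan that contains a Python UDF is what triggers
# the "requirement failed" error reported in this ticket.
spark.range(1).filter(udf(lambda x: x)("id") >= 0).createTempView("v")
{code}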



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-41820) DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement failed

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687283#comment-17687283
 ] 

Apache Spark commented on SPARK-41820:
--

User 'ueshin' has created a pull request for this issue:
https://github.com/apache/spark/pull/39968

> DataFrame.createOrReplaceGlobalTempView - SparkConnectException: requirement 
> failed
> ---
>
> Key: SPARK-41820
> URL: https://issues.apache.org/jira/browse/SPARK-41820
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
>
> {code:java}
> >>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", 
> >>> "name"])
> >>> df2 = df.filter(df.age > 3)
> >>> df2.createOrReplaceGlobalTempView("people") {code}
> {code:java}
> File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1292, in 
> pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView
> Failed example:
>     df2.createOrReplaceGlobalTempView("people")
> Exception raised:
>     Traceback (most recent call last):
>       File 
> "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py",
>  line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File " pyspark.sql.connect.dataframe.DataFrame.createOrReplaceGlobalTempView[3]>", 
> line 1, in 
>         df2.createOrReplaceGlobalTempView("people")
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", 
> line 1192, in createOrReplaceGlobalTempView
>         self._session.client.execute_command(command)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 459, in execute_command
>         self._execute(req)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 547, in _execute
>         self._handle_error(rpc_error)
>       File 
> "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", 
> line 625, in _handle_error
>         raise SparkConnectException(status.message) from None
>     pyspark.sql.connect.client.SparkConnectException: requirement failed 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42396:


Assignee: Apache Spark

> Upgrade Apache Kafka to 3.4.0
> -
>
> Key: SPARK-42396
> URL: https://issues.apache.org/jira/browse/SPARK-42396
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Bjørn Jørgensen
>Assignee: Apache Spark
>Priority: Major
>
> [CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42396:


Assignee: (was: Apache Spark)

> Upgrade Apache Kafka to 3.4.0
> -
>
> Key: SPARK-42396
> URL: https://issues.apache.org/jira/browse/SPARK-42396
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Bjørn Jørgensen
>Priority: Major
>
> [CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42396) Upgrade Apache Kafka to 3.4.0

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687295#comment-17687295
 ] 

Apache Spark commented on SPARK-42396:
--

User 'bjornjorgensen' has created a pull request for this issue:
https://github.com/apache/spark/pull/39969

> Upgrade Apache Kafka to 3.4.0
> -
>
> Key: SPARK-42396
> URL: https://issues.apache.org/jira/browse/SPARK-42396
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Bjørn Jørgensen
>Priority: Major
>
> [CVE-2023-25194|https://www.cve.org/CVERecord?id=CVE-2023-25194]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-42401) Incorrect results or NPE when inserting null value using array_insert/array_append

2023-02-10 Thread Bruce Robbins (Jira)
Bruce Robbins created SPARK-42401:
-

 Summary: Incorrect results or NPE when inserting null value using 
array_insert/array_append
 Key: SPARK-42401
 URL: https://issues.apache.org/jira/browse/SPARK-42401
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.4.0, 3.5.0
Reporter: Bruce Robbins


Example:
{noformat}
create or replace temp view v1 as
select * from values
(array(1, 2, 3, 4), 5, 5),
(array(1, 2, 3, 4), 5, null)
as v1(col1,col2,col3);

select array_insert(col1, col2, col3) from v1;
{noformat}
This produces an incorrect result:
{noformat}
[1,2,3,4,5]
[1,2,3,4,0] <== should be [1,2,3,4,null]
{noformat}
A more succinct example:
{noformat}
select array_insert(array(1, 2, 3, 4), 5, cast(null as int));
{noformat}
This also produces an incorrect result:
{noformat}
[1,2,3,4,0] <== should be [1,2,3,4,null]
{noformat}
Another example:
{noformat}
create or replace temp view v1 as
select * from values
(array('1', '2', '3', '4'), 5, '5'),
(array('1', '2', '3', '4'), 5, null)
as v1(col1,col2,col3);

select array_insert(col1, col2, col3) from v1;
{noformat}
The above query throws a {{NullPointerException}}:
{noformat}
23/02/10 11:08:05 ERROR SparkSQLDriver: Failed in [select array_insert(col1, 
col2, col3) from v1]
java.lang.NullPointerException
at 
org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
 Source)
at 
org.apache.spark.sql.execution.LocalTableScanExec.$anonfun$unsafeRows$1(LocalTableScanExec.scala:44)
{noformat}
{{array_append}} has the same issue:
{noformat}
spark-sql> select array_append(array(1, 2, 3, 4), cast(null as int));
[1,2,3,4,0] <== should be [1,2,3,4,null]
Time taken: 3.679 seconds, Fetched 1 row(s)
spark-sql> select array_append(array('1', '2', '3', '4'), cast(null as string));
23/02/10 11:13:36 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.lang.NullPointerException
at 
org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown
 Source)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
 Source)
{noformat}
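
A quick way to check whether a given build is affected, assuming a Spark 3.4.x build where array_insert/array_append exist. This is only a verification sketch, not part of the fix:

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").getOrCreate()

row = spark.sql(
    "SELECT array_insert(array(1, 2, 3, 4), 5, CAST(NULL AS INT)) AS arr"
).head()
print(row.arr)  # expected [1, 2, 3, 4, None]; affected builds print a trailing 0

row = spark.sql(
    "SELECT array_append(array(1, 2, 3, 4), CAST(NULL AS INT)) AS arr"
).head()
print(row.arr)  # expected [1, 2, 3, 4, None]
{code}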




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42401) Incorrect results or NPE when inserting null value using array_insert/array_append

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42401:


Assignee: Apache Spark

> Incorrect results or NPE when inserting null value using 
> array_insert/array_append
> --
>
> Key: SPARK-42401
> URL: https://issues.apache.org/jira/browse/SPARK-42401
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Bruce Robbins
>Assignee: Apache Spark
>Priority: Major
>
> Example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array(1, 2, 3, 4), 5, 5),
> (array(1, 2, 3, 4), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> This produces an incorrect result:
> {noformat}
> [1,2,3,4,5]
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> A more succinct example:
> {noformat}
> select array_insert(array(1, 2, 3, 4), 5, cast(null as int));
> {noformat}
> This also produces an incorrect result:
> {noformat}
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> Another example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array('1', '2', '3', '4'), 5, '5'),
> (array('1', '2', '3', '4'), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> The above query throws a {{NullPointerException}}:
> {noformat}
> 23/02/10 11:08:05 ERROR SparkSQLDriver: Failed in [select array_insert(col1, 
> col2, col3) from v1]
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source)
>   at 
> org.apache.spark.sql.execution.LocalTableScanExec.$anonfun$unsafeRows$1(LocalTableScanExec.scala:44)
> {noformat}
> {{array_append}} has the same issue:
> {noformat}
> spark-sql> select array_append(array(1, 2, 3, 4), cast(null as int));
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> Time taken: 3.679 seconds, Fetched 1 row(s)
> spark-sql> select array_append(array('1', '2', '3', '4'), cast(null as 
> string));
> 23/02/10 11:13:36 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown
>  Source)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
>  Source)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42401) Incorrect results or NPE when inserting null value using array_insert/array_append

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687315#comment-17687315
 ] 

Apache Spark commented on SPARK-42401:
--

User 'bersprockets' has created a pull request for this issue:
https://github.com/apache/spark/pull/39970

> Incorrect results or NPE when inserting null value using 
> array_insert/array_append
> --
>
> Key: SPARK-42401
> URL: https://issues.apache.org/jira/browse/SPARK-42401
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Bruce Robbins
>Priority: Major
>
> Example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array(1, 2, 3, 4), 5, 5),
> (array(1, 2, 3, 4), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> This produces an incorrect result:
> {noformat}
> [1,2,3,4,5]
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> A more succinct example:
> {noformat}
> select array_insert(array(1, 2, 3, 4), 5, cast(null as int));
> {noformat}
> This also produces an incorrect result:
> {noformat}
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> Another example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array('1', '2', '3', '4'), 5, '5'),
> (array('1', '2', '3', '4'), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> The above query throws a {{NullPointerException}}:
> {noformat}
> 23/02/10 11:08:05 ERROR SparkSQLDriver: Failed in [select array_insert(col1, 
> col2, col3) from v1]
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source)
>   at 
> org.apache.spark.sql.execution.LocalTableScanExec.$anonfun$unsafeRows$1(LocalTableScanExec.scala:44)
> {noformat}
> {{array_append}} has the same issue:
> {noformat}
> spark-sql> select array_append(array(1, 2, 3, 4), cast(null as int));
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> Time taken: 3.679 seconds, Fetched 1 row(s)
> spark-sql> select array_append(array('1', '2', '3', '4'), cast(null as 
> string));
> 23/02/10 11:13:36 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown
>  Source)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
>  Source)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42401) Incorrect results or NPE when inserting null value using array_insert/array_append

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42401:


Assignee: (was: Apache Spark)

> Incorrect results or NPE when inserting null value using 
> array_insert/array_append
> --
>
> Key: SPARK-42401
> URL: https://issues.apache.org/jira/browse/SPARK-42401
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Bruce Robbins
>Priority: Major
>
> Example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array(1, 2, 3, 4), 5, 5),
> (array(1, 2, 3, 4), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> This produces an incorrect result:
> {noformat}
> [1,2,3,4,5]
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> A more succint example:
> {noformat}
> select array_insert(array(1, 2, 3, 4), 5, cast(null as int));
> {noformat}
> This also produces an incorrect result:
> {noformat}
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> Another example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array('1', '2', '3', '4'), 5, '5'),
> (array('1', '2', '3', '4'), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> The above query throws a {{NullPointerException}}:
> {noformat}
> 23/02/10 11:08:05 ERROR SparkSQLDriver: Failed in [select array_insert(col1, 
> col2, col3) from v1]
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source)
>   at 
> org.apache.spark.sql.execution.LocalTableScanExec.$anonfun$unsafeRows$1(LocalTableScanExec.scala:44)
> {noformat}
> {{array_append}} has the same issue:
> {noformat}
> spark-sql> select array_append(array(1, 2, 3, 4), cast(null as int));
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> Time taken: 3.679 seconds, Fetched 1 row(s)
> spark-sql> select array_append(array('1', '2', '3', '4'), cast(null as 
> string));
> 23/02/10 11:13:36 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown
>  Source)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
>  Source)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42401) Incorrect results or NPE when inserting null value using array_insert/array_append

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687316#comment-17687316
 ] 

Apache Spark commented on SPARK-42401:
--

User 'bersprockets' has created a pull request for this issue:
https://github.com/apache/spark/pull/39970

> Incorrect results or NPE when inserting null value using 
> array_insert/array_append
> --
>
> Key: SPARK-42401
> URL: https://issues.apache.org/jira/browse/SPARK-42401
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Bruce Robbins
>Priority: Major
>
> Example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array(1, 2, 3, 4), 5, 5),
> (array(1, 2, 3, 4), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> This produces an incorrect result:
> {noformat}
> [1,2,3,4,5]
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> A more succinct example:
> {noformat}
> select array_insert(array(1, 2, 3, 4), 5, cast(null as int));
> {noformat}
> This also produces an incorrect result:
> {noformat}
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> Another example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array('1', '2', '3', '4'), 5, '5'),
> (array('1', '2', '3', '4'), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> The above query throws a {{NullPointerException}}:
> {noformat}
> 23/02/10 11:08:05 ERROR SparkSQLDriver: Failed in [select array_insert(col1, 
> col2, col3) from v1]
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source)
>   at 
> org.apache.spark.sql.execution.LocalTableScanExec.$anonfun$unsafeRows$1(LocalTableScanExec.scala:44)
> {noformat}
> {{array_append}} has the same issue:
> {noformat}
> spark-sql> select array_append(array(1, 2, 3, 4), cast(null as int));
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> Time taken: 3.679 seconds, Fetched 1 row(s)
> spark-sql> select array_append(array('1', '2', '3', '4'), cast(null as 
> string));
> 23/02/10 11:13:36 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown
>  Source)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
>  Source)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-42401) Incorrect results or NPE when inserting null value using array_insert/array_append

2023-02-10 Thread Bruce Robbins (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruce Robbins updated SPARK-42401:
--
Labels: correctness  (was: )

> Incorrect results or NPE when inserting null value using 
> array_insert/array_append
> --
>
> Key: SPARK-42401
> URL: https://issues.apache.org/jira/browse/SPARK-42401
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Bruce Robbins
>Priority: Major
>  Labels: correctness
>
> Example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array(1, 2, 3, 4), 5, 5),
> (array(1, 2, 3, 4), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> This produces an incorrect result:
> {noformat}
> [1,2,3,4,5]
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> A more succinct example:
> {noformat}
> select array_insert(array(1, 2, 3, 4), 5, cast(null as int));
> {noformat}
> This also produces an incorrect result:
> {noformat}
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> {noformat}
> Another example:
> {noformat}
> create or replace temp view v1 as
> select * from values
> (array('1', '2', '3', '4'), 5, '5'),
> (array('1', '2', '3', '4'), 5, null)
> as v1(col1,col2,col3);
> select array_insert(col1, col2, col3) from v1;
> {noformat}
> The above query throws a {{NullPointerException}}:
> {noformat}
> 23/02/10 11:08:05 ERROR SparkSQLDriver: Failed in [select array_insert(col1, 
> col2, col3) from v1]
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Source)
>   at 
> org.apache.spark.sql.execution.LocalTableScanExec.$anonfun$unsafeRows$1(LocalTableScanExec.scala:44)
> {noformat}
> {{array_append}} has the same issue:
> {noformat}
> spark-sql> select array_append(array(1, 2, 3, 4), cast(null as int));
> [1,2,3,4,0] <== should be [1,2,3,4,null]
> Time taken: 3.679 seconds, Fetched 1 row(s)
> spark-sql> select array_append(array('1', '2', '3', '4'), cast(null as 
> string));
> 23/02/10 11:13:36 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.NullPointerException
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.UnsafeWriter.write(UnsafeWriter.java:110)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown
>  Source)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
>  Source)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-42402) Support parameterized SQL by sql()

2023-02-10 Thread Takuya Ueshin (Jira)
Takuya Ueshin created SPARK-42402:
-

 Summary: Support parameterized SQL by sql()
 Key: SPARK-42402
 URL: https://issues.apache.org/jira/browse/SPARK-42402
 Project: Spark
  Issue Type: Sub-task
  Components: Connect
Affects Versions: 3.4.0
Reporter: Takuya Ueshin
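
The ticket has no description yet; presumably it tracks exposing the parameterized spark.sql(..., args=...) API recently added for the classic SparkSession through the Spark Connect Python client. A hedged sketch of what client-side usage might look like; the endpoint is a placeholder and the exact form of args (literal strings vs. plain Python values) may differ by version:

{code:python}
from pyspark.sql import SparkSession

# Hypothetical Connect endpoint.
spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

spark.range(10).createOrReplaceTempView("t")

# Named parameter markers (:name) in the query text are bound from the args
# mapping on the server side, rather than being spliced into the SQL string
# by the client.
df = spark.sql("SELECT * FROM t WHERE id > :minId", args={"minId": 5})
df.show()
{code}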






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42402) Support parameterized SQL by sql()

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687323#comment-17687323
 ] 

Apache Spark commented on SPARK-42402:
--

User 'ueshin' has created a pull request for this issue:
https://github.com/apache/spark/pull/39971

> Support parameterized SQL by sql()
> --
>
> Key: SPARK-42402
> URL: https://issues.apache.org/jira/browse/SPARK-42402
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Takuya Ueshin
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42402) Support parameterized SQL by sql()

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42402:


Assignee: Apache Spark

> Support parameterized SQL by sql()
> --
>
> Key: SPARK-42402
> URL: https://issues.apache.org/jira/browse/SPARK-42402
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Takuya Ueshin
>Assignee: Apache Spark
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42402) Support parameterized SQL by sql()

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687324#comment-17687324
 ] 

Apache Spark commented on SPARK-42402:
--

User 'ueshin' has created a pull request for this issue:
https://github.com/apache/spark/pull/39971

> Support parameterized SQL by sql()
> --
>
> Key: SPARK-42402
> URL: https://issues.apache.org/jira/browse/SPARK-42402
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Takuya Ueshin
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42402) Support parameterized SQL by sql()

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42402:


Assignee: (was: Apache Spark)

> Support parameterized SQL by sql()
> --
>
> Key: SPARK-42402
> URL: https://issues.apache.org/jira/browse/SPARK-42402
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Takuya Ueshin
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-42403:
-

 Summary: SHS fail to parse event logs
 Key: SPARK-42403
 URL: https://issues.apache.org/jira/browse/SPARK-42403
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 3.4.0
Reporter: Dongjoon Hyun


*Event Log*
{code}
{"Declaring 
Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
 Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
{code}

*Apache Spark 3.4*
{code}
23/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
java.lang.IllegalArgumentException: requirement failed: Expected string, got 
NULL
at scala.Predef$.require(Predef.scala:281)
at 
org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
at 
org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at scala.collection.AbstractIterator.to(Iterator.scala:1431)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
at 
org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
at 
org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
at 
org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
at 
org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
at 
org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
at 
org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
at 
org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
at 
org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
at 
org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
at 
org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
at 
org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
at 
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at 
scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
at 
org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
at 
org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
{code}
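
The failure comes from a strict string requirement in JsonProtocol.stackTraceFromJson meeting a frame whose "File Name" is null (generated code has no source file, and its line number is -1). The Python snippet below only illustrates the strict vs. null-tolerant reading of that field; it is not the Spark code and not the actual patch:

{code:python}
import json

# One stack-trace frame as it appears in the event log above: generated code
# has no source file, so "File Name" is null and "Line Number" is -1.
frame = json.loads("""
{"Declaring Class": "GeneratedClass$GeneratedIteratorForCodegenStage1",
 "Method Name": "columnartorow_nextBatch_0$",
 "File Name": null,
 "Line Number": -1}
""")

def extract_string_strict(value):
    # Mirrors a require()-style check that insists on a string and rejects null.
    if not isinstance(value, str):
        raise ValueError("requirement failed: Expected string, got NULL")
    return value

def extract_string_or_none(value):
    # Null-tolerant reading: keep replaying the log instead of aborting.
    return value if isinstance(value, str) else None

try:
    extract_string_strict(frame["File Name"])
except ValueError as err:
    print(err)  # requirement failed: Expected string, got NULL

print(extract_string_or_none(frame["File Name"]))  # None
{code}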



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-42403:
--
Target Version/s: 3.4.0

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 23/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687335#comment-17687335
 ] 

Apache Spark commented on SPARK-42403:
--

User 'dongjoon-hyun' has created a pull request for this issue:
https://github.com/apache/spark/pull/39972

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 23/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42403:


Assignee: Apache Spark

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Apache Spark
>Priority: Blocker
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 23/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42403:


Assignee: (was: Apache Spark)

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 23/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687336#comment-17687336
 ] 

Apache Spark commented on SPARK-42403:
--

User 'dongjoon-hyun' has created a pull request for this issue:
https://github.com/apache/spark/pull/39972

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 3/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}
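
To make the failure mode above concrete, the following is a minimal standalone
sketch (plain Jackson, not the actual JsonProtocol code) that parses the
stack-trace element from the event log and fails the same way once the
extractor insists on a textual node while "File Name" is JSON null.

{code}
import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}

object NullFileNameRepro {
  private val mapper = new ObjectMapper()

  // Strict extractor in the spirit of the failing call site: it rejects
  // anything that is not a textual JSON node, including JSON null.
  def extractStringStrict(node: JsonNode): String = {
    require(node != null && node.isTextual,
      s"Expected string, got ${if (node == null || node.isNull) "NULL" else node.getNodeType}")
    node.textValue()
  }

  def main(args: Array[String]): Unit = {
    val frame = mapper.readTree(
      """{"Declaring Class":"GeneratedClass$GeneratedIteratorForCodegenStage1",
        |"Method Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}""".stripMargin)
    // Throws: java.lang.IllegalArgumentException: requirement failed: Expected string, got NULL
    extractStringStrict(frame.get("File Name"))
  }
}
{code}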



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687339#comment-17687339
 ] 

Apache Spark commented on SPARK-42403:
--

User 'JoshRosen' has created a pull request for this issue:
https://github.com/apache/spark/pull/39973

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 3/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-42403:
-

Assignee: Josh Rosen

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Josh Rosen
>Priority: Blocker
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 3/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-42404) Spark driver pod should not create executor pods when there is no driver service

2023-02-10 Thread Shiqi Sun (Jira)
Shiqi Sun created SPARK-42404:
-

 Summary: Spark driver pod should not create executor pods when 
there is no driver service
 Key: SPARK-42404
 URL: https://issues.apache.org/jira/browse/SPARK-42404
 Project: Spark
  Issue Type: Improvement
  Components: Kubernetes
Affects Versions: 3.3.1
Reporter: Shiqi Sun


Currently, the driver pod assumes the driver headless service exists when 
creating the executor pods. When this assumption doesn't hold, the driver 
still spins up executor pods, the executor pods fail, the driver tries to 
create more pods, and so on. As a result the Spark job makes no progress, 
consumes a lot of computational resources, and never reaches a terminal 
state without manual intervention (e.g. deleting the job or recreating the 
driver service).

 

This Jira issue addresses the problem by having the driver verify that the 
driver service exists before creating the executor pods; a rough sketch of 
such a guard follows.
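
The sketch below is a hypothetical illustration of that guard, not the actual
patch: it uses the fabric8 Kubernetes client that Spark's K8s backend builds
on, and the names (namespace, driverServiceName, requestExecutors) are
placeholders.

{code}
import io.fabric8.kubernetes.client.KubernetesClient

object DriverServiceGuard {
  // Returns true only if the driver's headless service is visible to the API server.
  def driverServiceExists(
      client: KubernetesClient,
      namespace: String,
      driverServiceName: String): Boolean =
    client.services().inNamespace(namespace).withName(driverServiceName).get() != null

  // Only request executor pods when the service exists; otherwise fail fast
  // instead of looping on executors that can never reach the driver.
  def maybeRequestExecutors(
      client: KubernetesClient,
      namespace: String,
      driverServiceName: String)(requestExecutors: () => Unit): Unit = {
    if (driverServiceExists(client, namespace, driverServiceName)) {
      requestExecutors()
    } else {
      throw new IllegalStateException(
        s"Driver service $driverServiceName not found in namespace $namespace; " +
          "not creating executor pods")
    }
  }
}
{code}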



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-42222) Spark 3.3 Backport: SPARK-41344 Reading V2 datasource masks underlying error

2023-02-10 Thread L. C. Hsieh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

L. C. Hsieh resolved SPARK-4.
-
Resolution: Won't Fix

> Spark 3.3 Backport: SPARK-41344 Reading V2 datasource masks underlying error
> 
>
> Key: SPARK-4
> URL: https://issues.apache.org/jira/browse/SPARK-4
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Kevin Cheung
>Priority: Major
> Fix For: 3.3.2
>
>
> The underlying error message is thrown away, leading to a misleading, 
> non-user-friendly error message.
> The full description is in https://issues.apache.org/jira/browse/SPARK-41344. 
> I will backport this to Spark 3.3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-42403) SHS fail to parse event logs

2023-02-10 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-42403.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 39973
[https://github.com/apache/spark/pull/39973]

> SHS fail to parse event logs
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Josh Rosen
>Priority: Blocker
> Fix For: 3.4.0
>
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 3/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org

[jira] [Updated] (SPARK-42403) JsonProtocol should handle null JSON strings

2023-02-10 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-42403:
--
Summary: JsonProtocol should handle null JSON strings  (was: SHS fail to 
parse event logs)

> JsonProtocol should handle null JSON strings
> 
>
> Key: SPARK-42403
> URL: https://issues.apache.org/jira/browse/SPARK-42403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Josh Rosen
>Priority: Blocker
> Fix For: 3.4.0
>
>
> *Event Log*
> {code}
> {"Declaring 
> Class":"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1","Method
>  Name":"columnartorow_nextBatch_0$","File Name":null,"Line Number":-1}
> {code}
> *Apache Spark 3.4*
> {code}
> 3/02/10 16:54:46 ERROR ReplayListenerBus: Exception parsing Spark event log: 
> file:/Users/dongjoon/data/history/eventlog_v2_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job/events_1_spark-1676069204164-1qq70hioosynfzib9rmi77wbavnao-driver-job.zstd
> java.lang.IllegalArgumentException: requirement failed: Expected string, got 
> NULL
> at scala.Predef$.require(Predef.scala:281)
> at 
> org.apache.spark.util.JsonProtocol$JsonNodeImplicits.extractString(JsonProtocol.scala:1614)
> at 
> org.apache.spark.util.JsonProtocol$.$anonfun$stackTraceFromJson$1(JsonProtocol.scala:1561)
> at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
> at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
> at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
> at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
> at scala.collection.AbstractIterator.to(Iterator.scala:1431)
> at 
> scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
> at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
> at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
> at 
> scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
> at 
> org.apache.spark.util.JsonProtocol$.stackTraceFromJson(JsonProtocol.scala:1564)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndReasonFromJson(JsonProtocol.scala:1361)
> at 
> org.apache.spark.util.JsonProtocol$.taskEndFromJson(JsonProtocol.scala:938)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:876)
> at 
> org.apache.spark.util.JsonProtocol$.sparkEventFromJson(JsonProtocol.scala:865)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:88)
> at 
> org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:59)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3(FsHistoryProvider.scala:1140)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$3$adapted(FsHistoryProvider.scala:1138)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2777)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1(FsHistoryProvider.scala:1138)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$parseAppEventLogs$1$adapted(FsHistoryProvider.scala:1136)
> at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.parseAppEventLogs(FsHistoryProvider.scala:1136)
> at 
> org.apache.spark.deploy.history.FsHistoryProvider.rebuildAppStore(FsHistoryProvider.scala:1117)
> {code}
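
With the new title, one plausible direction (a hedged sketch only; the actual
change is in https://github.com/apache/spark/pull/39973 and may differ) is an
extractor that treats a missing or JSON-null node as the absence of a value
rather than an error:

{code}
import com.fasterxml.jackson.databind.JsonNode

object NullTolerantExtract {
  // Returns None for a missing field or JSON null, Some(text) for a string
  // node, and still fails loudly for any other node type.
  def extractStringOpt(node: JsonNode): Option[String] =
    Option(node).filterNot(_.isNull).map { n =>
      require(n.isTextual, s"Expected string, got ${n.getNodeType}")
      n.textValue()
    }
}
{code}

Fed the event-log entry above, extractStringOpt(frame.get("File Name")).orNull
yields null, which java.lang.StackTraceElement accepts for its file-name
argument, so replay could proceed instead of aborting.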



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org