[
https://issues.apache.org/jira/browse/SPARK-36494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-36494:
Description:
Create two tables in spark like this:
{code:java}
CREATE TABLE catalog_sales
(
zengrui created SPARK-36494:
---
Summary: SortMergeJoin does an unnecessary shuffle for tables whose
provider is hive
Key: SPARK-36494
URL: https://issues.apache.org/jira/browse/SPARK-36494
Project: Spark
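The SPARK-36494 description above is cut off right after `CREATE TABLE catalog_sales (`. A hedged sketch of what such a pair of Hive-provider bucketed tables might look like (column names, bucket count, and storage format are illustrative, not taken from the report):

{code:sql}
-- Hypothetical columns; the original description is truncated after
-- "CREATE TABLE catalog_sales (".
CREATE TABLE catalog_sales (
  cs_item_sk  INT,
  cs_quantity INT
)
CLUSTERED BY (cs_item_sk) SORTED BY (cs_item_sk ASC) INTO 32 BUCKETS
STORED AS PARQUET;

-- A second table bucketed and sorted the same way on the join key.
-- A sort-merge join on cs_item_sk between two identically bucketed
-- and sorted tables should not require an extra shuffle.
{code}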
[
https://issues.apache.org/jira/browse/SPARK-27396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-27396:
Description:
*SPIP: Columnar Processing Without Arrow Formatting Guarantees.*
*Q1.* What
zengrui created SPARK-35893:
---
Summary: No Unit Test case for MySQLDialect.getCatalystType
Key: SPARK-35893
URL: https://issues.apache.org/jira/browse/SPARK-35893
Project: Spark
Issue Type: Test
[
https://issues.apache.org/jira/browse/SPARK-35892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-35892:
Description:
When using SQL in Spark to insert data into a database, suppose the original RDD's
partition num
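The description above is truncated, but the reported behavior concerns how many database connections are opened on write. A minimal stdlib sketch (names hypothetical, not Spark's implementation) of what honoring numPartitions would mean: merge the RDD's partitions down to at most numPartitions groups, with one JDBC connection per group:

```python
def coalesce_partitions(partitions, num_partitions):
    """Merge a list of partitions (each a list of rows) into at most
    num_partitions groups, preserving every row. Sketch of the coalesce
    a JDBC writer would need before opening one connection per group."""
    if num_partitions >= len(partitions):
        return partitions
    merged = [[] for _ in range(num_partitions)]
    for i, part in enumerate(partitions):
        # Round-robin assignment keeps the groups roughly balanced.
        merged[i % num_partitions].extend(part)
    return merged
```

With 8 source partitions and numPartitions=2, the writer would open 2 connections instead of 8.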
zengrui created SPARK-35892:
---
Summary: numPartitions does not work when saving the RDD to a database
Key: SPARK-35892
URL: https://issues.apache.org/jira/browse/SPARK-35892
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-35067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui resolved SPARK-35067.
-
Resolution: Won't Fix
> Test case for function COALESCE() does not cover the CodeGen support
>
zengrui created SPARK-35851:
---
Summary: Wrong variable used in function
GraphGenerators.sampleLogNormal
Key: SPARK-35851
URL: https://issues.apache.org/jira/browse/SPARK-35851
Project: Spark
Issue
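GraphGenerators.sampleLogNormal draws vertex degrees from a log-normal distribution. A hedged stdlib sketch of the intended sampling (the exact Scala signature and the rejection loop are assumptions, not taken from the report):

```python
import math
import random

def sample_log_normal(mu, sigma, max_val, seed=None):
    """Draw X = exp(mu + sigma * Z) with Z ~ N(0, 1), rejecting draws
    above max_val. Sketch only; not the GraphX implementation."""
    rng = random.Random(seed)
    while True:
        z = rng.gauss(0.0, 1.0)
        x = math.exp(mu + sigma * z)
        if x <= max_val:
            return x
```

The reported bug is a wrong variable used somewhere in this computation; the sketch shows the intended formula, not the buggy code.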
[
https://issues.apache.org/jira/browse/SPARK-35067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-35067:
Description: The test case in org.apache.spark.sql.SQLQuerySuite named "Add
Parser of SQL COALESCE" does
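For reference, COALESCE returns the first non-NULL argument. A minimal Python sketch of that semantics (the reported test gap concerns Spark's CodeGen path, which this does not model):

```python
def coalesce(*args):
    """Return the first argument that is not None, mimicking SQL
    COALESCE; return None if every argument is None."""
    for a in args:
        if a is not None:
            return a
    return None
```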
[
https://issues.apache.org/jira/browse/SPARK-35067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-35067:
Summary: Test case for function COALESCE() does not cover the CodeGen
support (was: Test case for
zengrui created SPARK-35067:
---
Summary: Test case for function COALESCE() does not coverage the
CodeGen support
Key: SPARK-35067
URL: https://issues.apache.org/jira/browse/SPARK-35067
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-34760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-34760:
Description:
run JavaSparkSQLExample failed with Exception in runBasicDataSourceExample().
when executing
[
https://issues.apache.org/jira/browse/SPARK-34760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-34760:
Summary: run JavaSQLDataSourceExample failed with Exception in
runBasicDataSourceExample(). (was: run
zengrui created SPARK-34760:
---
Summary: run JavaSparkSQLExample failed with Exception in
runBasicDataSourceExample().
Key: SPARK-34760
URL: https://issues.apache.org/jira/browse/SPARK-34760
Project: Spark
zengrui created SPARK-34759:
---
Summary: run JavaSparkSQLExample failed with Exception.
Key: SPARK-34759
URL: https://issues.apache.org/jira/browse/SPARK-34759
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-29799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-29799:
Attachment: 0001-add-implementation-for-issue-SPARK-29799.patch
> Split a Kafka partition into multiple
[
https://issues.apache.org/jira/browse/SPARK-29791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-29791:
Attachment: 0001-add-implementation-for-issue-SPARK-29791.patch
> Add a spark config to allow user to use
zengrui created SPARK-29799:
---
Summary: Split a Kafka partition into multiple KafkaRDD partitions
in the Kafka external plugin for Spark Streaming
Key: SPARK-29799
URL: https://issues.apache.org/jira/browse/SPARK-29799
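The core of such a split is dividing one Kafka partition's offset range into contiguous sub-ranges, each backing its own KafkaRDD partition. A minimal stdlib sketch (function name hypothetical, not the attached patch's implementation):

```python
def split_offset_range(from_offset, until_offset, num_splits):
    """Split the half-open offset range [from_offset, until_offset)
    into num_splits contiguous sub-ranges of near-equal size."""
    total = until_offset - from_offset
    base, rem = divmod(total, num_splits)
    ranges, start = [], from_offset
    for i in range(num_splits):
        # The first `rem` sub-ranges absorb one extra offset each.
        size = base + (1 if i < rem else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

Each sub-range can then be consumed by a separate task, increasing read parallelism beyond the Kafka partition count.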
[
https://issues.apache.org/jira/browse/SPARK-29791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zengrui updated SPARK-29791:
Description: We can configure the executor cores via "spark.executor.cores".
For example, if we configure 8
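For context, this is how executor cores are set today; the name of the proposed "virtual" setting is not given in the truncated description, so only the existing key is shown:

{code}
# spark-defaults.conf (illustrative)
spark.executor.cores   8
{code}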
zengrui created SPARK-29791:
---
Summary: Add a Spark config to allow users to use executor cores
virtually.
Key: SPARK-29791
URL: https://issues.apache.org/jira/browse/SPARK-29791
Project: Spark