[jira] [Created] (SPARK-37950) Take EXTERNAL as a reserved table property

2022-01-17 Thread PengLei (Jira)
PengLei created SPARK-37950:
---

 Summary: Take EXTERNAL as a reserved table property
 Key: SPARK-37950
 URL: https://issues.apache.org/jira/browse/SPARK-37950
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei


Currently, {{EXTERNAL}} is not a reserved table property. We should make 
{{EXTERNAL}} a truly reserved property; see the 
[discussion|https://github.com/apache/spark/pull/35204#issuecomment-1014752053].
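For illustration (not from the ticket): once {{EXTERNAL}} is truly reserved, 
setting it through {{TBLPROPERTIES}} would be rejected at analysis time, the way 
existing reserved properties such as {{location}} and {{provider}} already are.
{code:java}
// Sketch of the expected behavior once EXTERNAL is reserved (assumed,
// mirroring how the existing reserved table properties behave):
spark.sql("""
  CREATE TABLE t (id INT) USING parquet
  TBLPROPERTIES ('EXTERNAL' = 'true')
""")
// Expected: AnalysisException, since EXTERNAL should only be expressible
// through the CREATE EXTERNAL TABLE syntax, not as a free-form property.
{code}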






[jira] [Created] (SPARK-37931) Quote the column name if needed

2022-01-17 Thread PengLei (Jira)
PengLei created SPARK-37931:
---

 Summary: Quote the column name if needed
 Key: SPARK-37931
 URL: https://issues.apache.org/jira/browse/SPARK-37931
 Project: Spark
  Issue Type: Wish
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei


Quote the column name only when needed, instead of quoting it unconditionally.
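A minimal sketch of the idea (the helper name and the identifier pattern are 
illustrative assumptions, not necessarily Spark's implementation):
{code:java}
// Quote an identifier only when it is not already a simple name.
def quoteIfNeeded(part: String): String = {
  if (part.matches("[a-zA-Z0-9_]+") && !part.matches("\\d+")) {
    part // simple identifier: leave unquoted
  } else {
    s"`${part.replace("`", "``")}`" // needs quoting; escape embedded backticks
  }
}

quoteIfNeeded("col1") // col1
quoteIfNeeded("a b")  // `a b`
quoteIfNeeded("a`b")  // `a``b`
{code}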






[jira] [Created] (SPARK-37878) Migrate SHOW CREATE TABLE to use v2 command by default

2022-01-12 Thread PengLei (Jira)
PengLei created SPARK-37878:
---

 Summary: Migrate SHOW CREATE TABLE to use v2 command by default
 Key: SPARK-37878
 URL: https://issues.apache.org/jira/browse/SPARK-37878
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0


Migrate SHOW CREATE TABLE to use v2 command by default






[jira] [Created] (SPARK-37827) Put some built-in table properties into V1Table.properties to adapt to V2 commands

2022-01-06 Thread PengLei (Jira)
PengLei created SPARK-37827:
---

 Summary: Put some built-in table properties into 
V1Table.properties to adapt to V2 commands
 Key: SPARK-37827
 URL: https://issues.apache.org/jira/browse/SPARK-37827
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei


Currently, we have no built-in table properties in V1Table.properties, so we 
cannot get correct results when some v2 commands run against a V1Table.

e.g. the v2 `SHOW CREATE TABLE` reads the provider, location, comment, and 
options from table.properties, but table.properties is currently empty.
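A sketch of the direction (the property keys are assumptions for illustration, 
chosen to mirror the fields the v2 `SHOW CREATE TABLE` reads):
{code:java}
import org.apache.spark.sql.catalyst.catalog.CatalogTable

// Sketch: surface built-in v1 catalog metadata as v2 table properties so
// v2 commands can read them uniformly from table.properties.
def v1TableProperties(t: CatalogTable): Map[String, String] = {
  t.properties ++
    t.provider.map("provider" -> _) ++
    t.comment.map("comment" -> _) ++
    t.storage.locationUri.map(loc => "location" -> loc.toString) ++
    t.storage.properties.map { case (k, v) => s"option.$k" -> v }
}
{code}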






[jira] [Created] (SPARK-37818) Add option for show create table command

2022-01-05 Thread PengLei (Jira)
PengLei created SPARK-37818:
---

 Summary: Add option for show create table command
 Key: SPARK-37818
 URL: https://issues.apache.org/jira/browse/SPARK-37818
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Created] (SPARK-37517) Keep the column order consistent with what the user specifies for v1 tables

2021-12-02 Thread PengLei (Jira)
PengLei created SPARK-37517:
---

 Summary: Keep the column order consistent with what the user specifies 
for v1 tables
 Key: SPARK-37517
 URL: https://issues.apache.org/jira/browse/SPARK-37517
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Updated] (SPARK-37501) CREATE/REPLACE TABLE should qualify location for v2 command

2021-11-30 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-37501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-37501:

Summary: CREATE/REPLACE TABLE should qualify location for v2 command  (was: 
CREATE TABLE should qualify location for v2 command)

> CREATE/REPLACE TABLE should qualify location for v2 command
> ---
>
> Key: SPARK-37501
> URL: https://issues.apache.org/jira/browse/SPARK-37501
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.3.0
>
>
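For context, a sketch of what "qualify" means here (an illustrative helper, not 
the ticket's patch): a bare path such as {{/tmp/warehouse/t}} should be stored as 
a fully qualified URI like {{file:/tmp/warehouse/t}}.
{code:java}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Illustrative helper (assumed): resolve a user-supplied location against
// the Hadoop file system so the stored URI carries a scheme and authority.
def qualifyLocation(location: String, hadoopConf: Configuration): String = {
  val path = new Path(location)
  val fs = path.getFileSystem(hadoopConf)
  fs.makeQualified(path).toUri.toString
}
{code}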







[jira] [Created] (SPARK-37501) CREATE TABLE should qualify location for v2 command

2021-11-30 Thread PengLei (Jira)
PengLei created SPARK-37501:
---

 Summary: CREATE TABLE should qualify location for v2 command
 Key: SPARK-37501
 URL: https://issues.apache.org/jira/browse/SPARK-37501
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Commented] (SPARK-37381) Unify v1 and v2 SHOW CREATE TABLE tests

2021-11-30 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450936#comment-17450936
 ] 

PengLei commented on SPARK-37381:
-

[~dchvn] Sorry for the late reply. I think you can take other DDL commands.

> Unify v1 and v2 SHOW CREATE TABLE  tests
> 
>
> Key: SPARK-37381
> URL: https://issues.apache.org/jira/browse/SPARK-37381
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.3.0
>
>







[jira] [Updated] (SPARK-37494) Unify v1 and v2 options output of `SHOW CREATE TABLE` command

2021-11-29 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-37494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-37494:

Summary: Unify v1 and v2 options output of `SHOW CREATE TABLE` command  
(was: Unify v1 and v2 option output of `SHOW CREATE TABLE` command)

> Unify v1 and v2 options output of `SHOW CREATE TABLE` command
> -
>
> Key: SPARK-37494
> URL: https://issues.apache.org/jira/browse/SPARK-37494
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.3.0
>
>







[jira] [Created] (SPARK-37494) Unify v1 and v2 option output of `SHOW CREATE TABLE` command

2021-11-29 Thread PengLei (Jira)
PengLei created SPARK-37494:
---

 Summary: Unify v1 and v2 option output of `SHOW CREATE TABLE` 
command
 Key: SPARK-37494
 URL: https://issues.apache.org/jira/browse/SPARK-37494
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Created] (SPARK-37477) Migrate SHOW CREATE TABLE to use V2 command by default

2021-11-28 Thread PengLei (Jira)
PengLei created SPARK-37477:
---

 Summary: Migrate SHOW CREATE TABLE to use V2 command by default
 Key: SPARK-37477
 URL: https://issues.apache.org/jira/browse/SPARK-37477
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Created] (SPARK-37381) Unify v1 and v2 SHOW CREATE TABLE tests

2021-11-18 Thread PengLei (Jira)
PengLei created SPARK-37381:
---

 Summary: Unify v1 and v2 SHOW CREATE TABLE  tests
 Key: SPARK-37381
 URL: https://issues.apache.org/jira/browse/SPARK-37381
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Created] (SPARK-37195) Unify v1 and v2 SHOW TBLPROPERTIES tests

2021-11-02 Thread PengLei (Jira)
PengLei created SPARK-37195:
---

 Summary: Unify v1 and v2 SHOW TBLPROPERTIES  tests
 Key: SPARK-37195
 URL: https://issues.apache.org/jira/browse/SPARK-37195
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0


Unify v1 and v2 SHOW TBLPROPERTIES tests






[jira] [Commented] (SPARK-36924) CAST between ANSI intervals and numerics

2021-11-02 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17437198#comment-17437198
 ] 

PengLei commented on SPARK-36924:
-

Working on this later.

> CAST between ANSI intervals and numerics
> 
>
> Key: SPARK-36924
> URL: https://issues.apache.org/jira/browse/SPARK-36924
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Max Gekk
>Priority: Major
>
> Support casting between ANSI intervals and numerics. The implementation 
> should follow ANSI SQL standard. 
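
Illustrative expected behavior (assumed semantics for single-unit intervals; the 
ticket itself only mandates following the ANSI standard):
{code:java}
// Assumed for illustration: the numeric value counts the interval's unit.
spark.sql("SELECT CAST(INTERVAL '10' SECOND AS BIGINT)").show() // 10
spark.sql("SELECT CAST(5 AS INTERVAL YEAR)").show()             // INTERVAL '5' YEAR
{code}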






[jira] [Commented] (SPARK-37192) Migrate SHOW TBLPROPERTIES to use V2 command by default

2021-11-01 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17437113#comment-17437113
 ] 

PengLei commented on SPARK-37192:
-

[~imback82] [~wenchen] I want to try to fix it, okay?

> Migrate SHOW TBLPROPERTIES to use V2 command by default
> ---
>
> Key: SPARK-37192
> URL: https://issues.apache.org/jira/browse/SPARK-37192
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.3.0
>
>
> Migrate SHOW TBLPROPERTIES to use V2 command by default






[jira] [Created] (SPARK-37192) Migrate SHOW TBLPROPERTIES to use V2 command by default

2021-11-01 Thread PengLei (Jira)
PengLei created SPARK-37192:
---

 Summary: Migrate SHOW TBLPROPERTIES to use V2 command by default
 Key: SPARK-37192
 URL: https://issues.apache.org/jira/browse/SPARK-37192
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0


Migrate SHOW TBLPROPERTIES to use V2 command by default






[jira] [Commented] (SPARK-37161) RowToColumnConverter support AnsiIntervalType

2021-10-29 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435880#comment-17435880
 ] 

PengLei commented on SPARK-37161:
-

working on this

> RowToColumnConverter  support AnsiIntervalType
> --
>
> Key: SPARK-37161
> URL: https://issues.apache.org/jira/browse/SPARK-37161
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Major
>
> Currently, we have a RowToColumnConverter for all data types except 
> AnsiIntervalType
> {code:java}
> // code placeholder
> val core = dataType match {
>   case BinaryType => BinaryConverter
>   case BooleanType => BooleanConverter
>   case ByteType => ByteConverter
>   case ShortType => ShortConverter
>   case IntegerType | DateType => IntConverter
>   case FloatType => FloatConverter
>   case LongType | TimestampType => LongConverter
>   case DoubleType => DoubleConverter
>   case StringType => StringConverter
>   case CalendarIntervalType => CalendarConverter
>   case at: ArrayType => ArrayConverter(getConverterForType(at.elementType, 
> at.containsNull))
>   case st: StructType => new StructConverter(st.fields.map(
> (f) => getConverterForType(f.dataType, f.nullable)))
>   case dt: DecimalType => new DecimalConverter(dt)
>   case mt: MapType => MapConverter(getConverterForType(mt.keyType, nullable = 
> false),
> getConverterForType(mt.valueType, mt.valueContainsNull))
>   case unknown => throw 
> QueryExecutionErrors.unsupportedDataTypeError(unknown.toString)
> }
> if (nullable) {
>   dataType match {
> case CalendarIntervalType => new StructNullableTypeConverter(core)
> case st: StructType => new StructNullableTypeConverter(core)
> case _ => new BasicNullableTypeConverter(core)
>   }
> } else {
>   core
> }
> {code}
>  






[jira] [Created] (SPARK-37161) RowToColumnConverter support AnsiIntervalType

2021-10-29 Thread PengLei (Jira)
PengLei created SPARK-37161:
---

 Summary: RowToColumnConverter  support AnsiIntervalType
 Key: SPARK-37161
 URL: https://issues.apache.org/jira/browse/SPARK-37161
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei


Currently, we have a RowToColumnConverter for all data types except 
AnsiIntervalType
{code:java}
// code placeholder
val core = dataType match {
  case BinaryType => BinaryConverter
  case BooleanType => BooleanConverter
  case ByteType => ByteConverter
  case ShortType => ShortConverter
  case IntegerType | DateType => IntConverter
  case FloatType => FloatConverter
  case LongType | TimestampType => LongConverter
  case DoubleType => DoubleConverter
  case StringType => StringConverter
  case CalendarIntervalType => CalendarConverter
  case at: ArrayType => ArrayConverter(getConverterForType(at.elementType, 
at.containsNull))
  case st: StructType => new StructConverter(st.fields.map(
(f) => getConverterForType(f.dataType, f.nullable)))
  case dt: DecimalType => new DecimalConverter(dt)
  case mt: MapType => MapConverter(getConverterForType(mt.keyType, nullable = 
false),
getConverterForType(mt.valueType, mt.valueContainsNull))
  case unknown => throw 
QueryExecutionErrors.unsupportedDataTypeError(unknown.toString)
}

if (nullable) {
  dataType match {
case CalendarIntervalType => new StructNullableTypeConverter(core)
case st: StructType => new StructNullableTypeConverter(core)
case _ => new BasicNullableTypeConverter(core)
  }
} else {
  core
}

{code}
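Since a year-month interval is physically an Int (months) and a day-time interval 
a Long (microseconds), a plausible fix, sketched against the match above (not 
necessarily the merged patch), is to reuse the existing primitive converters:
{code:java}
// Additional cases for the dataType match above (sketch):
case _: YearMonthIntervalType => IntConverter  // months stored as Int
case _: DayTimeIntervalType => LongConverter   // microseconds stored as Long
{code}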
 






[jira] [Commented] (SPARK-36928) Handle ANSI intervals in ColumnarRow, ColumnarBatchRow and ColumnarArray

2021-10-17 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17429791#comment-17429791
 ] 

PengLei commented on SPARK-36928:
-

working on this later

> Handle ANSI intervals in ColumnarRow, ColumnarBatchRow and ColumnarArray
> 
>
> Key: SPARK-36928
> URL: https://issues.apache.org/jira/browse/SPARK-36928
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Max Gekk
>Priority: Major
>
> Handle ANSI interval types - YearMonthIntervalType and DayTimeIntervalType in 
> Columnar* classes, and write tests.






[jira] [Commented] (SPARK-36921) The DIV function should support ANSI intervals

2021-10-08 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426470#comment-17426470
 ] 

PengLei commented on SPARK-36921:
-

working on this

> The DIV function should support ANSI intervals
> --
>
> Key: SPARK-36921
> URL: https://issues.apache.org/jira/browse/SPARK-36921
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Max Gekk
>Priority: Major
>
> Extended the div function to support ANSI intervals. The operation should 
> produce quotient of division.
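
Illustrative expected behavior (assumed: dividing two intervals of the same class 
yields a plain integral quotient):
{code:java}
// Assumed semantics for illustration.
spark.sql("SELECT INTERVAL '10' DAY div INTERVAL '3' DAY").show() // 3
{code}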






[jira] [Commented] (SPARK-36922) The SIGN/SIGNUM functions should support ANSI intervals

2021-10-08 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17426471#comment-17426471
 ] 

PengLei commented on SPARK-36922:
-

working on this

> The SIGN/SIGNUM functions should support ANSI intervals
> ---
>
> Key: SPARK-36922
> URL: https://issues.apache.org/jira/browse/SPARK-36922
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Max Gekk
>Priority: Major
>
> Extend the *sign/signum* functions to support ANSI intervals.
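
Illustrative expected behavior (assumed):
{code:java}
// Assumed semantics for illustration: the sign of the interval's magnitude.
spark.sql("SELECT signum(INTERVAL '-10' DAY)").show() // -1.0
spark.sql("SELECT sign(INTERVAL '3' MONTH)").show()   // 1.0
{code}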






[jira] [Updated] (SPARK-36841) Provide ANSI syntax `set catalog xxx` to change the current catalog

2021-09-24 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36841:

Description: !SET-CATALOG.PNG!  (was: !截图.PNG!)

> Provide ANSI syntax `set catalog xxx` to change the current catalog
> --
>
> Key: SPARK-36841
> URL: https://issues.apache.org/jira/browse/SPARK-36841
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.3.0
>
>
> !SET-CATALOG.PNG!
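
The proposed usage, illustrated (an assumed form based on the summary; the 
attached screenshot carries the ticket's detail):
{code:java}
// Assumed usage for illustration: switch the current catalog with ANSI-style
// syntax instead of relying on catalog-qualified names.
spark.sql("SET CATALOG my_catalog")
spark.sql("SELECT current_catalog()").show() // my_catalog
{code}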






[jira] [Updated] (SPARK-36841) Provide ANSI syntax `set catalog xxx` to change the current catalog

2021-09-24 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36841:

Description: !截图.PNG!

> Provide ANSI syntax `set catalog xxx` to change the current catalog
> --
>
> Key: SPARK-36841
> URL: https://issues.apache.org/jira/browse/SPARK-36841
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.3.0
>
>
> !截图.PNG!






[jira] [Created] (SPARK-36841) Provide ANSI syntax `set catalog xxx` to change the current catalog

2021-09-24 Thread PengLei (Jira)
PengLei created SPARK-36841:
---

 Summary: Provide ANSI syntax `set catalog xxx` to change the 
current catalog
 Key: SPARK-36841
 URL: https://issues.apache.org/jira/browse/SPARK-36841
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Updated] (SPARK-36790) Update user-facing catalog to adapt CatalogPlugin

2021-09-17 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36790:

Description: 
Currently, SparkSession.catalog always returns a CatalogImpl backed by the 
SessionCatalog, i.e. SparkSession.sessionState.catalog:
{code:java}
@transient lazy val catalog: Catalog = new CatalogImpl(self)
{code}
{code:java}
private def sessionCatalog: SessionCatalog = sparkSession.sessionState.catalog
{code}
So actions can only be performed against the SessionCatalog; we cannot act on a 
user-defined CatalogPlugin.
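
An illustrative consequence (the catalog name and implementation class are 
assumptions for the example):
{code:java}
// A custom v2 catalog registered as a CatalogPlugin...
spark.conf.set("spark.sql.catalog.my_cat", "com.example.MyCatalog") // assumed class

// ...is reachable through SQL:
spark.sql("SHOW TABLES IN my_cat.ns").show()

// ...but the user-facing API still only sees the built-in session catalog:
spark.catalog.listTables("ns") // resolves against spark_catalog, not my_cat
{code}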

> Update user-facing catalog to adapt CatalogPlugin
> -
>
> Key: SPARK-36790
> URL: https://issues.apache.org/jira/browse/SPARK-36790
> Project: Spark
>  Issue Type: Task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: PengLei
>Priority: Minor
> Fix For: 3.3.0
>
>
> Currently, SparkSession.catalog always returns a CatalogImpl backed by the 
> SessionCatalog, i.e. SparkSession.sessionState.catalog:
> {code:java}
> @transient lazy val catalog: Catalog = new CatalogImpl(self)
> {code}
> {code:java}
> private def sessionCatalog: SessionCatalog = sparkSession.sessionState.catalog
> {code}
> So actions can only be performed against the SessionCatalog; we cannot act on 
> a user-defined CatalogPlugin.






[jira] [Created] (SPARK-36790) Update user-facing catalog to adapt CatalogPlugin

2021-09-17 Thread PengLei (Jira)
PengLei created SPARK-36790:
---

 Summary: Update user-facing catalog to adapt CatalogPlugin
 Key: SPARK-36790
 URL: https://issues.apache.org/jira/browse/SPARK-36790
 Project: Spark
  Issue Type: Task
  Components: SQL
Affects Versions: 3.3.0
Reporter: PengLei
 Fix For: 3.3.0









[jira] [Created] (SPARK-36381) ALTER TABLE ADD/RENAME COLUMNS existence check does not respect case sensitivity for v2 commands

2021-08-02 Thread PengLei (Jira)
PengLei created SPARK-36381:
---

 Summary: ALTER TABLE ADD/RENAME COLUMNS existence check does not respect 
case sensitivity for v2 commands
 Key: SPARK-36381
 URL: https://issues.apache.org/jira/browse/SPARK-36381
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei


The existence check in ALTER TABLE ADD/RENAME COLUMNS does not respect the 
case-sensitivity setting for v2 commands.
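
An illustrative repro (the catalog and table names are assumptions; with the 
default case-insensitive resolution, the duplicate check should catch 
differently-cased names):
{code:java}
spark.sql("SET spark.sql.caseSensitive=false")
spark.sql("CREATE TABLE testcat.ns.t (id INT) USING foo") // assumed v2 catalog
spark.sql("ALTER TABLE testcat.ns.t ADD COLUMNS (ID INT)")
// Expected: AnalysisException, since `ID` already exists as `id` under
// case-insensitive resolution; the bug is a case-sensitive existence check.
{code}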






[jira] [Commented] (SPARK-36306) Refactor seventeenth set of 20 query execution errors to use error classes

2021-08-02 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17391362#comment-17391362
 ] 

PengLei commented on SPARK-36306:
-

working on this

> Refactor seventeenth set of 20 query execution errors to use error classes
> --
>
> Key: SPARK-36306
> URL: https://issues.apache.org/jira/browse/SPARK-36306
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the seventeenth set of 20.
> {code:java}
> legacyCheckpointDirectoryExistsError
> subprocessExitedError
> outputDataTypeUnsupportedByNodeWithoutSerdeError
> invalidStartIndexError
> concurrentModificationOnExternalAppendOnlyUnsafeRowArrayError
> doExecuteBroadcastNotImplementedError
> databaseNameConflictWithSystemPreservedDatabaseError
> commentOnTableUnsupportedError
> unsupportedUpdateColumnNullabilityError
> renameColumnUnsupportedForOlderMySQLError
> failedToExecuteQueryError
> nestedFieldUnsupportedError
> transformationsAndActionsNotInvokedByDriverError
> repeatedPivotsUnsupportedError
> pivotNotAfterGroupByUnsupportedError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Commented] (SPARK-36305) Refactor sixteenth set of 20 query execution errors to use error classes

2021-08-02 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17391361#comment-17391361
 ] 

PengLei commented on SPARK-36305:
-

working on this

> Refactor sixteenth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36305
> URL: https://issues.apache.org/jira/browse/SPARK-36305
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the sixteenth set of 20.
> {code:java}
> cannotDropMultiPartitionsOnNonatomicPartitionTableError
> truncateMultiPartitionUnsupportedError
> overwriteTableByUnsupportedExpressionError
> dynamicPartitionOverwriteUnsupportedByTableError
> failedMergingSchemaError
> cannotBroadcastTableOverMaxTableRowsError
> cannotBroadcastTableOverMaxTableBytesError
> notEnoughMemoryToBuildAndBroadcastTableError
> executeCodePathUnsupportedError
> cannotMergeClassWithOtherClassError
> continuousProcessingUnsupportedByDataSourceError
> failedToReadDataError
> failedToGenerateEpochMarkerError
> foreachWriterAbortedDueToTaskFailureError
> integerOverflowError
> failedToReadDeltaFileError
> failedToReadSnapshotFileError
> cannotPurgeAsBreakInternalStateError
> cleanUpSourceFilesUnsupportedError
> latestOffsetNotCalledError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Commented] (SPARK-36304) Refactor fifteenth set of 20 query execution errors to use error classes

2021-08-02 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17391360#comment-17391360
 ] 

PengLei commented on SPARK-36304:
-

Working on this.

> Refactor fifteenth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36304
> URL: https://issues.apache.org/jira/browse/SPARK-36304
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the fifteenth set of 20.
> {code:java}
> unsupportedOperationExceptionError
> nullLiteralsCannotBeCastedError
> notUserDefinedTypeError
> cannotLoadUserDefinedTypeError
> timeZoneIdNotSpecifiedForTimestampTypeError
> notPublicClassError
> primitiveTypesNotSupportedError
> fieldIndexOnRowWithoutSchemaError
> valueIsNullError
> onlySupportDataSourcesProvidingFileFormatError
> failToSetOriginalPermissionBackError
> failToSetOriginalACLBackError
> multiFailuresInStageMaterializationError
> unrecognizedCompressionSchemaTypeIDError
> getParentLoggerNotImplementedError
> cannotCreateParquetConverterForTypeError
> cannotCreateParquetConverterForDecimalTypeError
> cannotCreateParquetConverterForDataTypeError
> cannotAddMultiPartitionsOnNonatomicPartitionTableError
> userSpecifiedSchemaUnsupportedByDataSourceError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Commented] (SPARK-36336) Define the new exception that mixes SparkThrowable for all base exceptions in QueryExecutionErrors

2021-07-28 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17389236#comment-17389236
 ] 

PengLei commented on SPARK-36336:
-

I am working on this.

> Define the new exception that mixes SparkThrowable for all base exceptions in 
> QueryExecutionErrors
> -
>
> Key: SPARK-36336
> URL: https://issues.apache.org/jira/browse/SPARK-36336
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
>
> The Throwable should extend 
> [SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
>  see 
> [SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
>  as an example of how to mix SparkThrowable into a base Exception type.






[jira] [Commented] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-28 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17389235#comment-17389235
 ] 

PengLei commented on SPARK-36094:
-

[~karenfeng] Before refactoring all the query execution errors, I want to define 
all the base exceptions used by QueryExecutionErrors in a separate subtask. What 
do you think? [Define the new exception that mixes SparkThrowable for all base 
exceptions in QueryExecutionErrors|https://issues.apache.org/jira/browse/SPARK-36336]

> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with the SQL component first.
>  As a starting point, we can build off the exception grouping done in 
> SPARK-33539. In total, there are ~1000 error messages to group split across 
> three files (QueryCompilationErrors, QueryExecutionErrors, and 
> QueryParsingErrors). In this ticket, each of these files is split into chunks 
> of ~20 errors for refactoring.
> Here is an example PR that groups a few error messages in the 
> QueryCompilationErrors class: [PR 
> 33309|https://github.com/apache/spark/pull/33309].
> [Guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md]:
>  - Error classes should be unique and sorted in alphabetical order.
>  - Error classes should be unified as much as possible to improve auditing. 
> If error messages are similar, group them into a single error class and add 
> parameters to the error message.
>  - SQLSTATE should match the ANSI/ISO standard, without introducing new 
> classes or subclasses. See the error 
> [guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md];
>  if none of them match, the SQLSTATE field should be empty.
>  - The Throwable should extend 
> [SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
>  see 
> [SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
>  as an example of how to mix SparkThrowable into a base Exception type.
> We will improve error message quality as a follow-up.






[jira] [Updated] (SPARK-36336) Define the new exception that mixes SparkThrowable for all base exceptions in QueryExecutionErrors

2021-07-28 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36336:

Description: The Throwable should extend 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
 see 
[SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
 as an example of how to mix SparkThrowable into a base Exception type.
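
A minimal sketch of the pattern, modeled on SparkArithmeticException (the 
message-helper name is an assumption):
{code:java}
// Sketch: mix SparkThrowable into a familiar base exception type so callers
// keep catching the JVM type while error-class metadata rides along.
class SparkUnsupportedOperationException(
    errorClass: String,
    messageParameters: Array[String])
  extends UnsupportedOperationException(
    SparkThrowableHelper.getMessage(errorClass, messageParameters))
  with SparkThrowable {

  override def getErrorClass: String = errorClass
}
{code}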

> Define the new exception that mixes SparkThrowable for all base exceptions in 
> QueryExecutionErrors
> -
>
> Key: SPARK-36336
> URL: https://issues.apache.org/jira/browse/SPARK-36336
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
>
> The Throwable should extend 
> [SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
>  see 
> [SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
>  as an example of how to mix SparkThrowable into a base Exception type.






[jira] [Created] (SPARK-36336) Define the new exception that mixes SparkThrowable for all base exceptions in QueryExecutionErrors

2021-07-28 Thread PengLei (Jira)
PengLei created SPARK-36336:
---

 Summary: Define the new exception that mixes SparkThrowable for all 
base exceptions in QueryExecutionErrors
 Key: SPARK-36336
 URL: https://issues.apache.org/jira/browse/SPARK-36336
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei









[jira] [Commented] (SPARK-36295) Refactor sixth set of 20 query execution errors to use error classes

2021-07-27 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387979#comment-17387979
 ] 

PengLei commented on SPARK-36295:
-

Working on this.

> Refactor sixth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36295
> URL: https://issues.apache.org/jira/browse/SPARK-36295
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the sixth set of 20.
> {code:java}
> noRecordsFromEmptyDataReaderError
> fileNotFoundError
> unsupportedSchemaColumnConvertError
> cannotReadParquetFilesError
> cannotCreateColumnarReaderError
> invalidNamespaceNameError
> unsupportedPartitionTransformError
> missingDatabaseLocationError
> cannotRemoveReservedPropertyError
> namespaceNotEmptyError
> writingJobFailedError
> writingJobAbortedError
> commitDeniedError
> unsupportedTableWritesError
> cannotCreateJDBCTableWithPartitionsError
> unsupportedUserSpecifiedSchemaError
> writeUnsupportedForBinaryFileDataSourceError
> fileLengthExceedsMaxLengthError
> unsupportedFieldNameError
> cannotSpecifyBothJdbcTableNameAndQueryError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Commented] (SPARK-36294) Refactor fifth set of 20 query execution errors to use error classes

2021-07-27 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387978#comment-17387978
 ] 

PengLei commented on SPARK-36294:
-

working on this

> Refactor fifth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36294
> URL: https://issues.apache.org/jira/browse/SPARK-36294
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the fifth set of 20.
> {code:java}
> createStreamingSourceNotSpecifySchemaError
> streamedOperatorUnsupportedByDataSourceError
> multiplePathsSpecifiedError
> failedToFindDataSourceError
> removedClassInSpark2Error
> incompatibleDataSourceRegisterError
> unrecognizedFileFormatError
> sparkUpgradeInReadingDatesError
> sparkUpgradeInWritingDatesError
> buildReaderUnsupportedForFileFormatError
> jobAbortedError
> taskFailedWhileWritingRowsError
> readCurrentFileNotFoundError
> unsupportedSaveModeError
> cannotClearOutputDirectoryError
> cannotClearPartitionDirectoryError
> failedToCastValueToDataTypeForPartitionColumnError
> endOfStreamError
> fallbackV1RelationReportsInconsistentSchemaError
> cannotDropNonemptyNamespaceError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Issue Comment Deleted] (SPARK-36291) Refactor second set of 20 query execution errors to use error classes

2021-07-27 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36291:

Comment: was deleted

(was: working on this)

> Refactor second set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36291
> URL: https://issues.apache.org/jira/browse/SPARK-36291
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the second set of 20.
> {code:java}
> inputTypeUnsupportedError
> invalidFractionOfSecondError
> overflowInSumOfDecimalError
> overflowInIntegralDivideError
> mapSizeExceedArraySizeWhenZipMapError
> copyNullFieldNotAllowedError
> literalTypeUnsupportedError
> noDefaultForDataTypeError
> doGenCodeOfAliasShouldNotBeCalledError
> orderedOperationUnsupportedByDataTypeError
> regexGroupIndexLessThanZeroError
> regexGroupIndexExceedGroupCountError
> invalidUrlError
> dataTypeOperationUnsupportedError
> mergeUnsupportedByWindowFunctionError
> dataTypeUnexpectedError
> typeUnsupportedError
> negativeValueUnexpectedError
> addNewFunctionMismatchedWithFunctionError
> cannotGenerateCodeForUncomparableTypeError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Commented] (SPARK-36291) Refactor second set of 20 query execution errors to use error classes

2021-07-27 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387955#comment-17387955
 ] 

PengLei commented on SPARK-36291:
-

working on this

> Refactor second set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36291
> URL: https://issues.apache.org/jira/browse/SPARK-36291
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the second set of 20.
> {code:java}
> inputTypeUnsupportedError
> invalidFractionOfSecondError
> overflowInSumOfDecimalError
> overflowInIntegralDivideError
> mapSizeExceedArraySizeWhenZipMapError
> copyNullFieldNotAllowedError
> literalTypeUnsupportedError
> noDefaultForDataTypeError
> doGenCodeOfAliasShouldNotBeCalledError
> orderedOperationUnsupportedByDataTypeError
> regexGroupIndexLessThanZeroError
> regexGroupIndexExceedGroupCountError
> invalidUrlError
> dataTypeOperationUnsupportedError
> mergeUnsupportedByWindowFunctionError
> dataTypeUnexpectedError
> typeUnsupportedError
> negativeValueUnexpectedError
> addNewFunctionMismatchedWithFunctionError
> cannotGenerateCodeForUncomparableTypeError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Commented] (SPARK-36107) Refactor first set of 20 query execution errors to use error classes

2021-07-26 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387715#comment-17387715
 ] 

PengLei commented on SPARK-36107:
-

Working on this.

> Refactor first set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the first set of 20.
> {code:java}
> columnChangeUnsupportedError
> logicalHintOperatorNotRemovedDuringAnalysisError
> cannotEvaluateExpressionError
> cannotGenerateCodeForExpressionError
> cannotTerminateGeneratorError
> castingCauseOverflowError
> cannotChangeDecimalPrecisionError
> invalidInputSyntaxForNumericError
> cannotCastFromNullTypeError
> cannotCastError
> cannotParseDecimalError
> simpleStringWithNodeIdUnsupportedError
> evaluateUnevaluableAggregateUnsupportedError
> dataTypeUnsupportedError
> dataTypeUnsupportedError
> failedExecuteUserDefinedFunctionError
> divideByZeroError
> invalidArrayIndexError
> mapKeyNotExistError
> rowFromCSVParserNotExpectedError
> {code}
> For more detail, see the parent ticket SPARK-36094.
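
An illustrative before/after for one of these, {{divideByZeroError}} (a sketch; 
the error class name and the original message are assumptions):
{code:java}
// Before (assumed shape): a plain exception with an inline message.
def divideByZeroError(): ArithmeticException =
  new ArithmeticException("divide by zero")

// After (sketch): route through an error class so the message and SQLSTATE
// live in the shared error-classes JSON file.
def divideByZeroError(): ArithmeticException =
  new SparkArithmeticException(
    errorClass = "DIVIDE_BY_ZERO", messageParameters = Array.empty)
{code}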






[jira] [Created] (SPARK-36133) Keep the catalog name consistent with the namespace naming rule

2021-07-13 Thread PengLei (Jira)
PengLei created SPARK-36133:
---

 Summary: Keep the catalog name consistent with the namespace 
naming rule
 Key: SPARK-36133
 URL: https://issues.apache.org/jira/browse/SPARK-36133
 Project: Spark
  Issue Type: Wish
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei









[jira] [Updated] (SPARK-36012) Lost the null flag info when show create table

2021-07-06 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36012:

Summary: Lost the null flag info when show create table  (was: Lost the 
null flag info when show create table in v2)

> Lost the null flag info when show create table
> --
>
> Key: SPARK-36012
> URL: https://issues.apache.org/jira/browse/SPARK-36012
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.2.0
>
>
> When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.
> {code:java}
> // def toDDL: String = s"${quoteIdentifier(name)} 
> ${dataType.sql}$getDDLComment"
> {code}
>  
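
A plausible fix, sketched (not necessarily the merged patch): derive a 
{{NOT NULL}} clause from the field's nullability.
{code:java}
// Sketch: include the null flag when rendering a column's DDL.
def toDDL: String = {
  val nullClause = if (nullable) "" else " NOT NULL"
  s"${quoteIdentifier(name)} ${dataType.sql}$nullClause$getDDLComment"
}
{code}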






[jira] [Updated] (SPARK-36012) Lost the null flag info when show create table in v2

2021-07-05 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36012:

Summary: Lost the null flag info when show create table in v2  (was: Lost 
the null flag info when show create table in both v1 and v2)

> Lost the null flag info when show create table in v2
> 
>
> Key: SPARK-36012
> URL: https://issues.apache.org/jira/browse/SPARK-36012
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.2.0
>
>
> When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.
> {code:java}
> // def toDDL: String = s"${quoteIdentifier(name)} 
> ${dataType.sql}$getDDLComment"
> {code}
>  






[jira] [Commented] (SPARK-36012) Lost the null flag info when show create table in both v1 and v2

2021-07-04 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17374538#comment-17374538
 ] 

PengLei commented on SPARK-36012:
-

working on this

> Lost the null flag info when show create table in both v1 and v2
> 
>
> Key: SPARK-36012
> URL: https://issues.apache.org/jira/browse/SPARK-36012
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.2.0
>
>
> When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.
> {code:java}
> // def toDDL: String = s"${quoteIdentifier(name)} 
> ${dataType.sql}$getDDLComment"
> {code}
>  






[jira] [Updated] (SPARK-36012) Lost the null flag info when show create table in both v1 and v2

2021-07-04 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36012:

Description: 
When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.
{code:java}
// def toDDL: String = s"${quoteIdentifier(name)} ${dataType.sql}$getDDLComment"
{code}
 

  was:
When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.

```scala

def toDDL: String = s"${quoteIdentifier(name)} ${dataType.sql}$getDDLComment"

```


> Lost the null flag info when show create table in both v1 and v2
> 
>
> Key: SPARK-36012
> URL: https://issues.apache.org/jira/browse/SPARK-36012
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.2.0
>
>
> When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.
> {code:java}
> // def toDDL: String = s"${quoteIdentifier(name)} 
> ${dataType.sql}$getDDLComment"
> {code}
>  






[jira] [Created] (SPARK-36012) Lost the null flag info when show create table in both v1 and v2

2021-07-04 Thread PengLei (Jira)
PengLei created SPARK-36012:
---

 Summary: Lost the null flag info when show create table in both v1 
and v2
 Key: SPARK-36012
 URL: https://issues.apache.org/jira/browse/SPARK-36012
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei
 Fix For: 3.2.0


When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.

```

def toDDL: String = s"${quoteIdentifier(name)} ${dataType.sql}$getDDLComment"

```






[jira] [Updated] (SPARK-36012) Lost the null flag info when show create table in both v1 and v2

2021-07-04 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-36012:

Description: 
When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.

```scala

def toDDL: String = s"${quoteIdentifier(name)} ${dataType.sql}$getDDLComment"

```

  was:
When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.

```

def toDDL: String = s"${quoteIdentifier(name)} ${dataType.sql}$getDDLComment"

```


> Lost the null flag info when show create table in both v1 and v2
> 
>
> Key: SPARK-36012
> URL: https://issues.apache.org/jira/browse/SPARK-36012
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.2.0
>
>
> When executing the `SHOW CREATE TABLE XXX` command, the DDL info loses the null flag.
> ```scala
> def toDDL: String = s"${quoteIdentifier(name)} ${dataType.sql}$getDDLComment"
> ```






[jira] [Updated] (SPARK-35973) DataSourceV2: Support SHOW CATALOGS

2021-07-01 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-35973:

Description: Datasource V2 can support multiple catalogs. Having "SHOW 
CATALOGS" to list the catalogs and corresponding default-namespace info will be 
useful.  (was: Datasource V2 can support multiple catalogs. Having "SHOW 
CATALOGS" to list the catalogs/default-namespace info will be useful.)

> DataSourceV2: Support SHOW CATALOGS
> ---
>
> Key: SPARK-35973
> URL: https://issues.apache.org/jira/browse/SPARK-35973
> Project: Spark
>  Issue Type: Task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
>
> Datasource V2 can support multiple catalogs. Having "SHOW CATALOGS" to list 
> the catalogs and corresponding default-namespace info will be useful.






[jira] [Commented] (SPARK-35973) DataSourceV2: Support SHOW CATALOGS

2021-07-01 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372797#comment-17372797
 ] 

PengLei commented on SPARK-35973:
-

I am working on this after 3.2 is released.

> DataSourceV2: Support SHOW CATALOGS
> ---
>
> Key: SPARK-35973
> URL: https://issues.apache.org/jira/browse/SPARK-35973
> Project: Spark
>  Issue Type: Task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
>
> Datasource V2 can support multiple catalogs. Having "SHOW CATALOGS" to list 
> the catalogs/default-namespace info will be useful.






[jira] [Created] (SPARK-35973) DataSourceV2: Support SHOW CATALOGS

2021-07-01 Thread PengLei (Jira)
PengLei created SPARK-35973:
---

 Summary: DataSourceV2: Support SHOW CATALOGS
 Key: SPARK-35973
 URL: https://issues.apache.org/jira/browse/SPARK-35973
 Project: Spark
  Issue Type: Task
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei


Datasource V2 can support multiple catalogs. Having "SHOW CATALOGS" to list the 
catalogs/default-namespace info will be useful.
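
Illustrative usage (the output shape is an assumption):
{code:java}
spark.sql("SHOW CATALOGS").show()
// +-------------+
// |      catalog|
// +-------------+
// |spark_catalog|
// |       my_cat|
// +-------------+
{code}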






[jira] [Updated] (SPARK-35925) Support DayTimeIntervalType in width-bucket function

2021-06-29 Thread PengLei (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengLei updated SPARK-35925:

Fix Version/s: (was: 3.2.0)

> Support DayTimeIntervalType in width-bucket function
> 
>
> Key: SPARK-35925
> URL: https://issues.apache.org/jira/browse/SPARK-35925
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
>
> Currently, width_bucket supports the types [DoubleType, DoubleType, DoubleType, 
> LongType];
> we hope to support [DayTimeIntervalType, DayTimeIntervalType, 
> DayTimeIntervalType, LongType].






[jira] [Commented] (SPARK-35926) Support YearMonthIntervalType in width-bucket function

2021-06-29 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371756#comment-17371756
 ] 

PengLei commented on SPARK-35926:
-

[~code_kr_dev_s] Sorry, I will do it after the 3.2 release.

> Support YearMonthIntervalType in width-bucket function
> --
>
> Key: SPARK-35926
> URL: https://issues.apache.org/jira/browse/SPARK-35926
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
>
> Currently, the width-bucket function supports the signature [DoubleType, 
> DoubleType, DoubleType, LongType];
> we hope to also support [YearMonthIntervalType, YearMonthIntervalType, 
> YearMonthIntervalType, LongType].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35926) Support YearMonthIntervalType in width-bucket function

2021-06-29 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371167#comment-17371167
 ] 

PengLei commented on SPARK-35926:
-

I'm going to do this.

> Support YearMonthIntervalType in width-bucket function
> --
>
> Key: SPARK-35926
> URL: https://issues.apache.org/jira/browse/SPARK-35926
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
>
> Currently, the width-bucket function supports the signature [DoubleType, 
> DoubleType, DoubleType, LongType];
> we hope to also support [YearMonthIntervalType, YearMonthIntervalType, 
> YearMonthIntervalType, LongType].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-35926) Support YearMonthIntervalType in width-bucket function

2021-06-29 Thread PengLei (Jira)
PengLei created SPARK-35926:
---

 Summary: Support YearMonthIntervalType in width-bucket function
 Key: SPARK-35926
 URL: https://issues.apache.org/jira/browse/SPARK-35926
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei


Currently, the width-bucket function supports the signature [DoubleType, 
DoubleType, DoubleType, LongType]; we hope to also support 
[YearMonthIntervalType, YearMonthIntervalType, YearMonthIntervalType, 
LongType].
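
A minimal sketch of the hoped-for usage, assuming the usual equal-width 
bucketing semantics of width-bucket carry over to intervals (values and output 
are illustrative assumptions):

{code:java}
// hoped-for usage sketch, assuming a SparkSession named `spark`
// (year-month intervals were not yet accepted by width_bucket when this was filed)
spark.sql(
  """SELECT width_bucket(
    |  INTERVAL '1-6' YEAR TO MONTH,  -- value to bucket: 18 months
    |  INTERVAL '0' YEAR,             -- lower bound:      0 months
    |  INTERVAL '3' YEAR,             -- upper bound:     36 months
    |  6)                             -- number of buckets
    |""".stripMargin).show()
// each bucket spans 6 months, so 18 months lands in bucket floor(18 / 6) + 1 = 4
{code}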



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35925) Support DayTimeIntervalType in width-bucket function

2021-06-29 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371163#comment-17371163
 ] 

PengLei commented on SPARK-35925:
-

[~maxgekk]  Do you think it's necessary? I'm going to do this.

> Support DayTimeIntervalType in width-bucket function
> 
>
> Key: SPARK-35925
> URL: https://issues.apache.org/jira/browse/SPARK-35925
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.2.0
>
>
> Currently, the width-bucket function supports the signature [DoubleType, 
> DoubleType, DoubleType, LongType];
> we hope to also support [DayTimeIntervalType, DayTimeIntervalType, 
> DayTimeIntervalType, LongType].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-35925) Support DayTimeIntervalType in width-bucket function

2021-06-29 Thread PengLei (Jira)
PengLei created SPARK-35925:
---

 Summary: Support DayTimeIntervalType in width-bucket function
 Key: SPARK-35925
 URL: https://issues.apache.org/jira/browse/SPARK-35925
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei
 Fix For: 3.2.0


Currently, the width-bucket function supports the signature [DoubleType, 
DoubleType, DoubleType, LongType]; we hope to also support 
[DayTimeIntervalType, DayTimeIntervalType, DayTimeIntervalType, LongType].
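
A minimal sketch of the hoped-for usage, assuming the usual equal-width 
bucketing semantics of width-bucket carry over to intervals (values and output 
are illustrative assumptions):

{code:java}
// hoped-for usage sketch, assuming a SparkSession named `spark`
// (day-time intervals were not yet accepted by width_bucket when this was filed)
spark.sql(
  """SELECT width_bucket(
    |  INTERVAL '15' DAY,  -- value to bucket
    |  INTERVAL '0' DAY,   -- lower bound
    |  INTERVAL '30' DAY,  -- upper bound
    |  10)                 -- number of buckets
    |""".stripMargin).show()
// each bucket spans 3 days, so 15 days lands in bucket floor(15 / 3) + 1 = 6
{code}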



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35778) Check multiply/divide of year-month intervals of any fields by numeric

2021-06-23 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368636#comment-17368636
 ] 

PengLei commented on SPARK-35778:
-

[~angerszhuuu] May I take over and finish this issue?

> Check multiply/divide of year-month intervals of any fields by numeric
> --
>
> Key: SPARK-35778
> URL: https://issues.apache.org/jira/browse/SPARK-35778
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Write tests that check multiplication/division of the following intervals by a 
> numeric (a sketch follows after this list):
> # INTERVAL YEAR
> # INTERVAL YEAR TO MONTH
> # INTERVAL MONTH
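
A minimal sketch of the kind of checks meant here, assuming a SparkSession 
named `spark` (the chosen values and the test harness are assumptions):

{code:java}
// month-based arithmetic the tests would pin down (rendering of results may differ)
spark.sql(
  """SELECT
    |  INTERVAL '2' YEAR * 1.5,         -- 24 months * 1.5 = 36 months (3 years)
    |  INTERVAL '1-6' YEAR TO MONTH / 2 -- 18 months / 2   =  9 months (0-9)
    |""".stripMargin).show(truncate = false)
{code}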



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35852) Improve the implementation for DateType +/- DayTimeIntervalType(DAY)

2021-06-21 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17366990#comment-17366990
 ] 

PengLei commented on SPARK-35852:
-

I am working on this

> Improve the implementation for DateType +/- DayTimeIntervalType(DAY)
> 
>
> Key: SPARK-35852
> URL: https://issues.apache.org/jira/browse/SPARK-35852
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: PengLei
>Priority: Major
> Fix For: 3.2.0
>
>
> Currently, `DateType +/- DayTimeIntervalType` converts the DateType operand to 
> TimestampType and then applies TimeAdd. When the interval type is 
> DayTimeIntervalType(DAY), DateAdd can be used instead of TimeAdd.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-35852) Improve the implementation for DateType +/- DayTimeIntervalType(DAY)

2021-06-21 Thread PengLei (Jira)
PengLei created SPARK-35852:
---

 Summary: Improve the implementation for DateType +/- 
DayTimeIntervalType(DAY)
 Key: SPARK-35852
 URL: https://issues.apache.org/jira/browse/SPARK-35852
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.2.0
Reporter: PengLei
 Fix For: 3.2.0


Currently, `DateType +/- DayTimeIntervalType` converts the DateType operand to 
TimestampType and then applies TimeAdd. When the interval type is 
DayTimeIntervalType(DAY), DateAdd can be used instead of TimeAdd.
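
To illustrate the point with a concrete query (the plan shapes in the comments 
paraphrase the description above; they are not verbatim optimizer output):

{code:java}
// assuming a SparkSession named `spark`
spark.sql("SELECT DATE'2021-06-21' + INTERVAL '1' DAY").show()
// current:  Cast(date AS timestamp) -> TimeAdd -> Cast back to date
// proposed: DateAdd(date, 1) when the interval is DAY-only, avoiding the
//           timestamp round trip entirely
{code}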



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35778) Check multiply/divide of year-month intervals of any fields by numeric

2021-06-17 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364749#comment-17364749
 ] 

PengLei commented on SPARK-35778:
-

working on this

> Check multiply/divide of year-month intervals of any fields by numeric
> --
>
> Key: SPARK-35778
> URL: https://issues.apache.org/jira/browse/SPARK-35778
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Write tests that check multiplication/division of the following intervals by a 
> numeric:
> # INTERVAL YEAR
> # INTERVAL YEAR TO MONTH
> # INTERVAL MONTH



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35728) Check multiply/divide of day-time intervals of any fields by numeric

2021-06-17 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364710#comment-17364710
 ] 

PengLei commented on SPARK-35728:
-

working on this

> Check multiply/divide of day-time intervals of any fields by numeric
> 
>
> Key: SPARK-35728
> URL: https://issues.apache.org/jira/browse/SPARK-35728
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Write tests that check multiplication/division of the following intervals by a 
> numeric (a sketch follows after this list):
> # INTERVAL DAY
> # INTERVAL DAY TO HOUR
> # INTERVAL DAY TO MINUTE
> # INTERVAL HOUR
> # INTERVAL HOUR TO MINUTE
> # INTERVAL HOUR TO SECOND
> # INTERVAL MINUTE
> # INTERVAL MINUTE TO SECOND
> # INTERVAL SECOND
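
A minimal sketch of the kind of checks meant here, assuming a SparkSession 
named `spark` (the chosen values and the test harness are assumptions):

{code:java}
// microsecond-based arithmetic the tests would pin down (rendering of results may differ)
spark.sql(
  """SELECT
    |  INTERVAL '1' DAY * 2,               -- 2 days
    |  INTERVAL '10:30' HOUR TO MINUTE / 2 -- 630 minutes / 2 = 315 minutes (5:15)
    |""".stripMargin).show(truncate = false)
{code}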



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35732) Parse DayTimeIntervalType from JSON

2021-06-16 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364196#comment-17364196
 ] 

PengLei commented on SPARK-35732:
-

[~angerszhu] That’s all right.

> Parse DayTimeIntervalType from JSON
> ---
>
> Key: SPARK-35732
> URL: https://issues.apache.org/jira/browse/SPARK-35732
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Allow (de-)serialization of the DayTimeIntervalType class to JSON. See 
> DataType.parseDataType.
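
A sketch of the desired round trip (the rendered JSON string in the comment is 
an assumption about the eventual format):

{code:java}
// desired (de-)serialization round trip for the new type
import org.apache.spark.sql.types._

val dt = DayTimeIntervalType(DayTimeIntervalType.DAY, DayTimeIntervalType.SECOND)
val json = dt.json                     // e.g. "\"interval day to second\"" (assumed rendering)
assert(DataType.fromJson(json) == dt)  // the parse direction is what this issue asks to support
{code}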



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35732) Parse DayTimeIntervalType from JSON

2021-06-16 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364162#comment-17364162
 ] 

PengLei commented on SPARK-35732:
-

working on this

> Parse DayTimeIntervalType from JSON
> ---
>
> Key: SPARK-35732
> URL: https://issues.apache.org/jira/browse/SPARK-35732
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Allow (de-)serialization of the DayTimeIntervalType class to JSON. See 
> DataType.parseDataType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35727) Return INTERVAL DAY from dates subtraction

2021-06-16 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17364159#comment-17364159
 ] 

PengLei commented on SPARK-35727:
-

working on this

> Return INTERVAL DAY from dates subtraction
> --
>
> Key: SPARK-35727
> URL: https://issues.apache.org/jira/browse/SPARK-35727
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Type of dates subtraction should be INTERVAL DAY (DayTimeIntervalType(DAY, 
> DAY)).
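
For illustration, assuming a SparkSession named `spark` (the rendered result 
is an assumption based on the description above):

{code:java}
spark.sql("SELECT DATE'2021-07-01' - DATE'2021-06-16'").show()
// expected: INTERVAL '15' DAY, typed as DayTimeIntervalType(DAY, DAY)
{code}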



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-33898) Support SHOW CREATE TABLE in v2

2021-05-29 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-33898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353688#comment-17353688
 ] 

PengLei commented on SPARK-33898:
-

[~Qin Yao] sure

> Support SHOW CREATE TABLE in v2
> ---
>
> Key: SPARK-33898
> URL: https://issues.apache.org/jira/browse/SPARK-33898
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Kent Yao
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35243) Support columnar execution on ANSI interval types

2021-04-27 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17334393#comment-17334393
 ] 

PengLei commented on SPARK-35243:
-

Working on this

> Support columnar execution on ANSI interval types
> -
>
> Key: SPARK-35243
> URL: https://issues.apache.org/jira/browse/SPARK-35243
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> See SPARK-30066 as a reference implementation for CalendarIntervalType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-34878) Test actual size of year-month and day-time intervals

2021-04-26 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332067#comment-17332067
 ] 

PengLei commented on SPARK-34878:
-

[~maxgekk] Can I do it?

> Test actual size of year-month and day-time intervals
> -
>
> Key: SPARK-34878
> URL: https://issues.apache.org/jira/browse/SPARK-34878
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Add tests to 
> https://github.com/apache/spark/blob/3a299aa6480ac22501512cd0310d31a441d7dfdc/sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/ColumnTypeSuite.scala#L52



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35139) Support ANSI intervals as Arrow Column vectors

2021-04-24 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331378#comment-17331378
 ] 

PengLei commented on SPARK-35139:
-

I am working on this

> Support ANSI intervals as Arrow Column vectors
> --
>
> Key: SPARK-35139
> URL: https://issues.apache.org/jira/browse/SPARK-35139
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Max Gekk
>Priority: Major
>
> Extend ArrowColumnVector, and support YearMonthIntervalType and 
> DayTimeIntervalType.
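
For context, a minimal sketch of the Arrow Java vector that backs the 
year-month case (Spark's internal accessor wrappers are not shown; any wrapper 
names would be assumptions):

{code:java}
// Arrow's IntervalYearVector stores Int months, matching how Spark
// represents YearMonthIntervalType internally
import org.apache.arrow.memory.RootAllocator
import org.apache.arrow.vector.IntervalYearVector

val allocator = new RootAllocator(Long.MaxValue)
val vec = new IntervalYearVector("ym", allocator)
vec.allocateNew(2)
vec.setSafe(0, 14)        // 14 months = 1 year 2 months
vec.setSafe(1, 3)         // 3 months
vec.setValueCount(2)
assert(vec.get(0) == 14)  // reads back total months
vec.close()
allocator.close()
{code}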



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35094) Spark from_json(JsonToStruct) function return wrong value in permissive mode in case best effort

2021-04-20 Thread PengLei (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325655#comment-17325655
 ] 

PengLei commented on SPARK-35094:
-

I tried to locate the root cause and found that the error is related to a 
modification introduced in [https://github.com/apache/spark/pull/30032].

In [SPARK-33134|https://issues.apache.org/jira/browse/SPARK-33134], in order to 
avoid raising an exception when parsing a bad nested JSON field, an isRoot 
check was added. As a result, when isRoot == false, the parser neither throws 
an exception nor skips the children:

 
{code:java}
// excerpt; the "..." elides the surrounding field-matching cases
while (nextUntil(parser, JsonToken.END_OBJECT)) {
  ...
  case NonFatal(e) if isRoot =>
    badRecordException = badRecordException.orElse(Some(e))
    parser.skipChildren()
}
{code}
I think skipChildren is also needed when isRoot == false, to ensure the 
correctness of the data.
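
A sketch of the change this suggests, in the shape of the excerpt above (an 
assumption based on this analysis, not the final patch):

{code:java}
// sketch, not the final patch: skip the damaged field's children even when we
// are not at the root, so a bad nested field cannot shift later same-named
// fields into the wrong columns
case NonFatal(e) =>
  if (isRoot) {
    badRecordException = badRecordException.orElse(Some(e))
  }
  parser.skipChildren()
{code}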

 
[~hryhoriev.nick] cc @Max Gekk Please review the PR 
[https://github.com/apache/spark/pull/32252]

Thanks

> Spark from_json(JsonToStruct)  function return wrong value in permissive mode 
> in case best effort
> -
>
> Key: SPARK-35094
> URL: https://issues.apache.org/jira/browse/SPARK-35094
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, SQL
>Affects Versions: 3.0.2, 3.1.1
>Reporter: Nick Hryhoriev
>Priority: Critical
>
> I use Spark 3.1.1 and 3.0.2.
>  The `from_json` function returns a wrong result in PERMISSIVE mode.
>  The corner case:
>  1. The JSON message has a complex nested structure:
>  \{sameNameField(damaged), nestedVal:{badSchemaNestedVal, 
> sameNameField_WhichValueWillAppearInwrongPlace}}
>  2. The nested-nested field's schema does not fully align with the value in 
> the JSON.
> scala code to reproduce:
> {code:java}
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.functions.from_json
> import org.apache.spark.sql.types.IntegerType
> import org.apache.spark.sql.types.StringType
> import org.apache.spark.sql.types.StructField
> import org.apache.spark.sql.types.StructType
> object Main {
>   def main(args: Array[String]): Unit = {
> implicit val spark: SparkSession = 
> SparkSession.builder().master("local[*]").getOrCreate()
> import spark.implicits._
> val schemaForFieldWhichWillHaveWrongValue = 
> StructField("problematicName", StringType, nullable = true)
> val nestedFieldWhichNotSatisfyJsonMessage = StructField(
>   "badNestedField",
>   StructType(Seq(StructField("SomethingWhichNotInJsonMessage", 
> IntegerType, nullable = true)))
> )
> val nestedFieldWithNestedFieldWhichNotSatisfyJsonMessage =
>   StructField(
> "nestedField",
> StructType(Seq(nestedFieldWhichNotSatisfyJsonMessage, 
> schemaForFieldWhichWillHaveWrongValue))
>   )
> val customSchema = StructType(Seq(
>   schemaForFieldWhichWillHaveWrongValue,
>   nestedFieldWithNestedFieldWhichNotSatisfyJsonMessage
> ))
> val jsonStringToTest =
>   
> """{"problematicName":"ThisValueWillBeOverwritten","nestedField":{"badNestedField":"14","problematicName":"thisValueInTwoPlaces"}}"""
> val df = List(jsonStringToTest)
>   .toDF("json")
>   // the issue happens only in PERMISSIVE mode during best-effort parsing
>   .select(from_json($"json", customSchema).as("toBeFlatten"))
>   .select("toBeFlatten.*")
> df.show(truncate = false)
> assert(
>   df.select("problematicName").as[String].first() == 
> "ThisValueWillBeOverwritten",
>   "wrong value in root schema, parser take value from column with same 
> name but in another nested elvel"
> )
>   }
> }
> {code}
> I was not able to debug this issue far enough to find the exact root cause.
>  But what I found while debugging is that in 
> `org.apache.spark.sql.catalyst.util.FailureSafeParser`, at line 64, the code 
> block `e.partialResult()` already holds a wrong value.
> I hope this will help to fix the issue.
> As a DIRTY HACK to work around the issue,
>  I forked this function and hardcoded `None` -> `Iterator(toResultRow(None, 
> e.record))`.
>  In my case, it is better to have no values in the row than to theoretically 
> have a wrong value in some column.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org