[jira] [Commented] (SPARK-33106) Fix sbt resolvers clash

2021-01-16 Thread Alexander Bessonov (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-33106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266706#comment-17266706
 ] 

Alexander Bessonov commented on SPARK-33106:


My bad. I had the following in my environment, which caused the issue. The 
build works fine without it.
{code:java}
SBT_OPTS="-Dsbt.override.build.repos=true"
{code}
 

> Fix sbt resolvers clash
> ---
>
> Key: SPARK-33106
> URL: https://issues.apache.org/jira/browse/SPARK-33106
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.1.0
>Reporter: Denis Pyshev
>Assignee: Denis Pyshev
>Priority: Minor
> Fix For: 3.1.0
>
>
> During the sbt upgrade from 0.13 to 1.x, the exact resolvers list was reused 
> as-is. That leads to a name clash between local resolvers, which is observed 
> as a warning from sbt:
> {code:java}
> [warn] Multiple resolvers having different access mechanism configured with 
> same name 'local'. To avoid conflict, Remove duplicate project resolvers 
> (`resolvers`) or rename publishing resolver (`publishTo`).
> {code}
> This needs to be fixed to avoid potential errors and reduce log noise.
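A typical remedy for this class of warning, sketched here as an assumption rather than the actual patch, is to give the publishing resolver a name other than `local` in the sbt build so it cannot collide with sbt's built-in local Ivy resolver (the resolver name and path below are hypothetical):

```scala
// build.sbt sketch (hypothetical name and path; the actual fix in the
// Spark build may differ). Renaming the publishTo resolver removes the
// clash with sbt's built-in "local" resolver.
publishTo := Some(
  Resolver.file("local-publish", file("target/local-publish"))
)
```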



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-34142) Support Fallback Storage Cleanup during stopping SparkContext

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266704#comment-17266704
 ] 

Apache Spark commented on SPARK-34142:
--

User 'dongjoon-hyun' has created a pull request for this issue:
https://github.com/apache/spark/pull/31215

> Support Fallback Storage Cleanup during stopping SparkContext
> -
>
> Key: SPARK-34142
> URL: https://issues.apache.org/jira/browse/SPARK-34142
> Project: Spark
>  Issue Type: New Feature
>  Components: Spark Core
>Affects Versions: 3.2.0
>Reporter: Dongjoon Hyun
>Priority: Major
>
> SPARK-33545 added `Support Fallback Storage during worker decommission` for 
> managed cloud storage with TTL support. This issue aims to add a clean-up 
> step while stopping SparkContext, both to save costs before the TTL expires 
> and to support HDFS-compatible storage that lacks TTL support.






[jira] [Assigned] (SPARK-34142) Support Fallback Storage Cleanup during stopping SparkContext

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34142:


Assignee: Apache Spark

> Support Fallback Storage Cleanup during stopping SparkContext
> -
>
> Key: SPARK-34142
> URL: https://issues.apache.org/jira/browse/SPARK-34142
> Project: Spark
>  Issue Type: New Feature
>  Components: Spark Core
>Affects Versions: 3.2.0
>Reporter: Dongjoon Hyun
>Assignee: Apache Spark
>Priority: Major
>
> SPARK-33545 added `Support Fallback Storage during worker decommission` for 
> managed cloud storage with TTL support. This issue aims to add a clean-up 
> step while stopping SparkContext, both to save costs before the TTL expires 
> and to support HDFS-compatible storage that lacks TTL support.






[jira] [Commented] (SPARK-34142) Support Fallback Storage Cleanup during stopping SparkContext

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266703#comment-17266703
 ] 

Apache Spark commented on SPARK-34142:
--

User 'dongjoon-hyun' has created a pull request for this issue:
https://github.com/apache/spark/pull/31215

> Support Fallback Storage Cleanup during stopping SparkContext
> -
>
> Key: SPARK-34142
> URL: https://issues.apache.org/jira/browse/SPARK-34142
> Project: Spark
>  Issue Type: New Feature
>  Components: Spark Core
>Affects Versions: 3.2.0
>Reporter: Dongjoon Hyun
>Priority: Major
>
> SPARK-33545 added `Support Fallback Storage during worker decommission` for 
> managed cloud storage with TTL support. This issue aims to add a clean-up 
> step while stopping SparkContext, both to save costs before the TTL expires 
> and to support HDFS-compatible storage that lacks TTL support.






[jira] [Assigned] (SPARK-34142) Support Fallback Storage Cleanup during stopping SparkContext

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34142:


Assignee: (was: Apache Spark)

> Support Fallback Storage Cleanup during stopping SparkContext
> -
>
> Key: SPARK-34142
> URL: https://issues.apache.org/jira/browse/SPARK-34142
> Project: Spark
>  Issue Type: New Feature
>  Components: Spark Core
>Affects Versions: 3.2.0
>Reporter: Dongjoon Hyun
>Priority: Major
>
> SPARK-33545 added `Support Fallback Storage during worker decommission` for 
> managed cloud storage with TTL support. This issue aims to add a clean-up 
> step while stopping SparkContext, both to save costs before the TTL expires 
> and to support HDFS-compatible storage that lacks TTL support.






[jira] [Created] (SPARK-34142) Support Fallback Storage Cleanup during stopping SparkContext

2021-01-16 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-34142:
-

 Summary: Support Fallback Storage Cleanup during stopping 
SparkContext
 Key: SPARK-34142
 URL: https://issues.apache.org/jira/browse/SPARK-34142
 Project: Spark
  Issue Type: New Feature
  Components: Spark Core
Affects Versions: 3.2.0
Reporter: Dongjoon Hyun


SPARK-33545 added `Support Fallback Storage during worker decommission` for 
managed cloud storage with TTL support. This issue aims to add a clean-up step 
while stopping SparkContext, both to save costs before the TTL expires and to 
support HDFS-compatible storage that lacks TTL support.






[jira] [Commented] (SPARK-34111) Deconflict the jars jakarta.servlet-api-4.0.3.jar and javax.servlet-api-3.1.0.jar

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266702#comment-17266702
 ] 

Apache Spark commented on SPARK-34111:
--

User 'yaooqinn' has created a pull request for this issue:
https://github.com/apache/spark/pull/31214

> Deconflict the jars jakarta.servlet-api-4.0.3.jar and 
> javax.servlet-api-3.1.0.jar
> -
>
> Key: SPARK-34111
> URL: https://issues.apache.org/jira/browse/SPARK-34111
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.1.0
>Reporter: Hyukjin Kwon
>Priority: Blocker
>
> After SPARK-33705, we now have two servlet API jars in the release artifact 
> with Hadoop 3:
> {{dev/deps/spark-deps-hadoop-3.2-hive-2.3}}:
> {code}
> ...
> jakarta.servlet-api/4.0.3//jakarta.servlet-api-4.0.3.jar
> ...
> javax.servlet-api/3.1.0//javax.servlet-api-3.1.0.jar
> ...
> {code}
> This can cause conflicts, so we had better remove 
> {{javax.servlet-api-3.1.0.jar}}, which is apparently only required for YARN 
> tests.
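As an illustration (not part of the issue itself), duplicate servlet API jars like these can be flagged mechanically; the sketch below assumes the `name/version//file.jar` line format of the deps manifest quoted above, with sample entries rather than the real file:

```scala
// Sketch: group manifest lines by a normalized artifact name so the
// jakarta.* and javax.* flavors of the same API land in one bucket.
val deps = Seq(
  "jakarta.servlet-api/4.0.3//jakarta.servlet-api-4.0.3.jar",
  "javax.servlet-api/3.1.0//javax.servlet-api-3.1.0.jar",
  "slf4j-api/1.7.30//slf4j-api-1.7.30.jar"
)

// Take the leading artifact name and drop vendor namespace prefixes.
def key(line: String): String =
  line.takeWhile(_ != '/').stripPrefix("jakarta.").stripPrefix("javax.")

// Any bucket with more than one entry is a potential clash.
val clashes = deps.groupBy(key).collect { case (k, v) if v.size > 1 => k }.toList
// clashes contains "servlet-api"
```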






[jira] [Commented] (SPARK-34111) Deconflict the jars jakarta.servlet-api-4.0.3.jar and javax.servlet-api-3.1.0.jar

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266701#comment-17266701
 ] 

Apache Spark commented on SPARK-34111:
--

User 'yaooqinn' has created a pull request for this issue:
https://github.com/apache/spark/pull/31214

> Deconflict the jars jakarta.servlet-api-4.0.3.jar and 
> javax.servlet-api-3.1.0.jar
> -
>
> Key: SPARK-34111
> URL: https://issues.apache.org/jira/browse/SPARK-34111
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.1.0
>Reporter: Hyukjin Kwon
>Priority: Blocker
>
> After SPARK-33705, we now have two servlet API jars in the release artifact 
> with Hadoop 3:
> {{dev/deps/spark-deps-hadoop-3.2-hive-2.3}}:
> {code}
> ...
> jakarta.servlet-api/4.0.3//jakarta.servlet-api-4.0.3.jar
> ...
> javax.servlet-api/3.1.0//javax.servlet-api-3.1.0.jar
> ...
> {code}
> This can cause conflicts, so we had better remove 
> {{javax.servlet-api-3.1.0.jar}}, which is apparently only required for YARN 
> tests.






[jira] [Commented] (SPARK-34141) ExtractGenerator analyzer should handle lazy projectlists

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266683#comment-17266683
 ] 

Apache Spark commented on SPARK-34141:
--

User 'tanelk' has created a pull request for this issue:
https://github.com/apache/spark/pull/31213

> ExtractGenerator analyzer should handle lazy projectlists
> -
>
> Key: SPARK-34141
> URL: https://issues.apache.org/jira/browse/SPARK-34141
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Tanel Kiis
>Priority: Major
>
> With the DataFrame API it is possible for a LogicalPlan's output field to be 
> a lazy sequence. When a column of such a DataFrame is exploded using the 
> withColumn method, the ExtractGenerator rule does not extract the generator.
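A minimal, self-contained illustration of the underlying hazard (an assumed mechanism, not the actual analyzer code): with a lazy sequence the body of `map` is deferred, so code that expects transformations over the output list to have already run observes nothing until the sequence is forced:

```scala
// Strict List.map runs its body immediately; a lazy view defers it.
var evaluated = 0

val strictOut = List(1, 2, 3).map { x => evaluated += 1; x * 2 }
val afterStrict = evaluated          // the body ran three times already

val lazyOut = List(1, 2, 3).view.map { x => evaluated += 1; x * 2 }
val afterLazy = evaluated            // unchanged: nothing evaluated yet

val forced = lazyOut.toList          // forcing the view runs the body
val afterForce = evaluated
```

A rule that walks such a deferred projectlist can miss transformations that have not materialized yet, which is consistent with the symptom described above.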






[jira] [Commented] (SPARK-34141) ExtractGenerator analyzer should handle lazy projectlists

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266682#comment-17266682
 ] 

Apache Spark commented on SPARK-34141:
--

User 'tanelk' has created a pull request for this issue:
https://github.com/apache/spark/pull/31213

> ExtractGenerator analyzer should handle lazy projectlists
> -
>
> Key: SPARK-34141
> URL: https://issues.apache.org/jira/browse/SPARK-34141
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Tanel Kiis
>Priority: Major
>
> With the DataFrame API it is possible for a LogicalPlan's output field to be 
> a lazy sequence. When a column of such a DataFrame is exploded using the 
> withColumn method, the ExtractGenerator rule does not extract the generator.






[jira] [Assigned] (SPARK-34141) ExtractGenerator analyzer should handle lazy projectlists

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34141:


Assignee: Apache Spark

> ExtractGenerator analyzer should handle lazy projectlists
> -
>
> Key: SPARK-34141
> URL: https://issues.apache.org/jira/browse/SPARK-34141
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Tanel Kiis
>Assignee: Apache Spark
>Priority: Major
>
> With the DataFrame API it is possible for a LogicalPlan's output field to be 
> a lazy sequence. When a column of such a DataFrame is exploded using the 
> withColumn method, the ExtractGenerator rule does not extract the generator.






[jira] [Assigned] (SPARK-34141) ExtractGenerator analyzer should handle lazy projectlists

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34141:


Assignee: (was: Apache Spark)

> ExtractGenerator analyzer should handle lazy projectlists
> -
>
> Key: SPARK-34141
> URL: https://issues.apache.org/jira/browse/SPARK-34141
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Tanel Kiis
>Priority: Major
>
> With the DataFrame API it is possible for a LogicalPlan's output field to be 
> a lazy sequence. When a column of such a DataFrame is exploded using the 
> withColumn method, the ExtractGenerator rule does not extract the generator.






[jira] [Created] (SPARK-34141) ExtractGenerator analyzer should handle lazy projectlists

2021-01-16 Thread Tanel Kiis (Jira)
Tanel Kiis created SPARK-34141:
--

 Summary: ExtractGenerator analyzer should handle lazy projectlists
 Key: SPARK-34141
 URL: https://issues.apache.org/jira/browse/SPARK-34141
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.2.0
Reporter: Tanel Kiis


With the DataFrame API it is possible for a LogicalPlan's output field to be a 
lazy sequence. When a column of such a DataFrame is exploded using the 
withColumn method, the ExtractGenerator rule does not extract the generator.






[jira] [Commented] (SPARK-34140) Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to org/apache/spark/sql/errors

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266672#comment-17266672
 ] 

Apache Spark commented on SPARK-34140:
--

User 'imback82' has created a pull request for this issue:
https://github.com/apache/spark/pull/31212

> Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to 
> org/apache/spark/sql/errors
> ---
>
> Key: SPARK-34140
> URL: https://issues.apache.org/jira/browse/SPARK-34140
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Priority: Minor
>
> QueryCompilationErrors.scala and QueryExecutionErrors.scala use the 
> org.apache.spark.sql.errors package, but these files reside in the 
> org/apache/spark/sql directory.






[jira] [Commented] (SPARK-34140) Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to org/apache/spark/sql/errors

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266658#comment-17266658
 ] 

Apache Spark commented on SPARK-34140:
--

User 'imback82' has created a pull request for this issue:
https://github.com/apache/spark/pull/31211

> Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to 
> org/apache/spark/sql/errors
> ---
>
> Key: SPARK-34140
> URL: https://issues.apache.org/jira/browse/SPARK-34140
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Priority: Minor
>
> QueryCompilationErrors.scala and QueryExecutionErrors.scala use the 
> org.apache.spark.sql.errors package, but these files reside in the 
> org/apache/spark/sql directory.






[jira] [Assigned] (SPARK-34140) Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to org/apache/spark/sql/errors

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34140:


Assignee: Apache Spark

> Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to 
> org/apache/spark/sql/errors
> ---
>
> Key: SPARK-34140
> URL: https://issues.apache.org/jira/browse/SPARK-34140
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Assignee: Apache Spark
>Priority: Minor
>
> QueryCompilationErrors.scala and QueryExecutionErrors.scala use the 
> org.apache.spark.sql.errors package, but these files reside in the 
> org/apache/spark/sql directory.






[jira] [Assigned] (SPARK-34140) Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to org/apache/spark/sql/errors

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34140:


Assignee: (was: Apache Spark)

> Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to 
> org/apache/spark/sql/errors
> ---
>
> Key: SPARK-34140
> URL: https://issues.apache.org/jira/browse/SPARK-34140
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Priority: Minor
>
> QueryCompilationErrors.scala and QueryExecutionErrors.scala use the 
> org.apache.spark.sql.errors package, but these files reside in the 
> org/apache/spark/sql directory.






[jira] [Updated] (SPARK-34140) Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to org/apache/spark/sql/errors

2021-01-16 Thread Terry Kim (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Kim updated SPARK-34140:
--
Summary: Move QueryCompilationErrors.scala and QueryExecutionErrors.scala 
to org/apache/spark/sql/errors  (was: Move QueryCompilationErrors.scala and 
QueryExecutionErrors.scala to Create org/apache/spark/sql/errors)

> Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to 
> org/apache/spark/sql/errors
> ---
>
> Key: SPARK-34140
> URL: https://issues.apache.org/jira/browse/SPARK-34140
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Priority: Minor
>
> QueryCompilationErrors.scala and QueryExecutionErrors.scala use the 
> org.apache.spark.sql.errors package, but these files reside in the 
> org/apache/spark/sql directory.






[jira] [Created] (SPARK-34140) Move QueryCompilationErrors.scala and QueryExecutionErrors.scala to Create org/apache/spark/sql/errors

2021-01-16 Thread Terry Kim (Jira)
Terry Kim created SPARK-34140:
-

 Summary: Move QueryCompilationErrors.scala and 
QueryExecutionErrors.scala to Create org/apache/spark/sql/errors
 Key: SPARK-34140
 URL: https://issues.apache.org/jira/browse/SPARK-34140
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.2.0
Reporter: Terry Kim


QueryCompilationErrors.scala and QueryExecutionErrors.scala use the 
org.apache.spark.sql.errors package, but these files reside in the 
org/apache/spark/sql directory.






[jira] [Commented] (SPARK-34139) UnresolvedRelation should retain SQL text position

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266650#comment-17266650
 ] 

Apache Spark commented on SPARK-34139:
--

User 'imback82' has created a pull request for this issue:
https://github.com/apache/spark/pull/31209

> UnresolvedRelation should retain SQL text position
> --
>
> Key: SPARK-34139
> URL: https://issues.apache.org/jira/browse/SPARK-34139
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Priority: Major
>
> UnresolvedRelation should retain SQL text position. The following commands 
> will be handled:
> {code:java}
> CACHE TABLE unknown
> UNCACHE TABLE unknown
> DELETE FROM unknown
> UPDATE unknown SET name='abc'
> MERGE INTO unknown1 AS target USING unknown2 AS source ON target.col = 
> source.col WHEN MATCHED THEN DELETE
> INSERT INTO TABLE unknown SELECT 1
> INSERT OVERWRITE TABLE unknown VALUES (1, 'a')
> {code}






[jira] [Assigned] (SPARK-34139) UnresolvedRelation should retain SQL text position

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34139:


Assignee: (was: Apache Spark)

> UnresolvedRelation should retain SQL text position
> --
>
> Key: SPARK-34139
> URL: https://issues.apache.org/jira/browse/SPARK-34139
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Priority: Major
>
> UnresolvedRelation should retain SQL text position. The following commands 
> will be handled:
> {code:java}
> CACHE TABLE unknown
> UNCACHE TABLE unknown
> DELETE FROM unknown
> UPDATE unknown SET name='abc'
> MERGE INTO unknown1 AS target USING unknown2 AS source ON target.col = 
> source.col WHEN MATCHED THEN DELETE
> INSERT INTO TABLE unknown SELECT 1
> INSERT OVERWRITE TABLE unknown VALUES (1, 'a')
> {code}






[jira] [Assigned] (SPARK-34139) UnresolvedRelation should retain SQL text position

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34139:


Assignee: Apache Spark

> UnresolvedRelation should retain SQL text position
> --
>
> Key: SPARK-34139
> URL: https://issues.apache.org/jira/browse/SPARK-34139
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Assignee: Apache Spark
>Priority: Major
>
> UnresolvedRelation should retain SQL text position. The following commands 
> will be handled:
> {code:java}
> CACHE TABLE unknown
> UNCACHE TABLE unknown
> DELETE FROM unknown
> UPDATE unknown SET name='abc'
> MERGE INTO unknown1 AS target USING unknown2 AS source ON target.col = 
> source.col WHEN MATCHED THEN DELETE
> INSERT INTO TABLE unknown SELECT 1
> INSERT OVERWRITE TABLE unknown VALUES (1, 'a')
> {code}






[jira] [Commented] (SPARK-34139) UnresolvedRelation should retain SQL text position

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266649#comment-17266649
 ] 

Apache Spark commented on SPARK-34139:
--

User 'imback82' has created a pull request for this issue:
https://github.com/apache/spark/pull/31209

> UnresolvedRelation should retain SQL text position
> --
>
> Key: SPARK-34139
> URL: https://issues.apache.org/jira/browse/SPARK-34139
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Terry Kim
>Priority: Major
>
> UnresolvedRelation should retain SQL text position. The following commands 
> will be handled:
> {code:java}
> CACHE TABLE unknown
> UNCACHE TABLE unknown
> DELETE FROM unknown
> UPDATE unknown SET name='abc'
> MERGE INTO unknown1 AS target USING unknown2 AS source ON target.col = 
> source.col WHEN MATCHED THEN DELETE
> INSERT INTO TABLE unknown SELECT 1
> INSERT OVERWRITE TABLE unknown VALUES (1, 'a')
> {code}






[jira] [Created] (SPARK-34139) UnresolvedRelation should retain SQL text position

2021-01-16 Thread Terry Kim (Jira)
Terry Kim created SPARK-34139:
-

 Summary: UnresolvedRelation should retain SQL text position
 Key: SPARK-34139
 URL: https://issues.apache.org/jira/browse/SPARK-34139
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.2.0
Reporter: Terry Kim


UnresolvedRelation should retain SQL text position. The following commands will 
be handled:
{code:java}
CACHE TABLE unknown
UNCACHE TABLE unknown
DELETE FROM unknown
UPDATE unknown SET name='abc'
MERGE INTO unknown1 AS target USING unknown2 AS source ON target.col = 
source.col WHEN MATCHED THEN DELETE
INSERT INTO TABLE unknown SELECT 1
INSERT OVERWRITE TABLE unknown VALUES (1, 'a')
{code}






[jira] [Commented] (SPARK-34138) Keep dependants cached while refreshing v1 tables

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266611#comment-17266611
 ] 

Apache Spark commented on SPARK-34138:
--

User 'MaxGekk' has created a pull request for this issue:
https://github.com/apache/spark/pull/31206

> Keep dependants cached while refreshing v1 tables
> -
>
> Key: SPARK-34138
> URL: https://issues.apache.org/jira/browse/SPARK-34138
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Maxim Gekk
>Priority: Major
>
> Keeping dependants cached while refreshing v1 tables should improve the user 
> experience with table/view caching. For example, imagine a user has cached a 
> v1 table and a view based on that table, and then passes the table to an 
> external library that drops/renames/adds partitions in the v1 table. 
> Unfortunately, the view then becomes uncached even though the user never 
> uncached it explicitly.






[jira] [Assigned] (SPARK-34138) Keep dependants cached while refreshing v1 tables

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34138:


Assignee: (was: Apache Spark)

> Keep dependants cached while refreshing v1 tables
> -
>
> Key: SPARK-34138
> URL: https://issues.apache.org/jira/browse/SPARK-34138
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Maxim Gekk
>Priority: Major
>
> Keeping dependants cached while refreshing v1 tables should improve the user 
> experience with table/view caching. For example, imagine a user has cached a 
> v1 table and a view based on that table, and then passes the table to an 
> external library that drops/renames/adds partitions in the v1 table. 
> Unfortunately, the view then becomes uncached even though the user never 
> uncached it explicitly.






[jira] [Created] (SPARK-34138) Keep dependants cached while refreshing v1 tables

2021-01-16 Thread Maxim Gekk (Jira)
Maxim Gekk created SPARK-34138:
--

 Summary: Keep dependants cached while refreshing v1 tables
 Key: SPARK-34138
 URL: https://issues.apache.org/jira/browse/SPARK-34138
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.2.0
Reporter: Maxim Gekk


Keeping dependants cached while refreshing v1 tables should improve the user 
experience with table/view caching. For example, imagine a user has cached a v1 
table and a view based on that table, and then passes the table to an external 
library that drops/renames/adds partitions in the v1 table. Unfortunately, the 
view then becomes uncached even though the user never uncached it explicitly.






[jira] [Commented] (SPARK-34137) The tree string does not contain statistics for nested scalar sub queries

2021-01-16 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266610#comment-17266610
 ] 

Yuming Wang commented on SPARK-34137:
-

cc [~maxgekk]

> The tree string does not contain statistics for nested scalar sub queries
> -
>
> Key: SPARK-34137
> URL: https://issues.apache.org/jira/browse/SPARK-34137
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:scala}
> spark.sql("create table t1 using parquet as select id as a, id as b from 
> range(1000)")
> spark.sql("create table t2 using parquet as select id as c, id as d from 
> range(2000)")
> spark.sql("ANALYZE TABLE t1 COMPUTE STATISTICS FOR ALL COLUMNS")
> spark.sql("ANALYZE TABLE t2 COMPUTE STATISTICS FOR ALL COLUMNS")
> spark.sql("set spark.sql.cbo.enabled=true")
> spark.sql(
>   """
> |WITH max_store_sales AS
> |  (SELECT max(csales) tpcds_cmax
> |  FROM (SELECT
> |sum(b) csales
> |  FROM t1 WHERE a < 100 ) x),
> |best_ss_customer AS
> |  (SELECT
> |c
> |  FROM t2
> |  WHERE d > (SELECT * FROM max_store_sales))
> |
> |SELECT c FROM best_ss_customer
> |""".stripMargin).explain("cost")
> {code}
> Output:
> {noformat}
> == Optimized Logical Plan ==
> Project [c#4263L], Statistics(sizeInBytes=31.3 KiB, rowCount=2.00E+3)
> +- Filter (isnotnull(d#4264L) AND (d#4264L > scalar-subquery#4262 [])), 
> Statistics(sizeInBytes=46.9 KiB, rowCount=2.00E+3)
>:  +- Aggregate [max(csales#4260L) AS tpcds_cmax#4261L]
>: +- Aggregate [sum(b#4266L) AS csales#4260L]
>:+- Project [b#4266L]
>:   +- Filter ((a#4265L < 100) AND isnotnull(a#4265L))
>:  +- Relation default.t1[a#4265L,b#4266L] parquet, 
> Statistics(sizeInBytes=23.4 KiB, rowCount=1.00E+3)
>+- Relation default.t2[c#4263L,d#4264L] parquet, 
> Statistics(sizeInBytes=46.9 KiB, rowCount=2.00E+3)
> {noformat}
> Another case is TPC-DS q23a.






[jira] [Created] (SPARK-34137) The tree string does not contain statistics for nested scalar subqueries

2021-01-16 Thread Yuming Wang (Jira)
Yuming Wang created SPARK-34137:
---

 Summary: The tree string does not contain statistics for nested 
scalar subqueries
 Key: SPARK-34137
 URL: https://issues.apache.org/jira/browse/SPARK-34137
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.2.0
Reporter: Yuming Wang


How to reproduce:
{code:scala}
spark.sql("create table t1 using parquet as select id as a, id as b from 
range(1000)")
spark.sql("create table t2 using parquet as select id as c, id as d from 
range(2000)")

spark.sql("ANALYZE TABLE t1 COMPUTE STATISTICS FOR ALL COLUMNS")
spark.sql("ANALYZE TABLE t2 COMPUTE STATISTICS FOR ALL COLUMNS")
spark.sql("set spark.sql.cbo.enabled=true")

spark.sql(
  """
|WITH max_store_sales AS
|  (SELECT max(csales) tpcds_cmax
|  FROM (SELECT
|sum(b) csales
|  FROM t1 WHERE a < 100 ) x),
|best_ss_customer AS
|  (SELECT
|c
|  FROM t2
|  WHERE d > (SELECT * FROM max_store_sales))
|
|SELECT c FROM best_ss_customer
|""".stripMargin).explain("cost")
{code}

Output:
{noformat}
== Optimized Logical Plan ==
Project [c#4263L], Statistics(sizeInBytes=31.3 KiB, rowCount=2.00E+3)
+- Filter (isnotnull(d#4264L) AND (d#4264L > scalar-subquery#4262 [])), 
Statistics(sizeInBytes=46.9 KiB, rowCount=2.00E+3)
   :  +- Aggregate [max(csales#4260L) AS tpcds_cmax#4261L]
   : +- Aggregate [sum(b#4266L) AS csales#4260L]
   :+- Project [b#4266L]
   :   +- Filter ((a#4265L < 100) AND isnotnull(a#4265L))
   :  +- Relation default.t1[a#4265L,b#4266L] parquet, 
Statistics(sizeInBytes=23.4 KiB, rowCount=1.00E+3)
   +- Relation default.t2[c#4263L,d#4264L] parquet, Statistics(sizeInBytes=46.9 
KiB, rowCount=2.00E+3)
{noformat}

Another case is TPC-DS q23a.






[jira] [Assigned] (SPARK-34136) Support complex types in pyspark.sql.functions.lit

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34136:


Assignee: (was: Apache Spark)

> Support complex types in pyspark.sql.functions.lit
> --
>
> Key: SPARK-34136
> URL: https://issues.apache.org/jira/browse/SPARK-34136
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, SQL
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Maciej Szymkiewicz
>Priority: Major
>
> At the moment, Python users have to use dedicated functions to create complex 
> literal columns. For example, to create an array:
> {code:python}
> from pyspark.sql.functions import array, lit
> xs = [1, 2, 3]
> array(*[lit(x) for x in xs])
> {code}
> or map
> {code:python}
> from pyspark.sql.functions import create_map, lit, map_from_arrays
> from itertools import chain
> kvs = {"a": 1, "b": 2}
> create_map(*chain.from_iterable(
> (lit(k), lit(v)) for k, v in kvs.items()
> ))
> # or
> map_from_arrays(
> array(*[lit(k) for k in kvs.keys()]),
> array(*[lit(v) for v in kvs.values()])
> )
> {code}
> This is very verbose for such a simple task.
> In Scala we have `typedLit`, which addresses such cases:
> {code:scala}
> scala> typedLit(Map("a" -> 1, "b" -> 2))
> res0: org.apache.spark.sql.Column = keys: [a,b], values: [1,2]
> scala> typedLit(Array(1, 2, 3))
> res1: org.apache.spark.sql.Column = [1,2,3]
> {code}
> but its API is not Python-friendly.
> It would be nice if {{lit}} could cover at least basic complex types.






[jira] [Commented] (SPARK-34136) Support complex types in pyspark.sql.functions.lit

2021-01-16 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266609#comment-17266609
 ] 

Apache Spark commented on SPARK-34136:
--

User 'zero323' has created a pull request for this issue:
https://github.com/apache/spark/pull/31207

> Support complex types in pyspark.sql.functions.lit
> --
>
> Key: SPARK-34136
> URL: https://issues.apache.org/jira/browse/SPARK-34136
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, SQL
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Maciej Szymkiewicz
>Priority: Major
>
> At the moment, Python users have to use dedicated functions to create complex 
> literal columns. For example, to create an array:
> {code:python}
> from pyspark.sql.functions import array, lit
> xs = [1, 2, 3]
> array(*[lit(x) for x in xs])
> {code}
> or map
> {code:python}
> from pyspark.sql.functions import create_map, lit, map_from_arrays
> from itertools import chain
> kvs = {"a": 1, "b": 2}
> create_map(*chain.from_iterable(
> (lit(k), lit(v)) for k, v in kvs.items()
> ))
> # or
> map_from_arrays(
> array(*[lit(k) for k in kvs.keys()]),
> array(*[lit(v) for v in kvs.values()])
> )
> {code}
> This is very verbose for such a simple task.
> In Scala we have `typedLit`, which addresses such cases:
> {code:scala}
> scala> typedLit(Map("a" -> 1, "b" -> 2))
> res0: org.apache.spark.sql.Column = keys: [a,b], values: [1,2]
> scala> typedLit(Array(1, 2, 3))
> res1: org.apache.spark.sql.Column = [1,2,3]
> {code}
> but its API is not Python-friendly.
> It would be nice if {{lit}} could cover at least basic complex types.






[jira] [Assigned] (SPARK-34136) Support complex types in pyspark.sql.functions.lit

2021-01-16 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34136:


Assignee: Apache Spark

> Support complex types in pyspark.sql.functions.lit
> --
>
> Key: SPARK-34136
> URL: https://issues.apache.org/jira/browse/SPARK-34136
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, SQL
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Maciej Szymkiewicz
>Assignee: Apache Spark
>Priority: Major
>
> At the moment, Python users have to use dedicated functions to create complex 
> literal columns. For example, to create an array:
> {code:python}
> from pyspark.sql.functions import array, lit
> xs = [1, 2, 3]
> array(*[lit(x) for x in xs])
> {code}
> or map
> {code:python}
> from pyspark.sql.functions import create_map, lit, map_from_arrays
> from itertools import chain
> kvs = {"a": 1, "b": 2}
> create_map(*chain.from_iterable(
> (lit(k), lit(v)) for k, v in kvs.items()
> ))
> # or
> map_from_arrays(
> array(*[lit(k) for k in kvs.keys()]),
> array(*[lit(v) for v in kvs.values()])
> )
> {code}
> This is very verbose for such a simple task.
> In Scala we have `typedLit`, which addresses such cases:
> {code:scala}
> scala> typedLit(Map("a" -> 1, "b" -> 2))
> res0: org.apache.spark.sql.Column = keys: [a,b], values: [1,2]
> scala> typedLit(Array(1, 2, 3))
> res1: org.apache.spark.sql.Column = [1,2,3]
> {code}
> but its API is not Python-friendly.
> It would be nice if {{lit}} could cover at least basic complex types.






[jira] [Created] (SPARK-34136) Support complex types in pyspark.sql.functions.lit

2021-01-16 Thread Maciej Szymkiewicz (Jira)
Maciej Szymkiewicz created SPARK-34136:
--

 Summary: Support complex types in pyspark.sql.functions.lit
 Key: SPARK-34136
 URL: https://issues.apache.org/jira/browse/SPARK-34136
 Project: Spark
  Issue Type: Improvement
  Components: PySpark, SQL
Affects Versions: 3.2.0, 3.1.1
Reporter: Maciej Szymkiewicz


At the moment, Python users have to use dedicated functions to create complex 
literal columns. For example, to create an array:

{code:python}
from pyspark.sql.functions import array, lit

xs = [1, 2, 3]
array(*[lit(x) for x in xs])
{code}

or map

{code:python}
from pyspark.sql.functions import create_map, lit, map_from_arrays
from itertools import chain

kvs = {"a": 1, "b": 2}

create_map(*chain.from_iterable(
(lit(k), lit(v)) for k, v in kvs.items()
))

# or

map_from_arrays(
array(*[lit(k) for k in kvs.keys()]),
array(*[lit(v) for v in kvs.values()])
)
{code}

This is very verbose for such a simple task.

In Scala we have `typedLit`, which addresses such cases:

{code:scala}
scala> typedLit(Map("a" -> 1, "b" -> 2))
res0: org.apache.spark.sql.Column = keys: [a,b], values: [1,2]

scala> typedLit(Array(1, 2, 3))
res1: org.apache.spark.sql.Column = [1,2,3]

{code}


but its API is not Python-friendly.

It would be nice if {{lit}} could cover at least basic complex types.
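One way such support could look is a recursive dispatch on the Python value's type. The sketch below is a hypothetical illustration, not Spark's API: the `lit`, `array`, and `create_map` stand-ins build plain tuples instead of real Columns, so the dispatch logic can be shown without a running SparkSession.

```python
# Stand-ins for pyspark.sql.functions.lit / array / create_map, modelled
# as plain tuples so the recursion is visible without a SparkSession.
def lit(x):
    return ("lit", x)

def array(*cols):
    return ("array", list(cols))

def create_map(*cols):
    return ("map", list(cols))

def typed_lit(value):
    """Hypothetical helper: recursively build a literal from nested values."""
    if isinstance(value, dict):
        # Maps flatten to an alternating key/value argument list.
        entries = []
        for k, v in value.items():
            entries.append(typed_lit(k))
            entries.append(typed_lit(v))
        return create_map(*entries)
    if isinstance(value, (list, tuple)):
        return array(*[typed_lit(v) for v in value])
    return lit(value)  # scalars fall through to plain lit

print(typed_lit([1, 2, 3]))
print(typed_lit({"a": 1, "b": 2}))
```

A real implementation would return actual `Column` objects (and handle nesting like `{"a": [1, 2]}` for free, since the recursion covers mixed structures).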






[jira] [Closed] (SPARK-34135) hello world

2021-01-16 Thread Kevin Pis (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Pis closed SPARK-34135.
-

It's just a test.

> hello world
> ---
>
> Key: SPARK-34135
> URL: https://issues.apache.org/jira/browse/SPARK-34135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.1
> Environment: test
>Reporter: Kevin Pis
>Priority: Major
> Fix For: 2.4.8
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> test






[jira] [Resolved] (SPARK-34135) hello world

2021-01-16 Thread Kevin Pis (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Pis resolved SPARK-34135.
---
Resolution: Not A Problem

It's just a test.

> hello world
> ---
>
> Key: SPARK-34135
> URL: https://issues.apache.org/jira/browse/SPARK-34135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.1
> Environment: test
>Reporter: Kevin Pis
>Priority: Major
> Fix For: 2.4.8
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> test






[jira] [Updated] (SPARK-34135) hello world

2021-01-16 Thread Kevin Pis (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Pis updated SPARK-34135:
--
Environment: test  (was: fsdfdsf)

> hello world
> ---
>
> Key: SPARK-34135
> URL: https://issues.apache.org/jira/browse/SPARK-34135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.1
> Environment: test
>Reporter: Kevin Pis
>Priority: Major
> Fix For: 2.4.8
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> test






[jira] [Updated] (SPARK-34135) hello world

2021-01-16 Thread Kevin Pis (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Pis updated SPARK-34135:
--
Docs Text: test  (was: fdsfsdf)

> hello world
> ---
>
> Key: SPARK-34135
> URL: https://issues.apache.org/jira/browse/SPARK-34135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.1
> Environment: fsdfdsf
>Reporter: Kevin Pis
>Priority: Major
> Fix For: 2.4.8
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> fsfs






[jira] [Updated] (SPARK-34135) hello world

2021-01-16 Thread Kevin Pis (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Pis updated SPARK-34135:
--
Description: test  (was: fsfs)

> hello world
> ---
>
> Key: SPARK-34135
> URL: https://issues.apache.org/jira/browse/SPARK-34135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.1
> Environment: fsdfdsf
>Reporter: Kevin Pis
>Priority: Major
> Fix For: 2.4.8
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> test






[jira] [Commented] (SPARK-34135) hello world

2021-01-16 Thread Kevin Pis (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17266537#comment-17266537
 ] 

Kevin Pis commented on SPARK-34135:
---

I just wanted to test creating an issue. How can I delete it?

> hello world
> ---
>
> Key: SPARK-34135
> URL: https://issues.apache.org/jira/browse/SPARK-34135
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.1
> Environment: fsdfdsf
>Reporter: Kevin Pis
>Priority: Major
> Fix For: 2.4.8
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> fsfs






[jira] [Created] (SPARK-34135) hello world

2021-01-16 Thread Kevin Pis (Jira)
Kevin Pis created SPARK-34135:
-

 Summary: hello world
 Key: SPARK-34135
 URL: https://issues.apache.org/jira/browse/SPARK-34135
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.0.1
 Environment: fsdfdsf
Reporter: Kevin Pis
 Fix For: 2.4.8


fsfs


