[jira] [Updated] (SPARK-35299) Dataframe overwrite on S3 does not delete old files with S3 object-put to table path

2021-05-03 Thread Yusheng Ding (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yusheng Ding updated SPARK-35299:
-
Description: 
To reproduce:

test_table path: s3a://test_bucket/test_table/

df = spark_session.sql("SELECT * FROM test_table")
df.count()  # returns 1000 rows

### S3 operation ###

import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="test_bucket", Body="", Key="test_table/"
)

### S3 operation ###
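
As a sanity check, listing the prefix right after the put (a minimal sketch, assuming the same bucket/prefix and boto3 credentials as above) should show the zero-byte "test_table/" object next to the table's existing data files:

import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="test_bucket", Prefix="test_table/")
for obj in resp.get("Contents", []):
    # Expect the empty "test_table/" marker object created by put_object,
    # alongside the parquet files that back test_table.
    print(obj["Key"], obj["Size"])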

df.write.insertInto("test_table", overwrite=True)
# The same happens with
# df.write.save(mode="overwrite", format="parquet", path="s3a://test_bucket/test_table")

df = spark_session.sql("SELECT * FROM test_table")
df.count()  # returns 2000 rows

 

Overwrite is not functioning correctly: the old files are not deleted on S3, so the second count includes both the old and the newly written rows.
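
To confirm at the object level, not just via row counts, that the overwrite leaves the previous files in place, a sketch along these lines (again assuming the same bucket, prefix, and credentials) can be wrapped around the insertInto call:

import boto3

s3 = boto3.client("s3")

def list_keys(bucket, prefix):
    # Collect every object key under the prefix, following pagination.
    keys = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

before = set(list_keys("test_bucket", "test_table/"))
df.write.insertInto("test_table", overwrite=True)
after = set(list_keys("test_bucket", "test_table/"))

# With a working overwrite the old data files from "before" would be gone;
# here they remain, which matches the doubled row count.
print(sorted(before & after))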

 

 

 

> Dataframe overwrite on S3 does not delete old files with S3 object-put to 
> table path
> 
>
> Key: SPARK-35299
> URL: https://issues.apache.org/jira/browse/SPARK-35299
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.2.0
> Reporter: Yusheng Ding
> Priority: Major
> Labels: aws-s3, dataframe, hive, spark
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org


