[ https://issues.apache.org/jira/browse/SPARK-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787511#comment-16787511 ]
Jungtaek Lim commented on SPARK-24295:
--------------------------------------

[~alfredo-gimenez-bv] I meant removing "output files deleted by another process", not "old metadata files". See here:

[https://github.com/apache/spark/blob/d8f77e11a42bf664a02124a8b6830797979550b4/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CompactibleFileStreamLog.scala#L109-L112]

and here:

[https://github.com/apache/spark/blob/d8f77e11a42bf664a02124a8b6830797979550b4/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLog.scala#L100-L108]

The issue is that while the code to drop obsolete output files exists, FileStreamSinkLog.DELETE_ACTION is never set anywhere.

> Purge Structured streaming FileStreamSinkLog metadata compact file data.
> ------------------------------------------------------------------------
>
>                 Key: SPARK-24295
>                 URL: https://issues.apache.org/jira/browse/SPARK-24295
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.3.0
>            Reporter: Iqbal Singh
>            Priority: Major
>        Attachments: spark_metadatalog_compaction_perfbug_repro.tar.gz
>
> FileStreamSinkLog metadata logs are concatenated into a single compact file
> after each defined compact interval.
> For long-running jobs, the compact file can grow to tens of GBs, causing
> slowness when reading data from the FileStreamSinkLog dir, since Spark
> defaults to reading from the "__spark__metadata" dir.
> We need functionality to purge the compact file.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
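The never-taken delete path described in the comment can be sketched as below. This is a minimal, self-contained illustration: the `SinkFileStatus` case class, the `SinkLogSketch` object, and the action constants are simplified stand-ins for Spark's internal types, not the actual Spark API.

```scala
// Simplified sketch of FileStreamSinkLog-style compaction (hypothetical
// stand-ins, not Spark's real classes).
case class SinkFileStatus(path: String, action: String)

object SinkLogSketch {
  val AddAction = "add"
  val DeleteAction = "delete" // the constant exists, but nothing ever writes it

  // Mirrors the linked compactLogs logic: any path that has a DELETE_ACTION
  // record is dropped from the compacted batch.
  def compactLogs(logs: Seq[SinkFileStatus]): Seq[SinkFileStatus] = {
    val deletedFiles = logs.filter(_.action == DeleteAction).map(_.path).toSet
    if (deletedFiles.isEmpty) logs // today: always this branch is taken
    else logs.filterNot(f => deletedFiles.contains(f.path))
  }
}
```

Because no code path ever emits a delete entry, `deletedFiles` is always empty, every add entry survives each compaction, and the compact file grows without bound, which matches the behavior reported in this issue.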