[ https://issues.apache.org/jira/browse/SPARK-28025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919471#comment-16919471 ]
Stavros Kontopoulos edited comment on SPARK-28025 at 8/30/19 11:54 AM:
-----------------------------------------------------------------------

[~dongjoon] [~zsxwing] this needs to be re-opened. When using the workaround we recently hit this issue: [https://github.com/broadinstitute/gatk/issues/1389]. It can be fixed easily with a derived class, as in this PR: [https://github.com/broadinstitute/gatk/pull/1421/files], but that is a bit inconvenient. I also believe this should be fixed in Spark itself (fewer surprises); otherwise we need to document it, as [~kabhwan] said above.

> HDFSBackedStateStoreProvider should not leak .crc files
> --------------------------------------------------------
>
>                 Key: SPARK-28025
>                 URL: https://issues.apache.org/jira/browse/SPARK-28025
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.4.3
>         Environment: Spark 2.4.3
> Kubernetes 1.11(?) (OpenShift)
> StateStore storage on a mounted PVC, viewed as a local filesystem by the
> `FileContextBasedCheckpointFileManager`:
> {noformat}
> scala> glusterfm.isLocal
> res17: Boolean = true{noformat}
>            Reporter: Gerard Maas
>            Assignee: Jungtaek Lim
>            Priority: Major
>             Fix For: 2.4.4, 3.0.0
>
>
> The HDFSBackedStateStoreProvider, when using the default CheckpointFileManager,
> leaves '.crc' files behind. A .crc file is created for each
> `atomicFile` operation of the CheckpointFileManager.
>
> Over time, the number of files becomes very large.
> It makes the state store file system constantly grow in size and, in our case,
> deteriorates file system performance.
>
> Here's a sample of one of our Spark storage volumes after 2 days of execution
> (4 stateful streaming jobs, each on a different sub-dir):
> {noformat}
> # Total files in PVC (used for checkpoints and state store)
> $ find . | wc -l
> 431796
> # .crc files
> $ find . -name "*.crc" | wc -l
> 418053{noformat}
> With each .crc file taking one storage block, the used storage runs into
> GBs of data.
>
> These jobs are running on Kubernetes. Our shared storage provider, GlusterFS,
> shows serious performance deterioration with this large number of files:
> {noformat}
> DEBUG HDFSBackedStateStoreProvider: fetchFiles() took 29164ms{noformat}

--
This message was sent by Atlassian Jira (v8.3.2#803003)
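Until a fix lands in Spark, a periodic cleanup pass over the checkpoint root can reclaim the space taken by leaked checksum files. The sketch below is a minimal illustration, not the fix from the PRs above: `CKPT_DIR` is a hypothetical path that should point at the actual checkpoint/state-store mount, and it assumes the leaked `.crc` files are never read back by the state store (the fictitious `state/0/0` layout and file names below are created only to make the example self-contained).

```shell
#!/bin/sh
# Sketch: count and remove leaked Hadoop .crc checksum files under a
# checkpoint root. CKPT_DIR is a placeholder; point it at your PVC mount.
CKPT_DIR="${CKPT_DIR:-/tmp/spark-checkpoints-demo}"

# Simulate a state store layout with one delta file and its leaked
# checksum file (hypothetical names, for illustration only).
mkdir -p "$CKPT_DIR/state/0/0"
touch "$CKPT_DIR/state/0/0/1.delta" "$CKPT_DIR/state/0/0/.1.delta.crc"

echo "crc files before: $(find "$CKPT_DIR" -name '*.crc' | wc -l)"
# Delete only regular files matching *.crc; delta/snapshot files are kept.
find "$CKPT_DIR" -name '*.crc' -type f -delete
echo "crc files after: $(find "$CKPT_DIR" -name '*.crc' | wc -l)"
```

Running something like this from a cron job or Kubernetes CronJob would keep the file count bounded, at the cost of racing with in-flight writes, so running it only against directories of stopped or quiescent queries is the safer assumption.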