[ https://issues.apache.org/jira/browse/BEAM-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068187#comment-17068187 ]
Udi Meiri commented on BEAM-9620:
---------------------------------

Since this is an estimation, perhaps there should be limits on how many files it samples, or a maximum amount of time it can spend sampling overall.

> textio (and fileio in general) takes too long to estimate sizes of large globs
> ------------------------------------------------------------------------------
>
>                 Key: BEAM-9620
>                 URL: https://issues.apache.org/jira/browse/BEAM-9620
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Chamikara Madhusanka Jayalath
>            Priority: Major
>
> As a workaround, we could introduce a way to skip size estimation when
> reading large globs. For example, the Java SDK has the withHintMatchesManyFiles()
> option:
>
> [https://github.com/apache/beam/blob/850e8469de798d45ec535fe90cb2dc5dbda4974a/sdks/java/core/src/main/java/org/apache/beam/sdk/io/TextIO.java#L371]
>
> Additionally, it seems we repeat the size estimation when the same
> PCollection read from a file-based source is consumed by multiple PTransforms.
>
> See the following for more details:
> [https://stackoverflow.com/questions/60874942/avoid-recomputing-size-of-all-cloud-storage-files-in-gcsio-beam-python-sdk]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
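The capped-sampling idea in the comment above could be sketched roughly as follows. This is a minimal illustration, not Beam's actual estimator: the function name `estimate_total_size` and its parameters are hypothetical, and a real implementation would stat files in a glob rather than take a list of sizes.

```python
import random
import time

def estimate_total_size(sizes, max_samples=100, max_seconds=1.0):
    """Estimate the total size of a file glob by sampling at most
    `max_samples` files (or until `max_seconds` elapses), then
    extrapolating the sample mean to the full file count.

    `sizes` stands in for the matched files; a real estimator would
    issue a stat call per sampled file instead of reading a list.
    """
    if not sizes:
        return 0
    deadline = time.monotonic() + max_seconds
    sampled = []
    # Sample without replacement, stopping at whichever cap hits first.
    for size in random.sample(sizes, min(max_samples, len(sizes))):
        if time.monotonic() > deadline:
            break
        sampled.append(size)
    if not sampled:
        return 0
    # Extrapolate: mean sampled size times the total number of files.
    return int(sum(sampled) / len(sampled) * len(sizes))
```

Because the result is extrapolated from a bounded sample, the cost stays roughly constant even for globs matching millions of files, at the price of a less exact estimate.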