[ https://issues.apache.org/jira/browse/BEAM-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068204#comment-17068204 ]

Chamikara Madhusanka Jayalath commented on BEAM-9620:
-----------------------------------------------------

Though that might make the source not work as it's implemented today: we rely 
on estimate_size() to perform initial splitting at the workers, so it has to 
succeed for the source to function. If we add a time limit, we have to make 
sure that splitting/reading is not affected.
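To illustrate the caveat above, here is a minimal sketch of what a time-limited size estimate could look like. The helper name estimate_size_with_timeout, the timeout value, and the fallback behavior are all assumptions for illustration, not part of the Beam SDK; the key point is that a caller such as the splitting logic would need a well-defined fallback when the estimate does not finish in time.

```python
import concurrent.futures
import time


def estimate_size_with_timeout(estimate_fn, timeout_s, fallback=None):
    """Run estimate_fn with a wall-clock limit; return fallback on timeout.

    Hypothetical helper: estimate_fn stands in for a source's
    estimate_size() call, which may be slow for large globs.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(estimate_fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # The estimate did not finish in time; the caller must be able
        # to handle this fallback (e.g. skip size-based splitting).
        return fallback
    finally:
        # Don't block on the still-running estimate.
        pool.shutdown(wait=False)


# Fast estimate: result is returned as usual.
size = estimate_size_with_timeout(lambda: 1234, timeout_s=1.0)

# Slow estimate: the fallback is returned instead.
slow = estimate_size_with_timeout(
    lambda: time.sleep(0.5) or 9999, timeout_s=0.05, fallback=None)
```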

> textio (and fileio in general) takes too long to estimate sizes of large globs
> ------------------------------------------------------------------------------
>
>                 Key: BEAM-9620
>                 URL: https://issues.apache.org/jira/browse/BEAM-9620
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Chamikara Madhusanka Jayalath
>            Priority: Major
>
> As a workaround, we could introduce a way to skip size estimation when 
> reading large globs. For example, the Java SDK has a withHintMatchesManyFiles() 
> option.
>  
> [https://github.com/apache/beam/blob/850e8469de798d45ec535fe90cb2dc5dbda4974a/sdks/java/core/src/main/java/org/apache/beam/sdk/io/TextIO.java#L371]
>  
> Additionally, it seems we repeat the size estimation when the same 
> PCollection read from a file-based source is consumed by multiple PTransforms.
>  
> See the following for more details.
> [https://stackoverflow.com/questions/60874942/avoid-recomputing-size-of-all-cloud-storage-files-in-gcsio-beam-python-sdk]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
