waterxjw opened a new issue, #33039: URL: https://github.com/apache/beam/issues/33039
### What happened?

I want to export an entire Bigtable table via BigtableIO.Read, but some rows exceed the 256 MB limit, which causes the read to fail: the Bigtable server refuses to return any row larger than 256 MB. The [Bigtable documentation](https://cloud.google.com/bigtable/docs/reads#large-rows) suggests paginating the request with a [cells per row limit filter](https://cloud.google.com/bigtable/docs/using-filters#cells-per-row-limit) and a [cells per row offset filter](https://cloud.google.com/bigtable/docs/using-filters#cells-per-row-offset). However, I don't see how to apply that approach with BigtableIO.Read, given that I want to export all of the table's data: I don't know how to paginate by cell dynamically within a single pipeline (the closest I can get with a static filter is sketched at the end of this issue).

I would like to know whether BigtableIO.Read currently has the capability to cover this scenario. If it does not, is there an alternative that would let me export all the data cleanly?

### Issue Priority

Priority: 2 (default / most bugs should be filed as P2)

### Issue Components

- [ ] Component: Python SDK
- [X] Component: Java SDK
- [ ] Component: Go SDK
- [ ] Component: Typescript SDK
- [ ] Component: IO connector
- [ ] Component: Beam YAML
- [ ] Component: Beam examples
- [ ] Component: Beam playground
- [ ] Component: Beam katas
- [ ] Component: Website
- [ ] Component: Infrastructure
- [ ] Component: Spark Runner
- [ ] Component: Flink Runner
- [ ] Component: Samza Runner
- [ ] Component: Twister2 Runner
- [ ] Component: Hazelcast Jet Runner
- [ ] Component: Google Cloud Dataflow Runner
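
For reference, here is a minimal sketch of the static-filter approach I am aware of (the project, instance, and table IDs are placeholders). It caps the number of cells returned per row, but as far as I can tell it only truncates large rows; the remaining cells of a truncated row are never read:

```java
import com.google.bigtable.v2.Row;
import com.google.bigtable.v2.RowFilter;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

public class LargeRowExportSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Static filter: keep at most 100 cells per row so each row's response
    // stays small. Rows with more cells are truncated, not paginated.
    RowFilter cellsPerRowLimit =
        RowFilter.newBuilder().setCellsPerRowLimitFilter(100).build();

    PCollection<Row> rows =
        p.apply(
            BigtableIO.read()
                .withProjectId("my-project")    // placeholder
                .withInstanceId("my-instance")  // placeholder
                .withTableId("my-table")        // placeholder
                .withRowFilter(cellsPerRowLimit));

    // ...write `rows` to the export sink here...

    p.run().waitUntilFinish();
  }
}
```

Pairing this with `setCellsPerRowOffsetFilter` would fetch the next page of cells for a row, but the offset has to change between requests for the same row, and I don't see a way to express that with a single static value passed to `withRowFilter`.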
