Hi community! I've noticed that Beam's JdbcIO currently has no way to use partitioning to read a very large table with millions of rows in parallel (for example, when migrating legacy database data to BigQuery). I have some ideas, which are described in more detail here: https://docs.google.com/document/d/1wBzVhQEhTK23ALzTSZ_CVouEOXTm3w2-LjmO3ieUvFc/edit?usp=sharing

I would like to start working on the related task I've created: https://issues.apache.org/jira/browse/BEAM-12456

If anybody has any concerns or proposals, please feel free to leave comments in the Google Doc.
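To make the general idea a bit more concrete, here is a minimal sketch (not Beam's actual API; the proposed design lives in the doc above) of the basic approach: split a numeric key range into N partitions and issue one bounded query per partition, so that each range can be read by a separate worker. The connection URL, table name, and column name below are made up purely for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class PartitionedReadSketch {

  // Hypothetical connection settings and table/column names, for illustration only.
  private static final String JDBC_URL = "jdbc:postgresql://localhost:5432/legacy_db";
  private static final String TABLE = "orders";
  private static final String PARTITION_COLUMN = "id";

  /** A half-open range [lowerBound, upperBound) of the partition column. */
  static class Range {
    final long lowerBound;
    final long upperBound;
    Range(long lowerBound, long upperBound) {
      this.lowerBound = lowerBound;
      this.upperBound = upperBound;
    }
  }

  /** Splits [min, max] into roughly equal ranges, one per partition. */
  static List<Range> buildRanges(long min, long max, int numPartitions) {
    List<Range> ranges = new ArrayList<>();
    long stride = Math.max(1, (max - min + 1) / numPartitions);
    for (long start = min; start <= max; start += stride) {
      ranges.add(new Range(start, Math.min(start + stride, max + 1)));
    }
    return ranges;
  }

  /** Reads one partition; in a pipeline each range would be handled by a separate worker. */
  static void readRange(Range range) throws Exception {
    String query =
        "SELECT * FROM " + TABLE
            + " WHERE " + PARTITION_COLUMN + " >= ? AND " + PARTITION_COLUMN + " < ?";
    try (Connection conn = DriverManager.getConnection(JDBC_URL);
        PreparedStatement stmt = conn.prepareStatement(query)) {
      stmt.setLong(1, range.lowerBound);
      stmt.setLong(2, range.upperBound);
      try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
          // Convert each row into an output record here (e.g. a TableRow for BigQuery).
        }
      }
    }
  }

  public static void main(String[] args) throws Exception {
    // The min/max would normally be discovered with SELECT MIN(id), MAX(id) FROM the table.
    for (Range range : buildRanges(1, 10_000_000, 10)) {
      readRange(range); // In a real pipeline these reads run in parallel, not in a loop.
    }
  }
}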
Thank you, Daria
