[ https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Nick Dimiduk updated HBASE-9931:
--------------------------------
    Resolution: Fixed
  Release Note: Patch applied to trunk, 0.96, 0.94 branches. Thanks for the report, Dave, and the reviews, Andrew and Lars.
  Hadoop Flags: Reviewed
        Status: Resolved  (was: Patch Available)

> Optional setBatch for CopyTable to copy large rows in batches
> -------------------------------------------------------------
>
>                 Key: HBASE-9931
>                 URL: https://issues.apache.org/jira/browse/HBASE-9931
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>            Reporter: Dave Latham
>            Assignee: Nick Dimiduk
>             Fix For: 0.98.0, 0.96.1, 0.94.15
>
>         Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch
>
>
> We've had CopyTable jobs fail because a small number of rows are wide enough
> not to fit into memory. If we could specify the batch size for CopyTable
> scans, that should be able to break those large rows up into multiple
> iterations and save heap.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
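The batching the issue asks for is what HBase's `Scan.setBatch(int)` provides: rather than materializing all cells of a wide row in one `Result`, the scanner returns at most N cells per iteration, so only one chunk of the row is held on the heap at a time. A minimal sketch of that chunking logic (plain Python for illustration, not HBase's actual implementation; the names are hypothetical):

```python
def batch_row(cells, batch_size):
    """Yield a wide row's cells in chunks of at most batch_size,
    mimicking the effect of Scan.setBatch(batch_size): each chunk
    stands in for one partial Result the scanner would return."""
    for i in range(0, len(cells), batch_size):
        yield cells[i:i + batch_size]

# A row with 10 cells and a batch size of 3 is processed in 4
# iterations, so at most 3 cells are buffered at once instead of 10.
row = [f"cf:col{i}" for i in range(10)]
batches = list(batch_row(row, 3))
print(len(batches))   # 4
print(batches[0])     # ['cf:col0', 'cf:col1', 'cf:col2']
```

With the patch applied, CopyTable can pass such a batch size through to the `Scan` it configures for the MapReduce job; the exact command-line option is defined in the attached patches.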