[ https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16009513#comment-16009513 ]
Lars Hofhansl commented on HBASE-18027:
---------------------------------------

I also notice that readAllEntriesToReplicateOrNextFile calculates the size this way (in 1.3.x):
{code}
WAL.Entry entry = ...;
...
currentSize += entry.getEdit().heapSize();
currentSize += calculateTotalSizeOfStoreFiles(edit);
{code}
Perhaps that's the discrepancy...? (And the fact that we check after we have added the entry, as you point out.)

We can do this patch of course. But I do think it'd be simpler and easier to follow/change later if we fix it in the caller and don't introduce another loop inside the sending code.

> HBaseInterClusterReplicationEndpoint should respect RPC size limits when batching edits
> ----------------------------------------------------------------------------------------
>
>                 Key: HBASE-18027
>                 URL: https://issues.apache.org/jira/browse/HBASE-18027
>             Project: HBase
>          Issue Type: Bug
>          Components: Replication
>    Affects Versions: 2.0.0, 1.4.0, 1.3.1
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>             Fix For: 2.0.0, 1.4.0, 1.3.2
>
>         Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in batches. We create N lists, where N is the minimum of the configured number of replicator threads, the number of 100-waledit batches, and the number of current sinks. Every pending entry in the replication context is then placed, ordered by hash of encoded region name, into one of these N lists. Each of the N lists is then sent all at once in one replication RPC. We do not check whether the sum of data in each list will exceed RPC size limits. This code presumes each individual edit is reasonably small. Not checking the aggregate size while assembling the lists into RPCs is an oversight and can lead to replication failure when that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to drain a list, keeping each RPC under the limit, instead of assuming the whole list will fit in one.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
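To make the proposed fix concrete, below is a minimal sketch, not the actual HBASE-18027 patch, of draining one of the N lists with as many size-limited RPCs as needed. The Entry interface, heapSize(), sendReplicateRpc(), and the 64 MB limit in main() are hypothetical stand-ins for WAL.Entry, the per-entry size accounting quoted in the comment above, the ReplicateWALEntry call to the sink, and the configured RPC size limit.
{code}
import java.util.ArrayList;
import java.util.List;

public class SizeLimitedBatching {

  /** Hypothetical stand-in for WAL.Entry with a known heap size. */
  interface Entry {
    long heapSize();
  }

  /**
   * Drain one of the N per-thread lists with as many RPCs as needed,
   * keeping every RPC under maxRpcSize instead of assuming the whole
   * list fits in a single call.
   */
  static void replicateList(List<Entry> entries, long maxRpcSize) {
    List<Entry> batch = new ArrayList<>();
    long batchSize = 0;
    for (Entry e : entries) {
      long entrySize = e.heapSize();
      // Check before adding, so a non-empty batch never grows past the limit.
      if (!batch.isEmpty() && batchSize + entrySize > maxRpcSize) {
        sendReplicateRpc(batch);   // flush what we have so far
        batch = new ArrayList<>();
        batchSize = 0;
      }
      batch.add(e);
      batchSize += entrySize;
    }
    if (!batch.isEmpty()) {
      sendReplicateRpc(batch);     // flush the tail
    }
  }

  /** Placeholder for the actual ReplicateWALEntry RPC to the sink cluster. */
  static void sendReplicateRpc(List<Entry> batch) {
    System.out.println("replicating " + batch.size() + " entries in one RPC");
  }

  public static void main(String[] args) {
    List<Entry> entries = new ArrayList<>();
    for (int i = 0; i < 10; i++) {
      entries.add(() -> 40L * 1024 * 1024);   // ten 40 MB entries
    }
    replicateList(entries, 64L * 1024 * 1024); // e.g. a 64 MB RPC limit
  }
}
{code}
Checking the size before adding each entry, rather than after as noted in the comment above, keeps every multi-entry batch at or under the limit, while a single oversized entry is still sent alone in its own RPC.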