[ https://issues.apache.org/jira/browse/HADOOP-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866260#comment-15866260 ]
Sean Mackrory commented on HADOOP-13904:
----------------------------------------

{quote}This patch essentially keeps existing exception behavior but just slows down batch work resubmittal. So I think it is an improvement, but we may have to add a higher-level retry loop for the ProvisionedThroughputExceededException case. Why they don't just return all items as unprocessed is beyond me.{quote}

I'm of the opinion that we should be catching that one. It seems required to handle the documented behavior reasonably and correctly, even though we haven't seen that specific edge case. Everything else sounds good to me...

> DynamoDBMetadataStore to handle DDB throttling failures through retry policy
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-13904
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13904
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: HADOOP-13345
>            Reporter: Steve Loughran
>            Assignee: Aaron Fabbri
>         Attachments: HADOOP-13904-HADOOP-13345.001.patch, HADOOP-13904-HADOOP-13345.002.patch
>
>
> When you overload DDB, you get error messages warning of throttling, [as documented by AWS|http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes].
> Reduce load on DDB by doing a table lookup before the create; then, in table create/delete operations and in get/put actions, recognise the error codes and retry using an appropriate retry policy (exponential backoff + ultimate failure).

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
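The retry policy the issue description asks for (exponential backoff with ultimate failure on throttling) can be sketched in plain Java. This is an illustrative stand-alone sketch, not the HADOOP-13904 patch itself: the `ThrottlingException` class and the operation passed to `retry` are hypothetical stand-ins for the AWS SDK's `ProvisionedThroughputExceededException` and the actual DDB batch calls.

```java
import java.util.concurrent.Callable;

// Minimal sketch of exponential backoff + ultimate failure.
// ThrottlingException is a hypothetical stand-in for the AWS SDK's
// ProvisionedThroughputExceededException.
public class BackoffRetry {
    static class ThrottlingException extends RuntimeException {}

    static final int MAX_RETRIES = 5;
    static final long BASE_DELAY_MS = 100;

    /** Delay before retrying the given attempt: base * 2^attempt. */
    static long delayMs(int attempt) {
        return BASE_DELAY_MS << attempt;
    }

    /** Run the operation; retry on throttling, rethrow after MAX_RETRIES. */
    static <T> T retry(Callable<T> op) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (ThrottlingException e) {
                if (attempt >= MAX_RETRIES) {
                    throw e;                  // ultimate failure: give up
                }
                Thread.sleep(delayMs(attempt)); // back off before retrying
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate an operation that is throttled twice, then succeeds.
        int[] calls = {0};
        String result = retry(() -> {
            if (calls[0]++ < 2) {
                throw new ThrottlingException();
            }
            return "ok";
        });
        System.out.println(result + " after " + calls[0] + " calls");
        // prints: ok after 3 calls
    }
}
```

In Hadoop itself this logic would more likely be expressed through the existing `org.apache.hadoop.io.retry.RetryPolicies` utilities (e.g. `exponentialBackoffRetry`) rather than a hand-rolled loop.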