[ https://issues.apache.org/jira/browse/HADOOP-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16541627#comment-16541627 ]

Sean Mackrory commented on HADOOP-15349:
----------------------------------------

+1 to the change. It would be nice if we could confirm that the provisioned I/O 
thresholds really are the reason for the unprocessed items. I don't know what 
else would cause that, and the JavaDocs don't mention anything. We can retrieve 
the capacity consumed in a given attempt from the BatchWriteItemResult, but the 
capacity configured for the table doesn't seem to be exposed in that API (and we 
can't assume the capacity Hadoop configures for new tables is the same as that 
of the current table).
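
For reference, something like the sketch below is roughly what the SDK gives us 
per attempt (the method name is just for illustration, not actual S3Guard code; 
note consumed capacity only comes back if the request asks for it):

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
    import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
    import com.amazonaws.services.dynamodbv2.model.ConsumedCapacity;
    import com.amazonaws.services.dynamodbv2.model.ReturnConsumedCapacity;
    import com.amazonaws.services.dynamodbv2.model.WriteRequest;
    import java.util.List;
    import java.util.Map;

    // Sketch only: log what BatchWriteItemResult exposes when items go unprocessed.
    static void logBatchWriteOutcome(AmazonDynamoDB ddb,
        Map<String, List<WriteRequest>> requestItems) {
      BatchWriteItemRequest request = new BatchWriteItemRequest()
          .withRequestItems(requestItems)
          // Consumed capacity is only returned when explicitly requested.
          .withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL);
      BatchWriteItemResult result = ddb.batchWriteItem(request);

      Map<String, List<WriteRequest>> unprocessed = result.getUnprocessedItems();
      if (!unprocessed.isEmpty()) {
        // We can say how much capacity this attempt consumed...
        for (ConsumedCapacity cc : result.getConsumedCapacity()) {
          System.out.printf("Table %s consumed %.1f write capacity units%n",
              cc.getTableName(), cc.getCapacityUnits());
        }
        // ...but not how much was provisioned, so the log can't state the limit.
        System.out.printf("%d table(s) still have unprocessed items%n",
            unprocessed.size());
      }
    }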

It might be nice to do a bit of testing with some large batch sizes to see 
whether we can at least document a recommended minimum that reliably avoids 
exhausting the exponential back-off's buffer time. Can you file a follow-up JIRA 
for that, and I'll commit this patch?
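
As a back-of-the-envelope aid for that testing, the total buffer time of a 
capped exponential back-off is easy to tabulate; the base delay, cap and attempt 
count below are made-up numbers, not S3Guard's actual retry settings:

    // Illustrative only: cumulative sleep of a capped exponential back-off.
    static long totalBackoffMillis(long baseMillis, long capMillis, int attempts) {
      long total = 0;
      for (int i = 0; i < attempts; i++) {
        total += Math.min(capMillis, baseMillis << i);   // base * 2^i, capped
      }
      return total;
    }
    // e.g. totalBackoffMillis(100, 10_000, 9)
    //   = 100+200+400+800+1600+3200+6400+10000+10000 ms, roughly 32.7 s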

> S3Guard DDB retryBackoff to be more informative on limits exceeded
> ------------------------------------------------------------------
>
>                 Key: HADOOP-15349
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15349
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Gabor Bota
>            Priority: Major
>         Attachments: HADOOP-15349.001.patch, failure.log
>
>
> When S3Guard can't update the DB and so throws an IOE after the retry limit 
> is exceeded, it's not at all informative. Improve logging & exception


