[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries

2019-06-27 Steve Loughran (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874419#comment-16874419 ]

Steve Loughran commented on HADOOP-15834:
-----------------------------------------

With DDB on-demand capacity, throttling effectively goes away, so this is less important.
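For reference, a hedged sketch of flipping an existing table to on-demand with the AWS SDK v1; the table name is illustrative, and the billing-mode setter assumes an SDK recent enough (late 2018+) to know about PAY_PER_REQUEST:

{code}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.BillingMode;
import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;

// Sketch only: switch a provisioned-capacity table to on-demand billing,
// after which provisioned-throughput throttling (and hence this retry
// problem) largely disappears.
public final class SwitchToOnDemand {
  public static void main(String[] args) {
    AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();
    ddb.updateTable(new UpdateTableRequest()
        .withTableName("example-s3guard-table")       // illustrative name
        .withBillingMode(BillingMode.PAY_PER_REQUEST));
  }
}
{code}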

> Improve throttling on S3Guard DDB batch retries
> -----------------------------------------------
>
> Key: HADOOP-15834
> URL: https://issues.apache.org/jira/browse/HADOOP-15834
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> The batch throttling may fail too fast.
> If there's a batch update of 25 writes but the default retry count is nine
> attempts, only nine attempts at the batch will be made... even if each
> attempt actually writes some data successfully.
> In contrast, a single write of one piece of data gets the same number of
> attempts, so 25 individual writes can tolerate far more throttling than one
> bulk write.
> Proposed: make the retry logic more forgiving of batch writes, e.g. don't
> count a batch call where at least one data item was written as a failure
> (see the sketch below).
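A minimal sketch of that proposal, written against the AWS SDK v1 low-level batch API; the class, retry budget, and backoff below are illustrative assumptions, not the actual S3Guard code path:

{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

// Illustrative sketch, not the S3Guard implementation.
public final class ForgivingBatchWrite {

  /**
   * Retry a batch write, only consuming a retry when an attempt makes
   * no progress at all; any attempt which writes at least one item
   * resets the retry budget.
   */
  public static void batchWrite(AmazonDynamoDB ddb,
      Map<String, List<WriteRequest>> pending)
      throws IOException, InterruptedException {
    final int maxRetries = 9;     // illustrative; mirrors the default count above
    int retriesLeft = maxRetries;
    long delayMs = 100;
    while (!pending.isEmpty()) {
      BatchWriteItemResult result = ddb.batchWriteItem(
          new BatchWriteItemRequest().withRequestItems(pending));
      Map<String, List<WriteRequest>> unprocessed = result.getUnprocessedItems();
      if (count(unprocessed) < count(pending)) {
        retriesLeft = maxRetries; // progress was made: not a failed attempt
        delayMs = 100;
      } else if (--retriesLeft < 0) {
        throw new IOException("Batch write made no progress after "
            + maxRetries + " throttled attempts");
      }
      pending = unprocessed;
      if (!pending.isEmpty()) {
        Thread.sleep(delayMs);    // crude backoff; real code would add jitter
        delayMs = Math.min(delayMs * 2, 10_000);
      }
    }
  }

  private static int count(Map<String, List<WriteRequest>> batch) {
    return batch.values().stream().mapToInt(List::size).sum();
  }
}
{code}

The design choice here: progress resets the retry budget, so a 25-item batch which writes one item per throttled attempt still completes, while a batch making no progress at all fails as fast as a single write would.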




[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries

2018-10-09 Steve Loughran (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644107#comment-16644107 ]

Steve Loughran commented on HADOOP-15834:
-----------------------------------------

Moved the DDB "table not ACTIVE" issue out to a self-contained fix, HADOOP-15837.




[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries

2018-10-09 Steve Loughran (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16643923#comment-16643923 ]

Steve Loughran commented on HADOOP-15834:
-----------------------------------------

Assuming that "exists but inactive" == capacity reallocation, we should catch &
log that state and use the batch retry policy. Key point: we can/should wait
longer than just the SDK's own retries. A sketch of that idea follows.
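Something like this, with our own capped backoff layered on top of whatever retrying the SDK does internally; the limits and logging are illustrative, not the actual S3Guard fix:

{code}
import java.io.IOException;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.TableStatus;

// Illustrative sketch, not the S3Guard implementation.
public final class AwaitTableActive {

  /**
   * Wait for a table to return to ACTIVE, with our own (longer) retry
   * policy on top of the SDK's internal retries.
   */
  public static void await(AmazonDynamoDB ddb, String tableName)
      throws IOException, InterruptedException {
    final int maxAttempts = 10;   // illustrative limits
    long delayMs = 1_000;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      String status = ddb.describeTable(tableName).getTable().getTableStatus();
      if (TableStatus.ACTIVE.toString().equals(status)) {
        return;
      }
      // catch & log point: the table exists but is UPDATING or similar,
      // presumably capacity reallocation, so wait and retry.
      System.out.printf("Table %s in state %s; attempt %d/%d%n",
          tableName, status, attempt, maxAttempts);
      Thread.sleep(delayMs);
      delayMs = Math.min(delayMs * 2, 30_000);  // capped exponential backoff
    }
    throw new IOException("Table " + tableName + " did not become ACTIVE");
  }
}
{code}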




[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries

2018-10-09 Steve Loughran (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16643618#comment-16643618 ]

Steve Loughran commented on HADOOP-15834:
-----------------------------------------

Also, switching to dynamic capacity seems to trigger periods when the DDB table
isn't active any more:
{code}
[ERROR] Tests run: 68, Failures: 0, Errors: 1, Skipped: 4, Time elapsed: 429.837 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations
[ERROR] testCreateFlagAppendNonExistingFile(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations)  Time elapsed: 127.843 s  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state.
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:464)
    at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileContext(S3ATestUtils.java:218)
    at org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations.setUp(ITestS3AFileContextMainOperations.java:33)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
Caused by: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state.
    at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:114)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:378)
    at org.apache.hadoop.fs.DelegateToFileSystem.<init>(DelegateToFileSystem.java:52)
    at org.apache.hadoop.fs.s3a.S3A.<init>(S3A.java:40)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:135)
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:173)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:258)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:336)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:333)
    at java.security.AccessController.doPrivileged(Native Method)
    at
{code}