[ https://issues.apache.org/jira/browse/HADOOP-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15830223#comment-15830223 ]
Steve Loughran commented on HADOOP-13985:
-----------------------------------------

This patch is also going to include a change to stop the error text getting lost during AbstractFileSystem instantiation, on the basis that losing it is utterly useless, and pretty much every AFS subclass does declare IOEs as thrown from its constructor. The public APIs in hadoop-common don't declare that IOEs are thrown, so the unwound exception still has to be wrapped in an RTE, but at least now the text is retained.

Before:
{code}
testStatisticsThreadLocalDataCleanUp(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics)  Time elapsed: 0.44 sec  <<< ERROR!
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.verifyVersionCompatibility(DynamoDBMetadataStore.java:618)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:583)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:246)
	at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:92)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:258)
	at org.apache.hadoop.fs.DelegateToFileSystem.<init>(DelegateToFileSystem.java:52)
	at org.apache.hadoop.fs.s3a.S3A.<init>(S3A.java:40)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:134)
	at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
	at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
	at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
	at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:328)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
	at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:328)
	at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:445)
	at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileContext(S3ATestUtils.java:154)
	at org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics.setUp(ITestS3AFileContextStatistics.java:34)
{code}

After:
{code}
testStatisticsThreadLocalDataCleanUp(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics)  Time elapsed: 0.407 sec  <<< ERROR!
java.lang.RuntimeException: java.io.IOException: S3Guard table lacks version marker. Table: hwdev-steve-frankfurt-new
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.verifyVersionCompatibility(DynamoDBMetadataStore.java:618)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:583)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:246)
	at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:92)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:258)
	at org.apache.hadoop.fs.DelegateToFileSystem.<init>(DelegateToFileSystem.java:52)
	at org.apache.hadoop.fs.s3a.S3A.<init>(S3A.java:40)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:135)
	at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:173)
	at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:258)
	at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:332)
	at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:329)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
	at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:329)
	at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:454)
	at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileContext(S3ATestUtils.java:154)
	at org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics.setUp(ITestS3AFileContextStatistics.java:34)
{code}

> s3guard: add a version marker to every table
> --------------------------------------------
>
>                 Key: HADOOP-13985
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13985
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: HADOOP-13345
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>         Attachments: HADOOP-13985-HADOOP-13345-001.patch
>
> This is something else we need before any preview: a way to identify a table version, so that if future versions change the table structure:
> * older clients can recognise that it's a newer format, and fail
> * the future version can identify that it's an older format, and fail until some fsck-upgrade operation has taken place
> I think something like a row on a path which is impossible in a real filesystem, such as "../VERSION", would allow a version marker to go in; the length field could be abused for the version number.
> This field would be something that's checked in init(), so it can double as the simple test for table existence that we need for faster init.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
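The error-text retention described in the comment can be sketched outside Hadoop. This is a minimal illustration of the idea, not the actual patch: when a reflective constructor call fails, unwrap the InvocationTargetException and carry the cause's text into the RuntimeException the caller sees. The class and message below are stand-ins invented for the example.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

public class NewInstanceSketch {

    // Hypothetical stand-in for an AbstractFileSystem subclass whose
    // constructor declares IOException as thrown.
    public static class FailingFs {
        public FailingFs() throws IOException {
            throw new IOException("S3Guard table lacks version marker. Table: example-table");
        }
    }

    // Sketch of the fix: unwrap InvocationTargetException so the cause's
    // text survives, instead of surfacing only the useless
    // "java.lang.reflect.InvocationTargetException" string.
    static <T> T newInstance(Class<T> clazz) {
        try {
            Constructor<T> ctor = clazz.getDeclaredConstructor();
            return ctor.newInstance();
        } catch (InvocationTargetException e) {
            Throwable cause = e.getCause();
            // The caller's API doesn't declare IOException, so wrap in an
            // RTE, but keep the underlying message and cause chain.
            throw new RuntimeException(cause.toString(), cause);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        try {
            newInstance(FailingFs.class);
        } catch (RuntimeException e) {
            // Message now names the real problem, not the reflection wrapper.
            System.out.println(e.getMessage());
        }
    }
}
```

With the "before" behaviour, `e.getMessage()` would only say `java.lang.reflect.InvocationTargetException`; here it carries the IOException's own text.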
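The version-marker scheme from the issue description can be sketched in plain Java. This models the DynamoDB table as a map rather than using the AWS SDK, so it is an illustration of the check, not the real DynamoDBMetadataStore code; the version number and error strings are illustrative.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class VersionMarkerSketch {

    // The marker lives at a path no real filesystem can produce, so it can
    // never collide with metadata for an actual file.
    static final String VERSION_MARKER = "../VERSION";

    // Hypothetical current schema version.
    static final int VERSION = 100;

    // Stand-in for the table: path -> "file length" attribute, which the
    // marker row abuses to hold the schema version.
    private final Map<String, Long> table = new HashMap<>();

    void createTable() {
        table.put(VERSION_MARKER, (long) VERSION);
    }

    // Sketch of the init()-time check: one cheap read both proves the table
    // exists in a known format and reveals which schema version wrote it.
    void verifyVersionCompatibility() throws IOException {
        Long marker = table.get(VERSION_MARKER);
        if (marker == null) {
            // Older-format (or foreign) table: fail rather than guess.
            throw new IOException("S3Guard table lacks version marker.");
        }
        if (marker != VERSION) {
            // Newer or older schema: fail until an upgrade has taken place.
            throw new IOException("S3Guard table has incompatible version "
                + marker + "; expected " + VERSION);
        }
    }
}
```

Because the marker read doubles as the existence probe, init() needs no separate "does the table exist" round trip.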