[jira] [Resolved] (HADOOP-16599) Allow a SignerInitializer to be specified along with a Custom Signer

2019-10-02 Thread Siddharth Seth (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth resolved HADOOP-16599.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> Allow a SignerInitializer to be specified along with a Custom Signer
> 
>
> Key: HADOOP-16599
> URL: https://issues.apache.org/jira/browse/HADOOP-16599
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Fix For: 3.3.0
>
>
> HADOOP-16445 added support for custom signers. This is a follow-up to allow 
> an Initializer to be specified along with the custom signer, for any 
> initialization that the specified signer requires.
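As a rough sketch of the shape such a configuration entry could take, the
parsing below assumes a `Name:SignerClass[:InitializerClass]` format; the
format and all class names here are illustrative assumptions, not the
committed Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: parse entries shaped like "SignerName:SignerClass[:InitializerClass]".
// The class names used in tests/usage are placeholders, not real Hadoop classes.
public class CustomSignerConfig {
    public final String signerName;
    public final String signerClass;
    public final String initializerClass; // null when no initializer is given

    CustomSignerConfig(String name, String signer, String initializer) {
        this.signerName = name;
        this.signerClass = signer;
        this.initializerClass = initializer;
    }

    public static Map<String, CustomSignerConfig> parse(String value) {
        Map<String, CustomSignerConfig> out = new HashMap<>();
        for (String entry : value.split(",")) {
            String[] parts = entry.trim().split(":");
            if (parts.length < 2 || parts.length > 3) {
                throw new IllegalArgumentException("Bad signer entry: " + entry);
            }
            String init = parts.length == 3 ? parts[2] : null;
            out.put(parts[0], new CustomSignerConfig(parts[0], parts[1], init));
        }
        return out;
    }
}
```

The initializer class name being optional preserves the plain custom-signer
behaviour from HADOOP-16445 when no initialization is needed.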



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-02 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16626:
---

 Summary: S3A ITestRestrictedReadAccess fails
 Key: HADOOP-16626
 URL: https://issues.apache.org/jira/browse/HADOOP-16626
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/s3
Reporter: Siddharth Seth


Just tried running the S3A test suite; consistently seeing the following 
failures. Command used:
{code}
mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
-Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
{code}

cc [~ste...@apache.org]

{code}
---
Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
---
Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)  
Time elapsed: 2.841 s  <<< ERROR!
java.nio.file.AccessDeniedException: 
test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
test/testNoReadAccess-raw/noReadDir/emptyDir/: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; 
S3 Extended Request ID: 
hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), 
S3 Extended Request ID: 
hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
 Forbidden
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
(Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
FE8B4D6F25648BCD; S3 Extended Request ID: 
hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), 
S3 Extended Request ID: 
hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
{code}
[jira] [Created] (HADOOP-16599) Allow a SignerInitializer to be specified along with a Custom Signer

2019-09-24 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16599:
---

 Summary: Allow a SignerInitializer to be specified along with a 
Custom Signer
 Key: HADOOP-16599
 URL: https://issues.apache.org/jira/browse/HADOOP-16599
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


HADOOP-16445 added support for custom signers. This is a follow-up to allow an 
Initializer to be specified along with the custom signer, for any 
initialization that the specified signer requires.






[jira] [Created] (HADOOP-16591) S3A ITest*MRjob failures

2019-09-20 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16591:
---

 Summary: S3A ITest*MRjob failures
 Key: HADOOP-16591
 URL: https://issues.apache.org/jira/browse/HADOOP-16591
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


The ITest*MRJob tests fail with a FileNotFoundException:
{code}
[ERROR]   
ITestMagicCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
 » FileNotFound
[ERROR]   
ITestDirectoryCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
 » FileNotFound
[ERROR]   
ITestPartitionCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
 » FileNotFound
[ERROR]   
ITestStagingCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
 » FileNotFound
{code}
Details here: 
https://issues.apache.org/jira/browse/HADOOP-16207?focusedCommentId=16933718=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16933718

Creating a separate jira since HADOOP-16207 already has a patch which is trying 
to parallelize the test runs.






[jira] [Created] (HADOOP-16586) ITestS3GuardFsck fails when run using a local metastore

2019-09-18 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16586:
---

 Summary: ITestS3GuardFsck fails when run using a local metastore
 Key: HADOOP-16586
 URL: https://issues.apache.org/jira/browse/HADOOP-16586
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Siddharth Seth


Most of these tests fail with a ClassCastException when run against a local 
metastore.

Not sure whether these tests are intended to work with DynamoDB only. The fix 
(either skip them for other metastores, or fix the tests) would depend on the 
original intent.
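If the tests turn out to be DynamoDB-only, the "skip for other metastores"
option could look roughly like the guard below. The types are stand-ins for
the real org.apache.hadoop.fs.s3a.s3guard classes, and in the actual test
JUnit's Assume would perform the skip rather than a null return.

```java
// Sketch of a "check before cast" guard that avoids the ClassCastException.
interface MetadataStore {}
class LocalMetadataStore implements MetadataStore {}
class DynamoDBMetadataStore implements MetadataStore {}

public class MetastoreGuard {
    /** Returns the store as Dynamo, or null when the test should be skipped. */
    static DynamoDBMetadataStore dynamoOrSkip(MetadataStore ms) {
        if (!(ms instanceof DynamoDBMetadataStore)) {
            // In the real test: Assume.assumeTrue("requires DynamoDB", false);
            return null;
        }
        return (DynamoDBMetadataStore) ms;  // safe cast after the check
    }
}
```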

{code}
---
Test set: org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
---
Tests run: 12, Failures: 0, Errors: 11, Skipped: 1, Time elapsed: 34.653 s <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
testIDetectParentTombstoned(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 3.237 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectParentTombstoned(ITestS3GuardFsck.java:190)

testIDetectDirInS3FileInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 1.827 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectDirInS3FileInMs(ITestS3GuardFsck.java:214)

testIDetectLengthMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.819 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectLengthMismatch(ITestS3GuardFsck.java:311)

testIEtagMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time 
elapsed: 2.832 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIEtagMismatch(ITestS3GuardFsck.java:373)

testIDetectFileInS3DirInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.752 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectFileInS3DirInMs(ITestS3GuardFsck.java:238)

testIDetectModTimeMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 4.103 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectModTimeMismatch(ITestS3GuardFsck.java:346)

testIDetectNoMetadataEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 3.017 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoMetadataEntry(ITestS3GuardFsck.java:113)

testIDetectNoParentEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.821 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoParentEntry(ITestS3GuardFsck.java:136)

testINoEtag(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time elapsed: 
4.493 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testINoEtag(ITestS3GuardFsck.java:403)

testIDetectParentIsAFile(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.782 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectParentIsAFile(ITestS3GuardFsck.java:163)

testTombstonedInMsNotDeletedInS3(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)
  Time elapsed: 3.008 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore 
{code}
[jira] [Created] (HADOOP-16584) S3A Test failures when S3Guard is not enabled

2019-09-17 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16584:
---

 Summary: S3A Test failures when S3Guard is not enabled
 Key: HADOOP-16584
 URL: https://issues.apache.org/jira/browse/HADOOP-16584
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
Reporter: Siddharth Seth


There are several S3A test failures when S3Guard is not enabled.
All of these tests pass once they are configured to use S3Guard.

{code}
ITestS3GuardTtl#testListingFilteredExpiredItems
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardTtl
[ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 4, Time elapsed: 
102.988 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardTtl
[ERROR] 
testListingFilteredExpiredItems[0](org.apache.hadoop.fs.s3a.ITestS3GuardTtl)  
Time elapsed: 14.675 s  <<< FAILURE!
java.lang.AssertionError:
[Metastrore directory listing of 
s3a://sseth-dev-in/fork-0002/test/testListingFilteredExpiredItems]
Expecting actual not to be null
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.getDirListingMetadata(ITestS3GuardTtl.java:367)
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.testListingFilteredExpiredItems(ITestS3GuardTtl.java:335)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.lang.Thread.run(Thread.java:748)

[ERROR] 
testListingFilteredExpiredItems[1](org.apache.hadoop.fs.s3a.ITestS3GuardTtl)  
Time elapsed: 44.463 s  <<< FAILURE!
java.lang.AssertionError:
[Metastrore directory listing of 
s3a://sseth-dev-in/fork-0002/test/testListingFilteredExpiredItems]
Expecting actual not to be null
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.getDirListingMetadata(ITestS3GuardTtl.java:367)
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.testListingFilteredExpiredItems(ITestS3GuardTtl.java:335)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.lang.Thread.run(Thread.java:748)
{code}

These failures are related to no metastore being used. The failure happens in 
teardown with an NPE, since setup did not complete. This is likely a simple 
fix: add some null checks in the teardown method.
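A minimal sketch of what such null-tolerant teardown could look like; the
fields are stand-ins for whatever the real test initializes in setup.

```java
// Sketch: when setup aborts early (e.g. an assume() fails), fields may still
// be null, so guard each cleanup step instead of dereferencing blindly.
public class TeardownSketch {
    private AutoCloseable fs;        // stand-in for the filesystem under test
    private AutoCloseable metastore; // stand-in for the S3Guard metastore

    public void teardown() throws Exception {
        if (metastore != null) {
            metastore.close();
        }
        if (fs != null) {
            fs.close();
        }
    }
}
```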
ITestAuthoritativePath (6 failures, all with the same pattern):
{code}
  [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 8.142 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestAuthoritativePath
[ERROR] testPrefixVsDirectory(org.apache.hadoop.fs.s3a.ITestAuthoritativePath)  
Time elapsed: 6.821 s  <<< ERROR!
org.junit.AssumptionViolatedException: FS needs to have a metadatastore.
  at org.junit.Assume.assumeTrue(Assume.java:59)
  at 
org.apache.hadoop.fs.s3a.ITestAuthoritativePath.setup(ITestAuthoritativePath.java:63)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{code}
[jira] [Created] (HADOOP-16583) Minor fixes to S3 testing instructions

2019-09-17 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16583:
---

 Summary: Minor fixes to S3 testing instructions
 Key: HADOOP-16583
 URL: https://issues.apache.org/jira/browse/HADOOP-16583
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


testing.md has some instructions which no longer work, and needs an update.

Specifically: how to enable S3Guard, and how to switch between DynamoDB and 
the local store.






[jira] [Created] (HADOOP-16538) S3AFilesystem trash handling should respect the current UGI

2019-08-28 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16538:
---

 Summary: S3AFilesystem trash handling should respect the current 
UGI
 Key: HADOOP-16538
 URL: https://issues.apache.org/jira/browse/HADOOP-16538
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Siddharth Seth


S3A move-to-trash currently relies on System.getProperty("user.name"). 
Instead, it should rely on the current UGI to determine the username.

getHomeDirectory needs to be overridden to use the UGI instead of 
System.getProperty.
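A sketch of the proposed direction, with a stand-in interface for
UserGroupInformation since this is not the real Hadoop code.

```java
// Sketch: derive the home (and hence trash) directory from the current
// UGI-like context rather than the JVM-level user.name property, which only
// reflects the OS-level process owner and ignores proxy/doAs users.
public class HomeDirSketch {
    interface Ugi { String getShortUserName(); } // stand-in for UserGroupInformation

    static String homeDirectory(Ugi currentUser) {
        return "/user/" + currentUser.getShortUserName();
    }
}
```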






[jira] [Resolved] (HADOOP-16449) Allow an empty credential provider chain, separate chains for S3 and DDB

2019-07-26 Thread Siddharth Seth (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth resolved HADOOP-16449.
-
Resolution: Won't Fix

> Allow an empty credential provider chain, separate chains for S3 and DDB
> 
>
> Key: HADOOP-16449
> URL: https://issues.apache.org/jira/browse/HADOOP-16449
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Attachments: HADOOP-16449.01.patch
>
>
> Currently, the credential chain cannot be empty (an empty value falls back 
> to the default chain), and the credentials for S3 and DDB are always the same.
> In some cases it is useful to use different credential chains for S3 and 
> DDB, as well as to allow an empty credential chain.






[jira] [Created] (HADOOP-16449) Allow an empty credential provider chain, separate chains for S3 and DDB

2019-07-23 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-16449:
---

 Summary: Allow an empty credential provider chain, separate chains 
for S3 and DDB
 Key: HADOOP-16449
 URL: https://issues.apache.org/jira/browse/HADOOP-16449
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


Currently, the credential chain cannot be empty (an empty value falls back to 
the default chain), and the credentials for S3 and DDB are always the same.

In some cases it is useful to use different credential chains for S3 and DDB, 
as well as to allow an empty credential chain.






[jira] [Created] (HADOOP-16445) Allow separate custom signing algorithms for S3 and DDB

2019-07-22 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-16445:
---

 Summary: Allow separate custom signing algorithms for S3 and DDB
 Key: HADOOP-16445
 URL: https://issues.apache.org/jira/browse/HADOOP-16445
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


fs.s3a.signing-algorithm allows overriding the signer, and applies to both the 
S3 and DDB clients. It should be possible to specify separate signing-algorithm 
overrides for S3 and DDB.
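One possible shape for such per-client overrides is a fallback lookup; the
service-suffixed key names below illustrate the approach and are not confirmed
configuration names.

```java
import java.util.Map;

// Sketch: resolve a per-service signing algorithm, falling back to the shared
// fs.s3a.signing-algorithm key when no service-specific override is set.
public class SignerLookup {
    static String signingAlgorithm(Map<String, String> conf, String service) {
        String specific = conf.get("fs.s3a.signing-algorithm." + service);
        if (specific != null) {
            return specific;
        }
        return conf.get("fs.s3a.signing-algorithm"); // shared fallback
    }
}
```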

 






[jira] [Created] (HADOOP-13335) Add an option to suppress the 'use yarn jar' warning or remove it

2016-06-30 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-13335:
---

 Summary: Add an option to suppress the 'use yarn jar' warning or 
remove it
 Key: HADOOP-13335
 URL: https://issues.apache.org/jira/browse/HADOOP-13335
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Siddharth Seth


https://issues.apache.org/jira/browse/HADOOP-11257 added a 'deprecation' 
warning for 'hadoop jar'.

hadoop jar is used for a lot more than starting jobs. As an example, hive uses 
it to start all its services (HiveServer2, the hive client, beeline, etc).
Using 'yarn jar' to start these services / tools doesn't make a lot of 
sense - there's no relation to yarn other than requiring the classpath to 
include yarn libraries.

I'd propose reverting the changes where this message is printed if YARN 
variables are set (leave it in the help message), or adding a mechanism which 
would allow users to suppress this WARNING.






[jira] [Created] (HADOOP-9362) Consider using error codes and enums for errors over IPC

2013-03-05 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-9362:
--

 Summary: Consider using error codes and enums for errors over IPC
 Key: HADOOP-9362
 URL: https://issues.apache.org/jira/browse/HADOOP-9362
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Siddharth Seth


Follow-up jira from HADOOP-9343, which has a little more detail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9343) Allow additional exceptions through the RPC layer

2013-02-27 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-9343:
--

 Summary: Allow additional exceptions through the RPC layer
 Key: HADOOP-9343
 URL: https://issues.apache.org/jira/browse/HADOOP-9343
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth


The RPC layer currently allows only IOException, RuntimeException, 
InterruptedException and their derivatives, which limits the exceptions that 
protocols can declare.
Other exceptions end up at the client as an UndeclaredThrowableException 
wrapped in a RemoteException.
Additional exception types should be allowed.



[jira] [Created] (HADOOP-8743) Change version in branch-2 to 2.2.0-SNAPSHOT, Update CHANGES.txt

2012-08-28 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-8743:
--

 Summary: Change version in branch-2 to 2.2.0-SNAPSHOT, Update 
CHANGES.txt
 Key: HADOOP-8743
 URL: https://issues.apache.org/jira/browse/HADOOP-8743
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Siddharth Seth


The version in branch-2 is currently set to 2.0.1-SNAPSHOT, which has already 
been released.
This is to update the version to 2.2.0-SNAPSHOT to match the target versions in 
jira.

Also, CHANGES.txt is in a bit of a mess: common and hdfs are missing sections 
for 2.0.1 as well as 2.1.0.



[jira] [Created] (HADOOP-7580) Add a version of getLocalPathForWrite to LocalDirAllocator which doesn't create dirs

2011-08-24 Thread Siddharth Seth (JIRA)
Add a version of getLocalPathForWrite to LocalDirAllocator which doesn't create 
dirs


 Key: HADOOP-7580
 URL: https://issues.apache.org/jira/browse/HADOOP-7580
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Siddharth Seth
Assignee: Siddharth Seth


Required in MR where directories are created by ContainerExecutor (mrv2) / 
TaskController (0.20) as a specific user.





[jira] [Created] (HADOOP-7296) The FsPermission(FsPermission) constructor does not use the sticky bit

2011-05-16 Thread Siddharth Seth (JIRA)
The FsPermission(FsPermission) constructor does not use the sticky bit
--

 Key: HADOOP-7296
 URL: https://issues.apache.org/jira/browse/HADOOP-7296
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Siddharth Seth
Priority: Minor


The FsPermission(FsPermission) constructor copies u, g, o from the supplied 
FsPermission object but ignores the sticky bit.
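The distinction can be illustrated with plain permission-bit arithmetic, using
ints as stand-ins for FsPermission rather than the real class.

```java
// Sketch: copying only the u/g/o bits (mask 0777) silently drops the sticky
// bit, whereas masking with 01777 preserves it alongside u/g/o.
public class StickyBitSketch {
    static int copyWithoutSticky(int mode) { return mode & 0777; }  // the bug
    static int copyFull(int mode)          { return mode & 01777; } // the fix
}
```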


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira