[jira] [Updated] (HADOOP-13890) Maintain support for HTTP/host as SPNEGO SPN and fix KerberosName parsing

2019-10-21 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13890:
-
Description: 
HADOOP-13565 introduced an incompatible check that disallowed principals like 
HTTP/host from being used as SPNEGO SPNs. 
This breaks the following tests in trunk: TestWebDelegationToken, TestKMS, 
TestTrashWithSecureEncryptionZones and TestSecureEncryptionZoneWithKMS, because 
they used HTTP/localhost as the SPNEGO SPN, assuming the default realm. This 
ticket is opened to bring back support for HTTP/host as a valid SPNEGO SPN. 

A KerberosName parsing bug was discovered, fixed and included as a necessary 
part of this ticket, along with additional unit tests covering the parsing of 
different forms of principals. 
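
For illustration, here is a minimal sketch (not part of the patch) of parsing a 
realm-less SPN of this form with Hadoop's KerberosName; the expected outputs 
are assumptions based on the behavior this ticket restores:

{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class SpnParsingSketch {
  public static void main(String[] args) {
    // A SPNEGO SPN without an explicit realm, as used by the failing tests.
    KerberosName name = new KerberosName("HTTP/localhost");
    // With HTTP/host accepted again, parsing yields the service and host
    // components; a null realm means the default realm is assumed.
    System.out.println(name.getServiceName()); // HTTP
    System.out.println(name.getHostName());    // localhost
    System.out.println(name.getRealm());       // null -> default realm
  }
}
{code}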

 *Jenkins URL* 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/

  was:
HADOOP-13565 introduced an incompatible check that disallowed principals like 
HTTP/host from being used as SPNEGO SPNs. 
This breaks the following tests in trunk: TestWebDelegationToken, TestKMS, 
TestTrashWithSecureEncryptionZones and TestSecureEncryptionZoneWithKMS, because 
they used HTTP/localhost as the SPNEGO SPN, assuming the default realm. This 
ticket is opened to bring back support for HTTP/host as a valid SPNEGO SPN. 

A KerberosName parsing bug was discovered, fixed and included as a necessary 
part of this ticket, along with additional unit tests covering the parsing of 
different forms of principals. 

 *Jenkins URL* 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/


> Maintain support for HTTP/host as SPNEGO SPN and fix KerberosName parsing 
> --
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch, HADOOP-13890.04.patch, 
> HADOOP-13890.05.patch, test-failure.txt, test_failure_1.txt
>
>
> HADOOP-13565 introduced an incompatible check that disallowed principals 
> like HTTP/host from being used as SPNEGO SPNs. 
> This breaks the following tests in trunk: TestWebDelegationToken, TestKMS, 
> TestTrashWithSecureEncryptionZones and TestSecureEncryptionZoneWithKMS, 
> because they used HTTP/localhost as the SPNEGO SPN, assuming the default 
> realm. This ticket is opened to bring back support for HTTP/host as a valid 
> SPNEGO SPN. 
> A KerberosName parsing bug was discovered, fixed and included as a necessary 
> part of this ticket, along with additional unit tests covering the parsing 
> of different forms of principals. 
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16541) RM fails to start due to zkCuratorManager connectionTimeoutMs; make it configurable

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-16541:
-
Summary: RM fails to start due to zkCuratorManager connectionTimeoutMs; make 
it configurable  (was: RM fails to start due to zkCuratorManager 
connectionTimeoutMs)

> RM fails to start due to zkCuratorManager connectionTimeoutMs; make it configurable
> --
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HADOOP-16541_1.patch
>
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed (Curator's hard-coded default is used). 
> In some scenarios, the RM fails to start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.
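
A minimal sketch of the proposed shape of the change, assuming a new key named 
"hadoop.zk.connection.timeout.ms" (the key name is hypothetical) next to the 
existing session timeout and address keys:

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;
import org.apache.hadoop.conf.Configuration;

public class CuratorTimeoutSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Existing key: Curator/ZooKeeper session timeout.
    int sessionTimeoutMs = conf.getInt("hadoop.zk.timeout-ms", 10000);
    // Proposed key (name hypothetical): connection timeout, which today
    // silently falls back to Curator's hard-coded default.
    int connectionTimeoutMs =
        conf.getInt("hadoop.zk.connection.timeout.ms", 15000);

    CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString(conf.get("hadoop.zk.address"))
        .sessionTimeoutMs(sessionTimeoutMs)
        .connectionTimeoutMs(connectionTimeoutMs)
        .retryPolicy(new RetryNTimes(3, 1000))
        .build();
    client.start();
  }
}
{code}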



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16541) RM fails to start due to zkCuratorManager connectionTimeoutMs

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-16541:
-
Status: Patch Available  (was: Open)

> RM fails to start due to zkCuratorManager connectionTimeoutMs 
> --
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HADOOP-16541_1.patch
>
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed (Curator's hard-coded default is used). 
> In some scenarios, the RM fails to start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16541) RM fails to start due to zkCuratorManager connectionTimeoutMs

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16541 started by Shen Yinjie.

> RM fails to start due to zkCuratorManager connectionTimeoutMs 
> --
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HADOOP-16541_1.patch
>
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed (Curator's hard-coded default is used). 
> In some scenarios, the RM fails to start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16541) RM fails to start due to zkCuratorManager connectionTimeoutMs

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-16541:
-
Attachment: HADOOP-16541_1.patch

> RM fails to start due to zkCuratorManager connectionTimeoutMs 
> --
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HADOOP-16541_1.patch
>
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed (Curator's hard-coded default is used). 
> In some scenarios, the RM fails to start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-16541) RM fails to start due to zkCuratorManager connectionTimeoutMs

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16541 stopped by Shen Yinjie.

> RM fails to start due to zkCuratorManager connectionTimeoutMs 
> --
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HADOOP-16541_1.patch
>
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed (Curator's hard-coded default is used). 
> In some scenarios, the RM fails to start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16541) RM fails to start due to zkCuratorManager connectionTimeoutMs

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-16541:
-
Summary: RM fails to start due to zkCuratorManager connectionTimeoutMs  (was: 
Make zkCuratorManager connectionTimeoutMs configurable)

> RM fails to start due to zkCuratorManager connectionTimeoutMs 
> --
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed (Curator's hard-coded default is used). 
> In some scenarios, the RM fails to start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16541) Make zkCuratorManager connectionTimeoutMs configurable

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-16541:
-
Description: 
NameNode and ResourceManager do leader election via Curator. Currently, 
Curator's session timeout is defined in CommonConfigurationKeys, but the 
connection timeout cannot be changed (Curator's hard-coded default is used). 
In some scenarios, the RM fails to start because the connection to ZooKeeper 
times out. We propose to add a Hadoop configuration key for Curator's ZooKeeper 
connection timeout, so that we can handle this situation better.

  was:
NameNode and ResourceManager do leader election via Curator. Currently, 
Curator's session timeout is defined in CommonConfigurationKeys, but the 
connection timeout cannot be changed in Hadoop (Curator's hard-coded default 
timeout is used). 
In some scenarios, the RM can't start because the connection to ZooKeeper times 
out. We propose to add a Hadoop configuration key for Curator's ZooKeeper 
connection timeout, so that we can handle this situation better.


> Make zkCuratorManager connectionTimeoutMs configurable
> ---
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed (Curator's hard-coded default is used). 
> In some scenarios, the RM fails to start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16541) Make zkCuratorManager connectionTimeoutMs configurable

2019-09-02 Thread Shen Yinjie (Jira)
Shen Yinjie created HADOOP-16541:


 Summary: Make zkCuratorManager connectionTimeoutMs configurable
 Key: HADOOP-16541
 URL: https://issues.apache.org/jira/browse/HADOOP-16541
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Shen Yinjie


NameNode and ResourceManager do leader election via Curator. Currently, 
Curator's session timeout is defined in CommonConfigurationKeys, but the 
connection timeout cannot be changed in Hadoop (Curator's hard-coded default 
timeout is used). 
In some scenarios, the RM can't start because the connection to ZooKeeper times 
out. We propose to add a Hadoop configuration key for Curator's ZooKeeper 
connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16541) Make zkCuratorManager connectionTimeoutMs configurable

2019-09-02 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie reassigned HADOOP-16541:


Assignee: Shen Yinjie

> Make zkCuratorManager connectionTimeoutMs configurable
> ---
>
> Key: HADOOP-16541
> URL: https://issues.apache.org/jira/browse/HADOOP-16541
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
>
> NameNode and ResourceManager do leader election via Curator. Currently, 
> Curator's session timeout is defined in CommonConfigurationKeys, but the 
> connection timeout cannot be changed in Hadoop (Curator's hard-coded default 
> timeout is used). 
> In some scenarios, the RM can't start because the connection to ZooKeeper 
> times out. We propose to add a Hadoop configuration key for Curator's 
> ZooKeeper connection timeout, so that we can handle this situation better.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15976) NameNode performance degradation when a single LDAP server becomes a bottleneck in the LDAP-based mapping module

2019-06-18 Thread Shen Yinjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867191#comment-16867191
 ] 

Shen Yinjie commented on HADOOP-15976:
--

[~jojochuang], [~fengyongshe] cannot spare time currently and has handed this 
issue over to me offline. I will create a PR soon.

> NameNode performance degradation when a single LDAP server becomes a 
> bottleneck in the LDAP-based mapping module 
> --
>
> Key: HADOOP-15976
> URL: https://issues.apache.org/jira/browse/HADOOP-15976
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.1
>Reporter: fengyongshe
>Assignee: fengyongshe
>Priority: Major
> Attachments: HADOOP-15976.patch, image003(12-05-1(12-05-10-36-26).jpg
>
>
> On a 2000+ node cluster, we use OpenLDAP to manage users and groups. When 
> LdapGroupsMapping is used, group lookups cause severe problems, including 
> NameNode performance degradation and NameNode crashes. 
> WARN security.Groups: Potential performance problem:
>  getGroups(user=) took 46817 milliseconds.
>  INFO namenode.FSNamesystem (FSNamesystemLock.java:writeUnlock(252)) - 
> FSNamesystem write lock held for 46817 ms via java.lang.Thread.getStackTrace
> We found the LDAP server becomes the bottleneck for NameNode operations; a 
> single LDAP server only supports hundreds of requests per second. 
> P.S. The server was running nslcd. 
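
For context, a minimal sketch (assumed setup, not the fix) of driving a group 
lookup through LdapGroupsMapping the way the NameNode does; the LDAP URL and 
user name are placeholders:

{code}
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Groups;

public class LdapLookupTimingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.group.mapping",
        "org.apache.hadoop.security.LdapGroupsMapping");
    conf.set("hadoop.security.group.mapping.ldap.url",
        "ldap://ldap-server:389"); // placeholder URL

    Groups groups = new Groups(conf);
    // Time a single lookup; per the log above, on the NameNode this happens
    // while the FSNamesystem write lock is held, so a slow LDAP server
    // stalls every writer.
    long start = System.currentTimeMillis();
    List<String> result = groups.getGroups("someUser"); // placeholder user
    System.out.println(result + " in "
        + (System.currentTimeMillis() - start) + " ms");
  }
}
{code}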



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-08-18 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427428#comment-15427428
 ] 

Shen Yinjie commented on HADOOP-13405:
--

Thanks, [~ste...@apache.org] and [~cnauroth], I really appreciate it! By the 
way, I have watched your wonderful videos from Hadoop Summit in San Jose. :-p

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13405.patch, HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is 
> executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.
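
The error above matches Enum.valueOf semantics: the lookup is by the enum 
constant name, not by the header string in parentheses. A minimal sketch of 
the distinction, assuming the v1 AWS SDK on the classpath:

{code}
import com.amazonaws.services.s3.model.CannedAccessControlList;

public class CannedAclSketch {
  public static void main(String[] args) {
    // Lookup by enum constant name works:
    CannedAccessControlList acl = CannedAccessControlList.valueOf("PublicRead");
    System.out.println(acl); // prints the S3 header value, "public-read"

    // Lookup by the header value reproduces the -ls failure above:
    // IllegalArgumentException: No enum constant
    // com.amazonaws.services.s3.model.CannedAccessControlList.public-read
    CannedAccessControlList bad = CannedAccessControlList.valueOf("public-read");
  }
}
{code}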



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-12763) S3AFileSystem And Hadoop FsShell Operations

2016-08-11 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-12763:
-
Comment: was deleted

(was: Hi, Stephen Montgomery. I have one question: do FsShell "-get" and 
"-rm" work in your env?)

> S3AFileSystem And Hadoop FsShell Operations
> ---
>
> Key: HADOOP-12763
> URL: https://issues.apache.org/jira/browse/HADOOP-12763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Stephen Montgomery
>
> Hi,
> I'm looking at the Hadoop S3A Filesystem and FS Shell commands (specifically 
> -ls and -copyFromLocal/Put).
> 1. Create S3 bucket eg test-s3a-bucket.
> 2. List bucket contents using S3A and get an error: 
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3a://test-s3a-bucket/
> 16/02/03 16:31:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://test-s3a-bucket/': No such file or directory
> 3. List bucket contents using S3N and get no results (fair enough):
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3n://test-s3a-bucket/
> 16/02/03 16:32:41 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 4. Attempt to copy a file from local fs to S3A and get an error (with or 
> without the trailing slash):
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -copyFromLocal /tmp/zz 
> s3a://test-s3a-bucket/
> 16/02/03 16:35:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> copyFromLocal: `s3a://test-s3a-bucket/': No such file or directory
> 5. Attempt to copy a file from local fs to S3N and works:
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -copyFromLocal /tmp/zz 
> s3n://test-s3a-bucket/
> 16/02/03 16:36:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' writing to tempfile 
> '/tmp/hadoop-monty/s3/output-9212095517127973121.tmp'
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' closed. Now beginning upload
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' upload complete
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3a://test-s3a-bucket/
> 16/02/03 16:36:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-rw-rw-   1200 2016-02-03 16:36 s3a://test-s3a-bucket/zz
> It seems that basic filesystem operations can't be performed with an 
> empty/new bucket. I have been able to populate buckets with distcp but I 
> wonder if this is because I was copying directories instead of individual 
> files.
> I know that S3A uses the AmazonS3 client and S3N uses JetS3t, so different 
> underlying implementations/potentially different behaviours, but I mainly 
> used s3n for illustration purposes (and it looks like it's working as 
> expected).
> Can someone confirm this behaviour? Is it expected?
> Thanks,
> Stephen



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13222) s3a.mkdirs() to delete empty fake parent directories

2016-07-29 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15398947#comment-15398947
 ] 

Shen Yinjie edited comment on HADOOP-13222 at 7/29/16 8:36 AM:
---

I first create object {{aa/bb}} with the aws-java-sdk, and it is not empty; it 
is then fine to create object {{aa/bb/cc/dd}} (with the SDK or the Hadoop 
shell). But when executing hadoop fs -ls or -cat on {{aa/bb}}, they will only 
work on the file {{aa/bb}}, though that is reasonable for s3a.


was (Author: shenyinjie):
I first create object {{aa/bb}} with the aws-java-sdk, and it is not empty; it 
is then fine to create object {{aa/bb/cc/dd}} (with the SDK or the Hadoop 
shell). But when executing hadoop fs -ls or -cat on {{aa/bb}}, something wrong 
will appear, though it is reasonable for s3a.
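
A minimal sketch of the scenario described in the comment, using the v1 AWS 
SDK (bucket name and client setup are placeholders): both puts succeed because 
S3 keys are flat; only a filesystem view over them treats {{aa/bb}} as both a 
file and a directory.

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class NestedKeySketch {
  public static void main(String[] args) {
    // Placeholder client; credentials and region come from the environment.
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // A non-empty object at "aa/bb"...
    s3.putObject("test-bucket", "aa/bb", "some contents");
    // ...and another object "below" it. S3 accepts this, but a filesystem
    // client running -ls or -cat on aa/bb then sees conflicting views.
    s3.putObject("test-bucket", "aa/bb/cc/dd", "more contents");
  }
}
{code}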

> s3a.mkdirs() to delete empty fake parent directories
> 
>
> Key: HADOOP-13222
> URL: https://issues.apache.org/jira/browse/HADOOP-13222
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AFileSystem.mkdirs()}} has a TODO comment: what to do about fake parent 
> directories.
> The answer is: as with files, they should be deleted. This can be done 
> asynchronously.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13222) s3a.mkdirs() to delete empty fake parent directories

2016-07-29 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15398947#comment-15398947
 ] 

Shen Yinjie edited comment on HADOOP-13222 at 7/29/16 8:25 AM:
---

I first create object {{aa/bb}} with the aws-java-sdk, and it is not empty; it 
is then fine to create object {{aa/bb/cc/dd}} (with the SDK or the Hadoop 
shell). But when executing hadoop fs -ls or -cat on {{aa/bb}}, something wrong 
will appear, though it is reasonable for s3a.


was (Author: shenyinjie):
I first create object {{aa/bb}} with the aws-java-sdk, and it is not empty; it 
is then fine to create object {{aa/bb/cc/dd}} (with the SDK or the Hadoop 
shell). But when executing hadoop fs -ls or -cat on {{aa/bb}}, something wrong 
will appear, though it is reasonable in s3a itself.

> s3a.mkdirs() to delete empty fake parent directories
> 
>
> Key: HADOOP-13222
> URL: https://issues.apache.org/jira/browse/HADOOP-13222
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AFileSystem.mkdirs()}} has a TODO comment: what to do about fake parent 
> directories.
> The answer is: as with files, they should be deleted. This can be done 
> asynchronously.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13222) s3a.mkdirs() to delete empty fake parent directories

2016-07-29 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15398947#comment-15398947
 ] 

Shen Yinjie commented on HADOOP-13222:
--

I first create object {{aa/bb}} with the aws-java-sdk, and it is not empty; it 
is then fine to create object {{aa/bb/cc/dd}} (with the SDK or the Hadoop 
shell). But when executing hadoop fs -ls or -cat on {{aa/bb}}, something wrong 
will appear, though it is reasonable in s3a itself.

> s3a.mkdirs() to delete empty fake parent directories
> 
>
> Key: HADOOP-13222
> URL: https://issues.apache.org/jira/browse/HADOOP-13222
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AFileSystem.mkdirs()}} has a TODO comment: what to do about fake parent 
> directories.
> The answer is: as with files, they should be deleted. This can be done 
> asynchronously.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-07-23 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390929#comment-15390929
 ] 

Shen Yinjie commented on HADOOP-13405:
--

Thanks for your comment, [~ste...@apache.org]. I tested it in my local 
environment, but I still have no idea how to add a unit test for this. Should 
I test that these values actually lead to different user access to objects in 
s3a? Or can you give me any tips? :p

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is 
> executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-07-22 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13405:
-
Attachment: HADOOP-13405.patch

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is 
> executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-07-22 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13405:
-
Attachment: (was: HADOOP-13405.patch)

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is 
> executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13294) Test hadoop fs shell against s3a; fix problems

2016-07-22 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389111#comment-15389111
 ] 

Shen Yinjie commented on HADOOP-13294:
--

HADOOP-13311

> Test hadoop fs shell against s3a; fix problems
> --
>
> Key: HADOOP-13294
> URL: https://issues.apache.org/jira/browse/HADOOP-13294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There are no tests of {{hadoop -fs}} commands against s3a; add some. 
> Ideally, generic to all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13294) Test hadoop fs shell against s3a; fix problems

2016-07-22 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389101#comment-15389101
 ] 

Shen Yinjie edited comment on HADOOP-13294 at 7/22/16 7:44 AM:
---

It seems that the Hadoop shell does not support creating or deleting buckets; 
it can only operate on objects within a given bucket.


was (Author: shenyinjie):
It seems that the Hadoop shell can't work on a bucket itself (such as creating 
a bucket or deleting a bucket...); it can only operate on objects within a 
given bucket.

> Test hadoop fs shell against s3a; fix problems
> --
>
> Key: HADOOP-13294
> URL: https://issues.apache.org/jira/browse/HADOOP-13294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There are no tests of {{hadoop -fs}} commands against s3a; add some. 
> Ideally, generic to all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13294) Test hadoop fs shell against s3a; fix problems

2016-07-22 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13294:
-
Comment: was deleted

(was: I met the same problem, and found that some other commands don't work 
well either, such as -get, -cat... I have no idea whether my local env or 
configuration has something wrong?)

> Test hadoop fs shell against s3a; fix problems
> --
>
> Key: HADOOP-13294
> URL: https://issues.apache.org/jira/browse/HADOOP-13294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There are no tests of {{hadoop -fs}} commands against s3a; add some. 
> Ideally, generic to all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13294) Test hadoop fs shell against s3a; fix problems

2016-07-22 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389101#comment-15389101
 ] 

Shen Yinjie commented on HADOOP-13294:
--

It seems that the Hadoop shell can't work on a bucket itself (such as creating 
a bucket or deleting a bucket...); it can only operate on objects within a 
given bucket.

> Test hadoop fs shell against s3a; fix problems
> --
>
> Key: HADOOP-13294
> URL: https://issues.apache.org/jira/browse/HADOOP-13294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There are no tests of {{hadoop -fs}} commands against s3a; add some. 
> Ideally, generic to all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-07-21 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13405:
-
Summary: doc for “fs.s3a.acl.default” indicates incorrect values  (was: doc 
for “fs.s3a.acl.default” indicates wrong values)

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is 
> executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values

2016-07-21 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13405:
-
Description: 
The description for "fs.s3a.acl.default" indicates its values are 
"private,public-read";
when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is 
executed:
{{-ls: No enum constant 
com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
while in the amazon-sdk,
{code}
public enum CannedAccessControlList {
  Private("private"),
  PublicRead("public-read"),
  PublicReadWrite("public-read-write"),
  AuthenticatedRead("authenticated-read"),
  LogDeliveryWrite("log-delivery-write"),
  BucketOwnerRead("bucket-owner-read"),
  BucketOwnerFullControl("bucket-owner-full-control"); 
{code}

so the values should be the enum constant names, such as "Private", 
"PublicRead"...
A simple patch is attached.


  was:
The description for "fs.s3a.acl.default" indicates its values are 
"private,public-read";
when the value is set to public-read:
{{No enum constant 
com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
while in the amazon-sdk,
{code}
public enum CannedAccessControlList {
  Private("private"),
  PublicRead("public-read"),
  PublicReadWrite("public-read-write"),
  AuthenticatedRead("authenticated-read"),
  LogDeliveryWrite("log-delivery-write"),
  BucketOwnerRead("bucket-owner-read"),
  BucketOwnerFullControl("bucket-owner-full-control"); 
{code}

so the values should be the enum constant names, such as "Private", 
"PublicRead"...
A simple patch is attached.



> doc for “fs.s3a.acl.default” indicates wrong values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is 
> executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values

2016-07-21 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13405:
-
Status: Patch Available  (was: Open)

> doc for “fs.s3a.acl.default” indicates wrong values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read:
> {{No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values

2016-07-21 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-13405:
-
Attachment: HADOOP-13405.patch

> doc for “fs.s3a.acl.default” indicates wrong values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read:
> {{No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the amazon-sdk,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", 
> "PublicRead"...
> A simple patch is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates wrong values

2016-07-21 Thread Shen Yinjie (JIRA)
Shen Yinjie created HADOOP-13405:


 Summary: doc for “fs.s3a.acl.default” indicates wrong values
 Key: HADOOP-13405
 URL: https://issues.apache.org/jira/browse/HADOOP-13405
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0, 3.0.0-alpha2
Reporter: Shen Yinjie
Priority: Minor
 Fix For: 3.0.0-alpha2


The description for "fs.s3a.acl.default" indicates its values are 
"private,public-read";
when the value is set to public-read:
{{No enum constant 
com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
while in the amazon-sdk,
{code}
public enum CannedAccessControlList {
  Private("private"),
  PublicRead("public-read"),
  PublicReadWrite("public-read-write"),
  AuthenticatedRead("authenticated-read"),
  LogDeliveryWrite("log-delivery-write"),
  BucketOwnerRead("bucket-owner-read"),
  BucketOwnerFullControl("bucket-owner-full-control"); 
{code}

so the values should be the enum constant names, such as "Private", 
"PublicRead"...
A simple patch is attached.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12763) S3AFileSystem And Hadoop FsShell Operations

2016-07-14 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376730#comment-15376730
 ] 

Shen Yinjie commented on HADOOP-12763:
--

Hi, Stephen Montgomery. I have one question: do FsShell "-get" and "-rm" work 
in your env?

> S3AFileSystem And Hadoop FsShell Operations
> ---
>
> Key: HADOOP-12763
> URL: https://issues.apache.org/jira/browse/HADOOP-12763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Stephen Montgomery
>
> Hi,
> I'm looking at the Hadoop S3A Filesystem and FS Shell commands (specifically 
> -ls and -copyFromLocal/Put).
> 1. Create S3 bucket eg test-s3a-bucket.
> 2. List bucket contents using S3A and get an error: 
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3a://test-s3a-bucket/
> 16/02/03 16:31:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://test-s3a-bucket/': No such file or directory
> 3. List bucket contents using S3N and get no results (fair enough):
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3n://test-s3a-bucket/
> 16/02/03 16:32:41 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 4. Attempt to copy a file from local fs to S3A and get an error (with or 
> without the trailing slash):
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -copyFromLocal /tmp/zz 
> s3a://test-s3a-bucket/
> 16/02/03 16:35:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> copyFromLocal: `s3a://test-s3a-bucket/': No such file or directory
> 5. Attempt to copy a file from local fs to S3N and works:
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -copyFromLocal /tmp/zz 
> s3n://test-s3a-bucket/
> 16/02/03 16:36:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' writing to tempfile 
> '/tmp/hadoop-monty/s3/output-9212095517127973121.tmp'
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' closed. Now beginning upload
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' upload complete
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3a://test-s3a-bucket/
> 16/02/03 16:36:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-rw-rw-   1200 2016-02-03 16:36 s3a://test-s3a-bucket/zz
> It seems that basic filesystem operations can't be performed with an 
> empty/new bucket. I have been able to populate buckets with distcp but I 
> wonder if this is because I was copying directories instead of individual 
> files.
> I know that S3A uses the AmazonS3 client and S3N uses JetS3t, so different 
> underlying implementations/potentially different behaviours, but I mainly 
> used s3n for illustration purposes (and it looks like it's working as 
> expected).
> Can someone confirm this behaviour? Is it expected?
> Thanks,
> Stephen



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13294) Test hadoop fs shell against s3a; fix problems

2016-07-13 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376319#comment-15376319
 ] 

Shen Yinjie commented on HADOOP-13294:
--

I met the same problem, and found that some other commands don't work well 
either, such as -get, -cat... I have no idea whether my local env or 
configuration has something wrong?

> Test hadoop fs shell against s3a; fix problems
> --
>
> Key: HADOOP-13294
> URL: https://issues.apache.org/jira/browse/HADOOP-13294
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There are no tests of {{hadoop -fs}} commands against s3a; add some. 
> Ideally, generic to all object stores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org