[jira] [Created] (HDFS-13406) Encryption zone creation fails with key names that are generally valid

2018-04-05 Thread David Tucker (JIRA)
David Tucker created HDFS-13406:
---

 Summary: Encryption zone creation fails with key names that are 
generally valid
 Key: HDFS-13406
 URL: https://issues.apache.org/jira/browse/HDFS-13406
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: kms, namenode
Affects Versions: 3.0.0
 Environment: non-Kerberized
Reporter: David Tucker


Under load (up to 24 clients simultaneously appending to non-existent files in 
separate encryption zones), the KMS returns an HTTP 400 for a character that is 
present in neither the path nor the key name. For example:
{code:java}
IOException: ERROR_APPLICATION: HTTP status [400], message [Illegal character 
0xA]
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:174)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:540)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:536)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:501)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:877)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:393)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$13.call(LoadBalancingKMSClientProvider.java:390)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:123)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.getMetadata(LoadBalancingKMSClientProvider.java:390)
at 
org.apache.hadoop.crypto.key.KeyProviderExtension.getMetadata(KeyProviderExtension.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.ensureKeyIsInitialized(FSDirEncryptionZoneOp.java:124)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createEncryptionZone(FSNamesystem.java:7002)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createEncryptionZone(NameNodeRpcServer.java:2036)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createEncryptionZone(ClientNamenodeProtocolServerSideTranslatorPB.java:1448)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
{code}
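For reference, this is roughly the load pattern involved; a minimal Python sketch, assuming the stock {{hadoop key}} and {{hdfs crypto}} CLIs are on PATH (the name scheme is an ASCII placeholder for our randomized Unicode names, and the append step that follows zone creation is omitted):
{code:python}
# Sketch: 24 concurrent clients, each creating a key and an encryption zone,
# mimicking the load under which the KMS starts returning 400s.
import subprocess
import threading
import uuid

def make_zone(i):
    key = 'chai_key{}-{}'.format(i, uuid.uuid4())    # placeholder name scheme
    path = '/chai_zone{}-{}'.format(i, uuid.uuid4())
    subprocess.check_call(['hadoop', 'key', 'create', key])
    subprocess.check_call(['hdfs', 'dfs', '-mkdir', '-p', path])
    subprocess.check_call(['hdfs', 'crypto', '-createZone',
                           '-keyName', key, '-path', path])

threads = [threading.Thread(target=make_zone, args=(i,)) for i in range(24)]
for t in threads:
    t.start()
for t in threads:
    t.join()
{code}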
I have a number of path/key pairs that triggered this, but when I hard-code one 
of them and retry, it works fine:
{code}
('Path:', 
u'/chai_test37791178-e643-4ade-bbea-3a46a368eb01-\u0531\u308b\ufea1\u10d6\ufe8f\uc738\u2116\u05e1\u0562\u0be9\u050e/chai_testdc99561d-c64b-48c2-a1b2-203eecd424b0-\u0718\u05f1\u0133\u06f3\u2083\u316a\u0ec4\u07cb\u22e9\xa5\u07d0\u318d\xbc\u0e01\u02e8\u0277\u316a\u2030\u3113\u0e81\u9fa5/chai_test8f897e3a-fed6-4e5a-ae3e-524c73b760cc-\u05d0\u10fb\u3405\u0105\u2100\u0e0d\u0256\u9fa5\u0baa\u10f5\u0429\u2122\u05d0\u3106\u02b1\u3113\u2014\u2122\u05d0\ud7a3')
('Key:', 

[jira] [Created] (HDFS-13392) Incorrect length in Truncate CloseEvents

2018-04-03 Thread David Tucker (JIRA)
David Tucker created HDFS-13392:
---

 Summary: Incorrect length in Truncate CloseEvents
 Key: HDFS-13392
 URL: https://issues.apache.org/jira/browse/HDFS-13392
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: David Tucker


Under stress (multiple clients simultaneously truncating separate non-empty 
files in half), the CloseEvent triggered by a Truncate RPC may contain an 
incorrect length. We're able to reproduce this in roughly 20% of runs (our 
tests are somewhat randomized/fuzzy).

For example, given this Truncate request:
{noformat}
Request:
  truncate {
src: 
"/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-\357\254\200\357\272\217\357\255\217\343\203\276\324\262\342\204\200\342\213\251/chai_testbd968366-0016-4462-ac12-e48e0487bebd-\340\270\215\334\200\311\226\342\202\242\343\202\236\340\256\205\357\272\217/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-\312\254\340\272\201\343\202\242\306\220\340\244\205\342\202\242\343\204\270a\334\240\337\213\340\244\240\343\200\243\342\202\243\343\203\276\313\225\346\206\250"
newLength: 2003855
clientName: 
"\341\264\275\327\220\343\203\250\333\263\343\220\205\357\254\227\340\270\201\340\245\251\306\225\341\203\265\334\220\342\202\243\343\204\206!A\343\206\215\357\254\201\340\273\223\347\224\260"
  }
  Block Size: 1048576B
  Old length: 4007711B (3.82205104828 blocks)
  Truncation: 2003856B (1.91102600098 blocks)
  New length: 2003855B (1.9110250473 blocks)
Response:
  result: true
{noformat}
We see these INotify events:
{noformat}
TruncateEvent {
path: 
/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-ffﺏﭏヾԲ℀⋩/chai_testbd968366-0016-4462-ac12-e48e0487bebd-ญ܀ɖ₢ゞஅﺏ/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-ʬກアƐअ₢ㄸaܠߋठ〣₣ヾ˕憨
length: 2003855
timestamp: 1522716573143
}
{noformat}
{noformat}
CloseEvent {
path: 
/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-ffﺏﭏヾԲ℀⋩/chai_testbd968366-0016-4462-ac12-e48e0487bebd-ญ܀ɖ₢ゞஅﺏ/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-ʬກアƐअ₢ㄸaܠߋठ〣₣ヾ˕憨
length: -2
timestamp: 1522716575723
}
{noformat}
{{-2}} is not the only number that shows up as the length; 
{{9223372036854775807}} is common too. These are detected by Python 2 tests, 
and the latter is {{sys.maxint}}.
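A quick sanity check of the numbers above (Python 2, matching our test environment):
{code:python}
import sys

old_length = 4007711      # from the Truncate request above
new_length = 2003855      # newLength sent in the RPC
assert new_length == old_length // 2   # truncate in half, as described

# The CloseEvent should carry new_length; instead we see sentinels:
print(9223372036854775807 == sys.maxint)   # True on 64-bit Python 2
print(9223372036854775807 == 2 ** 63 - 1)  # i.e. Java's Long.MAX_VALUE
{code}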






[jira] [Created] (HDFS-12846) WebHDFS/Jetty misinterprets empty path

2017-11-21 Thread David Tucker (JIRA)
David Tucker created HDFS-12846:
---

 Summary: WebHDFS/Jetty misinterprets empty path
 Key: HDFS-12846
 URL: https://issues.apache.org/jira/browse/HDFS-12846
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.8.0
 Environment: HDP 2.6 + Ambari 2.6.0
Reporter: David Tucker


WebHDFS sees the wrong path when a request does not provide one.
For example, GETFILESTATUS on an empty path results in a FileNotFoundException:
{code}
$ curl -sS -L -w '%{http_code}' -X GET 
'http://172.18.0.3:50070/webhdfs/v1?op=GETFILESTATUS&user.name=hdfs'
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /webhdfs/v1"}}404
{code}
Note the message: the RPC is seeing an incorrect path (/webhdfs/v1).

Because of AMBARI-22492, this leads to unexpected behaviors when deploying with 
Ambari:
- GETFILESTATUS is issued as /webhdfs/v1?op=GETFILESTATUS, which results in a 
FileNotFoundException.
- Since Ambari was told the path doesn't exist, it tries to create it with 
MKDIRS (which succeeds!):

{code}
$ curl -sS -L -w '%{http_code}' -X PUT 
'http://172.18.0.3:50070/webhdfs/v1?op=MKDIRS&user.name=hdfs'
{"boolean":true}200
{code}
{code}
# hdfs dfs -ls -R /webhdfs
drwx--   - hive hadoop  0 2017-11-20 23:24 /webhdfs/v1
ls: Permission denied: user=root, access=READ_EXECUTE, 
inode="/webhdfs/v1":hive:hadoop:drwx--
{code}
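The same thing is easy to see programmatically; a sketch using python-requests against the cluster above (the explicit-root request is included for contrast):
{code:python}
import requests

base = 'http://172.18.0.3:50070/webhdfs/v1'   # NameNode from this report
params = {'op': 'GETFILESTATUS', 'user.name': 'hdfs'}

# Empty path: the servlet path itself leaks into the RPC.
r = requests.get(base, params=params)
print(r.status_code)                            # 404
print(r.json()['RemoteException']['message'])   # File does not exist: /webhdfs/v1

# Explicit root path behaves as expected.
r = requests.get(base + '/', params=params)
print(r.status_code)                            # 200
{code}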






[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-31 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950374#comment-15950374
 ] 

David Tucker edited comment on HDFS-11557 at 3/31/17 6:00 AM:
--

[~vagarychen], I believe macOS is POSIX-compliant, and this is what I see on my 
MacBook Pro:

{code:none}
david@macbook:/tmp $ ls -ld .
drwxrwxrwt  12 root  wheel  408 Mar 30 22:46 .
david@macbook:/tmp $ mkdir foo
david@macbook:/tmp $ chmod 222 foo
david@macbook:/tmp $ ls -l foo
ls: foo: Permission denied
david@macbook:/tmp $ rm -r foo
rm: foo: Permission denied
david@macbook:/tmp $ rmdir foo
david@macbook:/tmp $ 
{code}

FWIW, I have observed similar behavior on OneFS.


was (Author: dmtucker):
[~vagarychen], I believe macOS is POSIX-compliant, and this is what I see on my 
MacBook Pro:

{code: none}
david@macbook:/tmp $ ls -ld .
drwxrwxrwt  12 root  wheel  408 Mar 30 22:46 .
david@macbook:/tmp $ mkdir foo
david@macbook:/tmp $ chmod 222 foo
david@macbook:/tmp $ ls -l foo
ls: foo: Permission denied
david@macbook:/tmp $ rm -r foo
rm: foo: Permission denied
david@macbook:/tmp $ rmdir foo
david@macbook:/tmp $ 
{code}

FWIW, I have observed similar behavior on OneFS.

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (i.e. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.






[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-31 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950374#comment-15950374
 ] 

David Tucker edited comment on HDFS-11557 at 3/31/17 5:59 AM:
--

[~vagarychen], I believe macOS is POSIX-compliant, and this is what I see on my 
MacBook Pro:

{code: none}
david@macbook:/tmp $ ls -ld .
drwxrwxrwt  12 root  wheel  408 Mar 30 22:46 .
david@macbook:/tmp $ mkdir foo
david@macbook:/tmp $ chmod 222 foo
david@macbook:/tmp $ ls -l foo
ls: foo: Permission denied
david@macbook:/tmp $ rm -r foo
rm: foo: Permission denied
david@macbook:/tmp $ rmdir foo
david@macbook:/tmp $ 
{code}

FWIW, I have observed similar behavior on OneFS.


was (Author: dmtucker):
[~vagarychen], I believe macOS is POSIX-compliant, and this is what I see on my 
MacBook Pro:

david@macbook:/tmp $ ls -ld .
drwxrwxrwt  12 root  wheel  408 Mar 30 22:46 .
david@macbook:/tmp $ mkdir foo
david@macbook:/tmp $ chmod 222 foo
david@macbook:/tmp $ ls -l foo
ls: foo: Permission denied
david@macbook:/tmp $ rm -r foo
rm: foo: Permission denied
david@macbook:/tmp $ rmdir foo
david@macbook:/tmp $ 

FWIW, I have observed similar behavior on OneFS.

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (i.e. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.






[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-31 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950374#comment-15950374
 ] 

David Tucker commented on HDFS-11557:
-

[~vagarychen], I believe macOS is POSIX-compliant, and this is what I see on my 
MacBook Pro:

david@macbook:/tmp $ ls -ld .
drwxrwxrwt  12 root  wheel  408 Mar 30 22:46 .
david@macbook:/tmp $ mkdir foo
david@macbook:/tmp $ chmod 222 foo
david@macbook:/tmp $ ls -l foo
ls: foo: Permission denied
david@macbook:/tmp $ rm -r foo
rm: foo: Permission denied
david@macbook:/tmp $ rmdir foo
david@macbook:/tmp $ 

FWIW, I have observed similar behavior on OneFS.

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (i.e. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.






[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-29 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15947894#comment-15947894
 ] 

David Tucker commented on HDFS-11557:
-

[~vagarychen], so this is a feature, not a bug?

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (i.e. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.






[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946049#comment-15946049
 ] 

David Tucker edited comment on HDFS-11557 at 3/28/17 10:02 PM:
---

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code:none}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code:none}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop  0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop  0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop  0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}


was (Author: dmtucker):
Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:


[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946049#comment-15946049
 ] 

David Tucker edited comment on HDFS-11557 at 3/28/17 10:02 PM:
---

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop  0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop  0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop  0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}


was (Author: dmtucker):
Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code:python}

[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946049#comment-15946049
 ] 

David Tucker commented on HDFS-11557:
-

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code:python}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop  0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop  0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop  0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
>  

[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946049#comment-15946049
 ] 

David Tucker edited comment on HDFS-11557 at 3/28/17 10:02 PM:
---

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code:none}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop  0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop  0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop  0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}


was (Author: dmtucker):
Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code}
>>> 

[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945705#comment-15945705
 ] 

David Tucker commented on HDFS-11557:
-

[~vagarychen], please, by all means! I have nothing substantial to add at the 
moment.

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (i.e. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.






[jira] [Created] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-21 Thread David Tucker (JIRA)
David Tucker created HDFS-11557:
---

 Summary: Empty directories may be recursively deleted without 
being listable
 Key: HDFS-11557
 URL: https://issues.apache.org/jira/browse/HDFS-11557
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.7.3
Reporter: David Tucker


To reproduce, create a directory without read and/or execute permissions (i.e. 
0666, 0333, or 0222), then call delete on it with can_recurse=True. Note that 
the delete succeeds even though the client is unable to check for emptiness 
and, therefore, cannot otherwise know that any/all children are deletable.
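For comparison with the POSIX semantics discussed in the comments, the local-filesystem equivalent can be sketched in Python (run as a non-root user; this mirrors the macOS shell session above, not HDFS itself):
{code:python}
import os
import shutil

os.mkdir('empty')
os.chmod('empty', 0o222)      # write-only, as in the repro

try:
    os.listdir('empty')       # listing requires read permission
except OSError as e:
    print('list failed:', e)

try:
    shutil.rmtree('empty')    # recursive delete must list first, so it fails
except OSError as e:
    print('recursive delete failed:', e)

os.rmdir('empty')             # plain rmdir needs no read access; it succeeds
{code}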






[jira] [Commented] (HDFS-5779) Caused by: javax.security.auth.login.LoginException: java.lang.NullPointerException: invalid null input: name

2017-02-07 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15857286#comment-15857286
 ] 

David Tucker commented on HDFS-5779:


For future readers:
I encountered this exact problem while running the NameNode in Docker: `hdfs` 
had one UID when running `hdfs namenode -format` and a different UID when 
starting `hdfs namenode`. I suppose a mismatch between `supergroup` and 
`hadoop` could cause this too.
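A cheap way to catch that mismatch before starting the NameNode; a Python sketch where the name directory path is illustrative (substitute your dfs.namenode.name.dir):
{code:python}
import os
import pwd

name_dir = '/hadoop/dfs/name'              # illustrative dfs.namenode.name.dir
format_uid = os.stat(name_dir).st_uid      # UID that ran `hdfs namenode -format`
runtime_uid = pwd.getpwnam('hdfs').pw_uid  # UID `hdfs` resolves to right now

if format_uid != runtime_uid:
    print('UID mismatch: formatted as %d, hdfs is now %d'
          % (format_uid, runtime_uid))
{code}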

> Caused by: javax.security.auth.login.LoginException: 
> java.lang.NullPointerException: invalid null input: name
> -
>
> Key: HDFS-5779
> URL: https://issues.apache.org/jira/browse/HDFS-5779
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Linux  2.6.32-220.13.1.el6.x86_64 #1 SMP Thu Mar 29 
> 11:46:40 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
> This machine is KEONIZED, i.e., KEON BoKS security is implemented for this box
>Reporter: Vijay
>
> xx@testhost:/home/xx/Lab/hdfs/namenodep$ hadoop namenode -format
> Warning: $HADOOP_HOME is deprecated.
> 14/01/15 04:51:38 INFO namenode.NameNode: STARTUP_MSG:
> /
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = testhost.testhost1.net/xx.xxx.xxx.xxx
> STARTUP_MSG:   args = [-format]
> STARTUP_MSG:   version = 1.0.3
> STARTUP_MSG:   build = 
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 
> 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> /
> Re-format filesystem in /home/xx/Lab/hdfs/namenodep ? (Y or N) Y
> 14/01/15 04:51:40 INFO util.GSet: VM type   = 32-bit
> 14/01/15 04:51:40 INFO util.GSet: 2% max memory = 19.33375 MB
> 14/01/15 04:51:40 INFO util.GSet: capacity  = 2^22 = 4194304 entries
> 14/01/15 04:51:40 INFO util.GSet: recommended=4194304, actual=4194304
> 14/01/15 04:51:40 ERROR namenode.NameNode: java.io.IOException: failure to 
> login
> at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:490)
> at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:475)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:464)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1162)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1271)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
> Caused by: javax.security.auth.login.LoginException: 
> java.lang.NullPointerException: invalid null input: name
> at com.sun.security.auth.UnixPrincipal.<init>(Unknown Source)
> at com.sun.security.auth.module.UnixLoginModule.login(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at javax.security.auth.login.LoginContext.invoke(Unknown Source)
> at javax.security.auth.login.LoginContext.access$000(Unknown Source)
> at javax.security.auth.login.LoginContext$5.run(Unknown Source)
> at javax.security.auth.login.LoginContext$5.run(Unknown Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.login.LoginContext.invokeCreatorPriv(Unknown 
> Source)
> at javax.security.auth.login.LoginContext.login(Unknown Source)
> at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:471)
> at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:475)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:464)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1162)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1271)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
> at javax.security.auth.login.LoginContext.invoke(Unknown Source)
> at javax.security.auth.login.LoginContext.access$000(Unknown Source)
> at javax.security.auth.login.LoginContext$5.run(Unknown Source)
> at 

[jira] [Commented] (HDFS-8422) Recursive mkdir removes sticky bit when adding implicit u+wx to intermediate directories

2016-12-01 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713281#comment-15713281
 ] 

David Tucker commented on HDFS-8422:


Jake should probably be unassigned from this issue.

> Recursive mkdir removes sticky bit when adding implicit u+wx to intermediate 
> directories
> 
>
> Key: HDFS-8422
> URL: https://issues.apache.org/jira/browse/HDFS-8422
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Jake Low
>Assignee: Jake Low
>Priority: Minor
> Attachments: HDFS-8422.001.patch, HDFS-8422.002.patch
>
>
> When performing a recursive {{mkdirs}} operation, the NameNode takes the 
> provided {{FsPermission}} and applies it to the final directory that is 
> created. The NameNode also uses these permissions when creating intermediate 
> directories, except it promotes with {{u+wx}} (i.e. {{mode |= 0300}}) in 
> order to ensure that there are sufficient permissions on the new directory to 
> create further subdirectories within it.
> Currently the code that does this permission promotion uses the 
> three-argument {{FsPermission}} constructor, which sets the sticky bit to 
> {{false}}. I think this is probably not the intended behaviour, so I'm 
> opening a bug. I'll attach a patch shortly that changes the behaviour so that 
> the sticky bit is always retained.
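The bit arithmetic at issue is easy to demonstrate; a Python sketch (0o1000 is the sticky bit; masking to the rwx classes stands in for what the three-argument constructor effectively does):
{code:python}
mode = 0o1455               # sticky bit set, owner read-only
promoted = mode | 0o300     # the u+wx promotion keeps every other bit
rebuilt = promoted & 0o777  # rebuilding from only u/g/o drops the sticky bit

print(oct(promoted))        # 0o1755
print(oct(rebuilt))         # 0o755 -- sticky bit lost
{code}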






[jira] [Updated] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-23 Thread David Tucker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Tucker updated HDFS-11169:

Description: 
Start with a fresh deployment of HDFS.
1. Create a file.
2. AddBlock the file with an offset larger than the file size.
3. Call GetBlockLocations.

Expectation: 0 blocks are returned because the only added block is incomplete.

Observation: 1 block is returned.

This only seems to occur when 1 block is in play (i.e. if you write a block and 
call AddBlock again, GetBlockLocations seems to behave as expected).

This seems to be related to HDFS-513.
I suspect the following line needs revision: 
https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
I believe it should be >= instead of >:
if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file

  was:
Start with a fresh deployment of HDFS.
1. Create a file.
2. AddBlock the file.
3. Call GetBlockLocations.

Expectation: 0 blocks are returned because the only added block is incomplete.

Observation: 1 block is returned.

This only seems to occur when 1 block is in play (i.e. if you write a block and 
call AddBlock again, GetBlockLocations seems to behave as expected).

This seems to be related to HDFS-513.
I suspect the following line needs revision: 
https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
I believe it should be >= instead of >:
if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file


> GetBlockLocations returns a block when offset > filesize and file only has 1 
> block
> --
>
> Key: HDFS-11169
> URL: https://issues.apache.org/jira/browse/HDFS-11169
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
> Environment: HDP 2.5, Ambari 2.4
>Reporter: David Tucker
>Priority: Minor
>
> Start with a fresh deployment of HDFS.
> 1. Create a file.
> 2. AddBlock the file with an offset larger than the file size.
> 3. Call GetBlockLocations.
> Expectation: 0 blocks are returned because the only added block is incomplete.
> Observation: 1 block is returned.
> This only seems to occur when 1 block is in play (i.e. if you write a block 
> and call AddBlock again, GetBlockLocations seems to behave as expected).
> This seems to be related to HDFS-513.
> I suspect the following line needs revision: 
> https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
> I believe it should be >= instead of >:
> if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file






[jira] [Created] (HDFS-11169) GetBlockLocations returns a block when offset > filesize and file only has 1 block

2016-11-22 Thread David Tucker (JIRA)
David Tucker created HDFS-11169:
---

 Summary: GetBlockLocations returns a block when offset > filesize 
and file only has 1 block
 Key: HDFS-11169
 URL: https://issues.apache.org/jira/browse/HDFS-11169
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.1
 Environment: HDP 2.5, Ambari 2.4
Reporter: David Tucker
Priority: Minor


Start with a fresh deployment of HDFS.
1. Create a file.
2. AddBlock the file.
3. Call GetBlockLocations.

Expectation: 0 blocks are returned because the only added block is incomplete.

Observation: 1 block is returned.

This only seems to occur when 1 block is in play (i.e. if you write a block and 
call AddBlock again, GetBlockLocations seems to behave as expected).

This seems to be related to HDFS-513.
I suspect the following line needs revision: 
https://github.com/apache/hadoop/blob/4484b48498b2ab2a40a404c487c7a4e875df10dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1062
I believe it should be >= instead of >:
if (nrBlocks >= 0 && curBlk == nrBlocks)   // offset >= end of file
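To illustrate the suspected off-by-one, here is the guard paraphrased in Python (assuming, per my reading, that the early return hands back an empty block list):
{code:python}
def early_return_taken(nr_blocks, cur_blk, strict_gt):
    # Paraphrases `if (nrBlocks > 0 && curBlk == nrBlocks)`, which is meant
    # to return an empty list when the offset is at or past end of file.
    lower = nr_blocks > 0 if strict_gt else nr_blocks >= 0
    return lower and cur_blk == nr_blocks

# A file whose only block is incomplete: nrBlocks == curBlk == 0.
print(early_return_taken(0, 0, strict_gt=True))    # False: the block leaks out
print(early_return_taken(0, 0, strict_gt=False))   # True: empty list, as expected
{code}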


