[jira] [Updated] (HDFS-15191) EOF when reading legacy buffer in BlockTokenIdentifier
[ https://issues.apache.org/jira/browse/HDFS-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pierre Villard updated HDFS-15191:
----------------------------------
    Fix Version/s:     (was: 3.3.1)
                   3.3.0

> EOF when reading legacy buffer in BlockTokenIdentifier
> ------------------------------------------------------
>
>                 Key: HDFS-15191
>                 URL: https://issues.apache.org/jira/browse/HDFS-15191
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.2.1
>            Reporter: Steven Rand
>            Assignee: Steven Rand
>            Priority: Major
>             Fix For: 3.3.0, 3.2.2
>
>         Attachments: HDFS-15191-001.patch, HDFS-15191-002.patch, HDFS-15191.003.patch, HDFS-15191.004.patch
>
>
> We have an HDFS client application which recently upgraded from 3.2.0 to 3.2.1. After this upgrade (but not before), we sometimes see these errors when this application is used with clusters still running Hadoop 2.x (more specifically CDH 5.12.1):
> {code}
> WARN [2020-02-24T00:54:32.856Z] org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: I/O error constructing remote block reader. (_sampled: true)
> java.io.EOFException:
>     at java.io.DataInputStream.readByte(DataInputStream.java:272)
>     at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
>     at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
>     at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
>     at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
>     at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:227)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:170)
>     at org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:730)
>     at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2942)
>     at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:822)
>     at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:747)
>     at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:380)
>     at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>     at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:575)
>     at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:757)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>     at java.io.DataInputStream.read(DataInputStream.java:100)
>     at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2314)
>     at org.apache.commons.io.IOUtils.copy(IOUtils.java:2270)
>     at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2291)
>     at org.apache.commons.io.IOUtils.copy(IOUtils.java:2246)
>     at org.apache.commons.io.IOUtils.toByteArray(IOUtils.java:765)
> {code}
> We get this warning for all DataNodes with a copy of the block, so the read fails.
> I haven't been able to figure out what changed between 3.2.0 and 3.2.1 to cause this, but HDFS-13617 and HDFS-14611 seem related, so tagging [~vagarychen] in case you have any ideas.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
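A side note on the failure mode: the stack trace shows readFields dispatching into readFieldsLegacy, with the EOFException escaping from the legacy Writable decoding (readVInt/readVLong). The sketch below is a hypothetical, simplified illustration of the general "try one wire format, then rewind and fall back" pattern involved here; the class name, method names, and toy formats are invented for illustration and are not the actual BlockTokenIdentifier code.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.UncheckedIOException;

public class TokenDecodeSketch {

  // Toy "legacy" format: one length byte followed by that many payload bytes.
  static byte[] decodeLegacy(DataInputStream in) throws IOException {
    int len = in.readByte();   // throws EOFException on empty input
    byte[] payload = new byte[len];
    in.readFully(payload);     // throws EOFException if the buffer is truncated
    return payload;
  }

  // Toy "new" format: the raw bytes themselves, no length prefix.
  static byte[] decodeNew(DataInputStream in) throws IOException {
    return in.readAllBytes();
  }

  // Try the legacy decoding first; if it underflows, rewind and retry with
  // the new decoding. This mirrors the fallback idea, not the HDFS code.
  static byte[] decode(byte[] raw) {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(raw));
    in.mark(raw.length);       // remember the start so we can rewind
    try {
      return decodeLegacy(in);
    } catch (EOFException eof) {
      try {
        in.reset();            // rewind to the first byte before falling back
        return decodeNew(in);
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    // {2, 7, 8}: a valid "legacy" buffer (length 2, payload {7, 8})
    System.out.println(java.util.Arrays.toString(decode(new byte[]{2, 7, 8})));
    // {5, 1}: claims 5 payload bytes but has 1, so the legacy attempt
    // hits EOF and the decoder falls back to the "new" format
    System.out.println(java.util.Arrays.toString(decode(new byte[]{5, 1})));
  }
}
```

The point of the mark()/reset() pair is that a failed first attempt must not consume bytes the fallback parser needs, and the EOFException must be caught rather than allowed to escape, which is exactly the symptom in the report above.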
[jira] [Commented] (HDFS-15191) EOF when reading legacy buffer in BlockTokenIdentifier
[ https://issues.apache.org/jira/browse/HDFS-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17214526#comment-17214526 ]

Pierre Villard commented on HDFS-15191:
---------------------------------------
Updated the fix versions as this is in 3.3.0: https://github.com/apache/hadoop/commit/f531a4a487c9133bce20d08e09da4d4a35bff13d

> EOF when reading legacy buffer in BlockTokenIdentifier
> ------------------------------------------------------
[jira] [Commented] (HDFS-11391) Numeric usernames do no work with WebHDFS FS (write access)
[ https://issues.apache.org/jira/browse/HDFS-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15863580#comment-15863580 ]

Pierre Villard commented on HDFS-11391:
---------------------------------------
[~yzhangal], is anything else required on my side to get the PR reviewed/merged?

> Numeric usernames do no work with WebHDFS FS (write access)
> -----------------------------------------------------------
>
>                 Key: HDFS-11391
>                 URL: https://issues.apache.org/jira/browse/HDFS-11391
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 2.7.3
>            Reporter: Pierre Villard
>            Assignee: Pierre Villard
>
> In HDFS-4983, a property was introduced to configure the pattern that validates the names of users interacting with WebHDFS, because the default pattern excluded names starting with numbers.
> The problem is that this fix works only for read access: for write access against a DataNode, the default pattern is still applied regardless of the configuration.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-11391) Numeric usernames do no work with WebHDFS FS (write access)
[ https://issues.apache.org/jira/browse/HDFS-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pierre Villard reassigned HDFS-11391:
-------------------------------------
    Assignee: Pierre Villard

> Numeric usernames do no work with WebHDFS FS (write access)
> -----------------------------------------------------------
[jira] [Commented] (HDFS-11391) Numeric usernames do no work with WebHDFS FS (write access)
[ https://issues.apache.org/jira/browse/HDFS-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15853634#comment-15853634 ]

Pierre Villard commented on HDFS-11391:
---------------------------------------
Hi [~yzhangal],

No problem, I updated the PR. Regarding the tests:

- *Before patch (with the property modified to allow numerical user names)*
{noformat}
$ curl -i -X PUT "http://mynode:50070/webhdfs/v1/tmp/test.txt?op=CREATE&user.name=123"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Sat, 04 Feb 2017 10:19:38 GMT
Date: Sat, 04 Feb 2017 10:19:38 GMT
Pragma: no-cache
Expires: Sat, 04 Feb 2017 10:19:38 GMT
Date: Sat, 04 Feb 2017 10:19:38 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: hadoop.auth="u=123&p=123&t=simple&e=1486239578624&s=UrzCjP0SPpPKDJnSYB5BsKuQVKc="; Path=/; HttpOnly
Location: http://mynode:50075/webhdfs/v1/tmp/test.txt?op=CREATE&user.name=123&namenoderpcaddress=mynode:8020&createflag=&createparent=true&overwrite=false
Content-Type: application/octet-stream
Content-Length: 0

$ curl -i -X PUT -T test.txt "http://mynode:50075/webhdfs/v1/tmp/test.txt?op=CREATE&user.name=123&namenoderpcaddress=mynode:8020&createflag=&createparent=true&overwrite=false"
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8
Content-Length: 209
Connection: close

{"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Invalid value: \"123\" does not belong to the domain ^[A-Za-z_][A-Za-z0-9._-]*[$]?$"}}
{noformat}

- *After patch (with the property modified to allow numerical user names)*
{noformat}
$ curl -i -X PUT "http://mynode:50070/webhdfs/v1/tmp/test.txt?op=CREATE&user.name=123"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Sat, 04 Feb 2017 20:25:15 GMT
Date: Sat, 04 Feb 2017 20:25:15 GMT
Pragma: no-cache
Expires: Sat, 04 Feb 2017 20:25:15 GMT
Date: Sat, 04 Feb 2017 20:25:15 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: hadoop.auth="u=123&p=123&t=simple&e=1486275915563&s=te9ylMEmTuFswBr2sK9kH6qj8eE="; Path=/; HttpOnly
Location: http://mynode:50075/webhdfs/v1/tmp/test.txt?op=CREATE&user.name=123&namenoderpcaddress=mynode:8020&createflag=&createparent=true&overwrite=false
Content-Type: application/octet-stream
Content-Length: 0

$ curl -i -X PUT -T test.txt "http://mynode:50075/webhdfs/v1/tmp/test.txt?op=CREATE&user.name=123&namenoderpcaddress=mynode:8020&createflag=&createparent=true&overwrite=false"
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Location: hdfs://mynode:8020/tmp/test.txt
Content-Length: 0
Connection: close
{noformat}

Let me know if you need something else.

> Numeric usernames do no work with WebHDFS FS (write access)
> -----------------------------------------------------------
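For anyone reproducing the tests above: the property introduced by HDFS-4983 is set in hdfs-site.xml. The snippet below is an illustrative sketch only; the property name is taken from HDFS-4983, and the relaxed pattern value is an assumption (it simply also allows names that begin with digits), so adapt it to the names you actually need to accept and verify the property name against your Hadoop version.

```xml
<!-- hdfs-site.xml: relax the WebHDFS user-name check so that purely
     numeric user names such as "123" are accepted (example value). -->
<property>
  <name>dfs.webhdfs.user.provider.user.pattern</name>
  <value>^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$</value>
</property>
```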
[jira] [Updated] (HDFS-11391) Numeric usernames do no work with WebHDFS FS (write access)
[ https://issues.apache.org/jira/browse/HDFS-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pierre Villard updated HDFS-11391:
----------------------------------
    Status: Patch Available  (was: Open)

> Numeric usernames do no work with WebHDFS FS (write access)
> -----------------------------------------------------------
[jira] [Created] (HDFS-11391) Numeric usernames do no work with WebHDFS FS (write access)
Pierre Villard created HDFS-11391:
-------------------------------------

             Summary: Numeric usernames do no work with WebHDFS FS (write access)
                 Key: HDFS-11391
                 URL: https://issues.apache.org/jira/browse/HDFS-11391
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: webhdfs
    Affects Versions: 2.7.3
            Reporter: Pierre Villard

In HDFS-4983, a property was introduced to configure the pattern that validates the names of users interacting with WebHDFS, because the default pattern excluded names starting with numbers.

The problem is that this fix works only for read access: for write access against a DataNode, the default pattern is still applied regardless of the configuration.
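The default pattern quoted verbatim in the 400 response earlier in this thread can be exercised directly. A small self-contained check (the class and method names are illustrative, not Hadoop's) showing why a purely numeric user name is rejected by that default:

```java
import java.util.regex.Pattern;

// The default WebHDFS user-name pattern, copied from the "Invalid value"
// error message returned by the DataNode in the before-patch test above.
// Names must start with a letter or underscore, so "123" never matches.
public class UserPatternDemo {
  static final Pattern DEFAULT =
      Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");

  static boolean isValid(String user) {
    return DEFAULT.matcher(user).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValid("hdfs")); // starts with a letter: accepted
    System.out.println(isValid("123"));  // starts with a digit: rejected
  }
}
```

This is why only making the pattern configurable on every component that performs the check (NameNode and DataNode alike) fixes the write path.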