[jira] [Commented] (HADOOP-11605) FilterFileSystem#create with ChecksumOpt should propagate it to wrapped FS
[ https://issues.apache.org/jira/browse/HADOOP-11605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14343821#comment-14343821 ] Lohit Vijayarenu commented on HADOOP-11605: --- +1 Looks good to me. FilterFileSystem#create with ChecksumOpt should propagate it to wrapped FS -- Key: HADOOP-11605 URL: https://issues.apache.org/jira/browse/HADOOP-11605 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.6.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Minor Attachments: HADOOP-11605.001.patch The current create code
{code}
@Override
public FSDataOutputStream create(Path f, FsPermission permission,
    EnumSet<CreateFlag> flags, int bufferSize, short replication,
    long blockSize, Progressable progress, ChecksumOpt checksumOpt)
    throws IOException {
  return fs.create(f, permission, flags, bufferSize, replication, blockSize,
      progress);
}
{code}
does not propagate checksumOpt. It should; whether to honor it is then up to the wrapped FS implementation (the default is to ignore it). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
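The bug pattern above is general: a filter/wrapper class must forward every argument of the method it overrides, or options silently vanish. Below is a minimal stand-in sketch of that pattern; FilterFS, BaseFS, and ChecksumOpt here are hypothetical simplifications, not the real Hadoop classes, and this is not the actual HADOOP-11605 patch.

```java
// Toy model of the FilterFileSystem delegation bug: the "buggy" wrapper
// drops checksumOpt on the floor, the "fixed" one forwards it.
public class FilterCreateDemo {
    static class ChecksumOpt {
        final int bytesPerChecksum;
        ChecksumOpt(int b) { bytesPerChecksum = b; }
    }

    static class BaseFS {
        ChecksumOpt lastOpt;  // records what the wrapped FS actually saw
        void create(String path, ChecksumOpt opt) { lastOpt = opt; }
    }

    static class FilterFS {
        final BaseFS fs = new BaseFS();
        // Pre-patch shape: forwards everything except the option.
        void createBuggy(String path, ChecksumOpt opt) { fs.create(path, null); }
        // Patched shape: propagates the option to the wrapped FS.
        void createFixed(String path, ChecksumOpt opt) { fs.create(path, opt); }
    }

    public static void main(String[] args) {
        FilterFS f = new FilterFS();
        ChecksumOpt opt = new ChecksumOpt(512);
        f.createBuggy("/tmp/a", opt);
        System.out.println("buggy: wrapped FS saw opt = " + (f.fs.lastOpt != null));
        f.createFixed("/tmp/a", opt);
        System.out.println("fixed: wrapped FS saw opt = " + (f.fs.lastOpt != null));
    }
}
```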
[jira] [Commented] (HADOOP-11034) ViewFileSystem is missing getStatus(Path)
[ https://issues.apache.org/jira/browse/HADOOP-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14134468#comment-14134468 ] Lohit Vijayarenu commented on HADOOP-11034: --- +1 Patch looks good to me. I am not sure why this was not part of ViewFileSystem to begin with. Maybe someone has history about it; otherwise, the changes look good to me. ViewFileSystem is missing getStatus(Path) - Key: HADOOP-11034 URL: https://issues.apache.org/jira/browse/HADOOP-11034 Project: Hadoop Common Issue Type: Bug Components: viewfs Reporter: Gary Steelman Attachments: HADOOP-11034-trunk-1.patch, HADOOP-11034.2.patch This patch implements ViewFileSystem#getStatus(Path), which is currently unimplemented. getStatus(Path) should return the FsStatus of the FileSystem backing the path. Currently it returns the same as getStatus(): a default of Long.MAX_VALUE for capacity, 0 used, and Long.MAX_VALUE for remaining space. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
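The intended behavior can be sketched with a toy mount table; the classes below are hypothetical stand-ins for the real ViewFileSystem internals, showing why getStatus(Path) should consult the backing filesystem instead of returning one global default.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of getStatus() vs getStatus(Path) in a view filesystem:
// the pathless call can only return a one-size-fits-all default, while the
// path-aware call reports the status of the mount backing the path.
public class ViewStatusDemo {
    static class FsStatus {
        final long capacity, used, remaining;
        FsStatus(long c, long u, long r) { capacity = c; used = u; remaining = r; }
    }

    // mount prefix -> status of the filesystem backing that mount
    static final Map<String, FsStatus> MOUNTS = new LinkedHashMap<>();

    // Old behavior: ignores the path entirely.
    static FsStatus getStatus() {
        return new FsStatus(Long.MAX_VALUE, 0, Long.MAX_VALUE);
    }

    // Fixed behavior: pick the mount whose prefix matches the path.
    static FsStatus getStatus(String path) {
        for (Map.Entry<String, FsStatus> e : MOUNTS.entrySet()) {
            if (path.startsWith(e.getKey())) return e.getValue();
        }
        return getStatus();  // unmatched paths keep the pathless default
    }

    public static void main(String[] args) {
        MOUNTS.put("/user", new FsStatus(1000, 400, 600));
        System.out.println(getStatus("/user/lohit").capacity);  // 1000, not MAX_VALUE
    }
}
```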
[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14021117#comment-14021117 ] Lohit Vijayarenu commented on HADOOP-10641: --- Minor comments. - It looks like checkQuorum is effectively a no-op for submitProposal in the ZK-based implementation, since zooKeeper.create would fail if there is no quorum anyway? - In the ZK-based Coordination Engine implementation, how are ZNodes cleaned up? Looking at the patch, each proposal creates a PERSISTENT_SEQUENTIAL node, but there is no mention of cleanup. Introduce Coordination Engine - Key: HADOOP-10641 URL: https://issues.apache.org/jira/browse/HADOOP-10641 Project: Hadoop Common Issue Type: New Feature Affects Versions: 3.0.0 Reporter: Konstantin Shvachko Assignee: Plamen Jeliazkov Attachments: HADOOP-10641.patch, HADOOP-10641.patch, HADOOP-10641.patch A Coordination Engine (CE) is a system which allows a distributed system to agree on a sequence of events. In order to be reliable, the CE should itself be distributed. A Coordination Engine can be based on different algorithms (Paxos, Raft, 2PC, ZAB) and have different implementations, depending on use cases, reliability, availability, and performance requirements. The CE should have a common API, so that it can serve as a pluggable component in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and HBase (HBASE-10909). The first implementation is proposed to be based on ZooKeeper. -- This message was sent by Atlassian JIRA (v6.2#6252)
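The cleanup question above can be made concrete with a toy in-memory model (no real ZooKeeper involved, and not the patch's code): PERSISTENT_SEQUENTIAL creates behave like appending to a monotonically numbered log, so some garbage collection must eventually delete proposals that every learner has already applied.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy model of a ZooKeeper-backed proposal log. submitProposal is the
// analogue of zooKeeper.create(path, data, PERSISTENT_SEQUENTIAL);
// cleanupApplied is the kind of GC the comment asks about.
public class ProposalLogDemo {
    private final SortedMap<Long, String> log = new TreeMap<>();
    private long nextSeq = 0;

    // Append a proposal under the next sequence number.
    long submitProposal(String data) {
        log.put(nextSeq, data);
        return nextSeq++;
    }

    // Delete every proposal below the lowest sequence number any
    // learner still needs; without this, znodes accumulate forever.
    void cleanupApplied(long lowestUnappliedSeq) {
        log.headMap(lowestUnappliedSeq).clear();
    }

    int size() { return log.size(); }

    public static void main(String[] args) {
        ProposalLogDemo log = new ProposalLogDemo();
        log.submitProposal("mkdir /a");
        log.submitProposal("mkdir /b");
        log.submitProposal("mkdir /c");
        log.cleanupApplied(2);           // proposals 0 and 1 applied everywhere
        System.out.println(log.size());  // 1 proposal remains
    }
}
```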
[jira] [Created] (HADOOP-9985) HDFS Compatible ViewFileSystem
Lohit Vijayarenu created HADOOP-9985: Summary: HDFS Compatible ViewFileSystem Key: HADOOP-9985 URL: https://issues.apache.org/jira/browse/HADOOP-9985 Project: Hadoop Common Issue Type: Bug Reporter: Lohit Vijayarenu Fix For: 2.0.6-alpha There are multiple scripts and projects like pig, hive, and elephantbird that refer to the HDFS URI as hdfs://namenodehostport/ or hdfs:/// . In a federated namespace this causes problems, because the supported scheme for federation is viewfs:// . We would have to force all users to change their scripts/programs to be able to access a federated cluster. It would be great if there was a way to map the viewfs scheme to the hdfs scheme without exposing it to users. Opening this JIRA to get inputs from people who have thought about this in their clusters. In our clusters we ended up creating another class, HDFSCompatibleViewFileSystem, which hijacks both hdfs.fs.impl and viewfs.fs.impl and passes filesystem calls down to ViewFileSystem. Is there any suggested approach other than this? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
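The "hijack" idea can be sketched with a simplified scheme-to-implementation lookup. This is a toy analogue of Hadoop's per-scheme fs.*.impl configuration, not the actual mechanism: HDFSCompatibleViewFileSystem is the class named in the description, and the lookup helper here is invented for illustration.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Toy model of scheme -> FileSystem-implementation mapping. Registering
// the same view-backed implementation under both schemes lets old
// hdfs:// scripts transparently hit the federated namespace.
public class SchemeMappingDemo {
    // Simplified analogue of FileSystem.get(uri, conf): look up the
    // implementation class registered for the URI's scheme.
    static String implFor(Map<String, String> conf, URI uri) {
        return conf.get("fs." + uri.getScheme() + ".impl");
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Both schemes resolve to the same compatibility class.
        conf.put("fs.hdfs.impl", "HDFSCompatibleViewFileSystem");
        conf.put("fs.viewfs.impl", "HDFSCompatibleViewFileSystem");
        System.out.println(implFor(conf, URI.create("hdfs://nn:8020/data")));
        System.out.println(implFor(conf, URI.create("viewfs:///data")));
    }
}
```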
[jira] [Created] (HADOOP-9988) HDFS Compatible ViewFileSystem
Lohit Vijayarenu created HADOOP-9988: Summary: HDFS Compatible ViewFileSystem Key: HADOOP-9988 URL: https://issues.apache.org/jira/browse/HADOOP-9988 Project: Hadoop Common Issue Type: Bug Reporter: Lohit Vijayarenu Fix For: 2.0.6-alpha There are multiple scripts and projects like pig, hive, and elephantbird that refer to the HDFS URI as hdfs://namenodehostport/ or hdfs:/// . In a federated namespace this causes problems, because the supported scheme for federation is viewfs:// . We would have to force all users to change their scripts/programs to be able to access a federated cluster. It would be great if there was a way to map the viewfs scheme to the hdfs scheme without exposing it to users. Opening this JIRA to get inputs from people who have thought about this in their clusters. In our clusters we ended up creating another class, HDFSCompatibleViewFileSystem, which hijacks both hdfs.fs.impl and viewfs.fs.impl and passes filesystem calls down to ViewFileSystem. Is there any suggested approach other than this? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9987) HDFS Compatible ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-9987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13773428#comment-13773428 ] Lohit Vijayarenu commented on HADOOP-9987: -- Can somebody close this? It was opened by mistake as part of a retry of HADOOP-9985. I don't seem to have the option to close this JIRA. HDFS Compatible ViewFileSystem -- Key: HADOOP-9987 URL: https://issues.apache.org/jira/browse/HADOOP-9987 Project: Hadoop Common Issue Type: Bug Reporter: Lohit Vijayarenu Fix For: 2.0.6-alpha There are multiple scripts and projects like pig, hive, and elephantbird that refer to the HDFS URI as hdfs://namenodehostport/ or hdfs:/// . In a federated namespace this causes problems, because the supported scheme for federation is viewfs:// . We would have to force all users to change their scripts/programs to be able to access a federated cluster. It would be great if there was a way to map the viewfs scheme to the hdfs scheme without exposing it to users. Opening this JIRA to get inputs from people who have thought about this in their clusters. In our clusters we ended up creating another class, HDFSCompatibleViewFileSystem, which hijacks both hdfs.fs.impl and viewfs.fs.impl and passes filesystem calls down to ViewFileSystem. Is there any suggested approach other than this? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9986) HDFS Compatible ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-9986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13773426#comment-13773426 ] Lohit Vijayarenu commented on HADOOP-9986: -- Can somebody close this? It was opened by mistake as part of a retry of HADOOP-9985. HDFS Compatible ViewFileSystem -- Key: HADOOP-9986 URL: https://issues.apache.org/jira/browse/HADOOP-9986 Project: Hadoop Common Issue Type: Bug Reporter: Lohit Vijayarenu Fix For: 2.0.6-alpha There are multiple scripts and projects like pig, hive, and elephantbird that refer to the HDFS URI as hdfs://namenodehostport/ or hdfs:/// . In a federated namespace this causes problems, because the supported scheme for federation is viewfs:// . We would have to force all users to change their scripts/programs to be able to access a federated cluster. It would be great if there was a way to map the viewfs scheme to the hdfs scheme without exposing it to users. Opening this JIRA to get inputs from people who have thought about this in their clusters. In our clusters we ended up creating another class, HDFSCompatibleViewFileSystem, which hijacks both hdfs.fs.impl and viewfs.fs.impl and passes filesystem calls down to ViewFileSystem. Is there any suggested approach other than this? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-9988) HDFS Compatible ViewFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu resolved HADOOP-9988. -- Resolution: Duplicate Release Note: JIRA issue creation had some problems; closing this as a duplicate of HADOOP-9985. HDFS Compatible ViewFileSystem -- Key: HADOOP-9988 URL: https://issues.apache.org/jira/browse/HADOOP-9988 Project: Hadoop Common Issue Type: Bug Reporter: Lohit Vijayarenu Fix For: 2.0.6-alpha There are multiple scripts and projects like pig, hive, and elephantbird that refer to the HDFS URI as hdfs://namenodehostport/ or hdfs:/// . In a federated namespace this causes problems, because the supported scheme for federation is viewfs:// . We would have to force all users to change their scripts/programs to be able to access a federated cluster. It would be great if there was a way to map the viewfs scheme to the hdfs scheme without exposing it to users. Opening this JIRA to get inputs from people who have thought about this in their clusters. In our clusters we ended up creating another class, HDFSCompatibleViewFileSystem, which hijacks both hdfs.fs.impl and viewfs.fs.impl and passes filesystem calls down to ViewFileSystem. Is there any suggested approach other than this? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13770111#comment-13770111 ] Lohit Vijayarenu commented on HADOOP-9631: -- [~cnauroth] Can you please help review the latest patch? ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java On a cluster with ViewFS as the default FileSystem, creating files using FileContext will always result in a replication factor of 1, instead of the underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Status: Open (was: Patch Available) ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, HADOOP-9631.trunk.3.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Status: Patch Available (was: Open) ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Attachment: HADOOP-9631.trunk.4.patch Patch to fix a javadoc warning. The org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks failure does not appear to be related to this patch; I see this failure for other patches too. ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java On a cluster with ViewFS as the default FileSystem, creating files using FileContext will always result in a replication factor of 1, instead of the underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Attachment: HADOOP-9631.trunk.3.patch Another try at this patch. The problem I was facing was passing down the getServerDefaults() call from the InternalViewOfDir class. So, in ViewFs::getServerDefaults() I do a match against the mount points: if there is a matching mount point, return that filesystem's defaults; otherwise return the local config defaults as before. This was needed so that we can handle cases such as FileNotFound and invalid dir/file creation. ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, HADOOP-9631.trunk.3.patch, TestFileContext.java On a cluster with ViewFS as the default FileSystem, creating files using FileContext will always result in a replication factor of 1, instead of the underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Status: Patch Available (was: Open) ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, HADOOP-9631.trunk.3.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686409#comment-13686409 ] Lohit Vijayarenu commented on HADOOP-9631: -- Taking a look at how the ViewFs mount table is constructed, it looks like there is no straightforward way to get the TargetFileSystem given a mount point or path. I started by changing getServerDefaults to accept a Path. Now, the create call could pass a Path which does not exist, so we would have to resolve the Path to a mount point. I see that ViewFileSystem::resolve can fetch the mount point, but there is no easy way to fetch the underlying FileSystem. '/' seems to always resolve to InternalViewOfDir, which does not have a targetFileSystem. Any suggestions for an easy way to fetch this? ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, TestFileContext.java On a cluster with ViewFS as the default FileSystem, creating files using FileContext will always result in a replication factor of 1, instead of the underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
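The path-to-mount-point resolution being discussed is essentially a longest-prefix match over the mount table. Below is a toy sketch of that lookup; the flat table and class names are hypothetical simplifications, not ViewFs's actual internal structure.

```java
import java.util.Map;
import java.util.TreeMap;

// Toy longest-prefix mount resolution: find the most specific mount
// whose prefix covers the path, and report its target filesystem.
// Paths outside any mount (like the internal root dir) resolve to null.
public class MountResolveDemo {
    // mount prefix -> name of the target filesystem
    static final TreeMap<String, String> MOUNTS = new TreeMap<>();

    static String resolve(String path) {
        String best = null;
        for (String prefix : MOUNTS.keySet()) {
            if (path.equals(prefix) || path.startsWith(prefix + "/")) {
                // keep the longest (most specific) matching prefix
                if (best == null || prefix.length() > best.length()) best = prefix;
            }
        }
        return best == null ? null : MOUNTS.get(best);
    }

    public static void main(String[] args) {
        MOUNTS.put("/user", "hdfs://nn1");
        MOUNTS.put("/user/project", "hdfs://nn2");
        System.out.println(resolve("/user/project/data"));  // hdfs://nn2
        System.out.println(resolve("/tmp"));                // null (no mount)
    }
}
```

Note the path being resolved need not exist yet (the create case from the comment): only its prefix has to match a mount.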
[jira] [Commented] (HADOOP-9557) hadoop-client excludes commons-httpclient
[ https://issues.apache.org/jira/browse/HADOOP-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683525#comment-13683525 ] Lohit Vijayarenu commented on HADOOP-9557: -- Can anyone please review/commit this? hadoop-client excludes commons-httpclient - Key: HADOOP-9557 URL: https://issues.apache.org/jira/browse/HADOOP-9557 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9557.1.patch The hadoop-client pom excludes the commons-httpclient jar, while it is used when running pig in local mode and also by httpfs clients -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683523#comment-13683523 ] Lohit Vijayarenu commented on HADOOP-9631: -- Can anyone please review this. ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Status: Open (was: Patch Available) ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683631#comment-13683631 ] Lohit Vijayarenu commented on HADOOP-9631: -- [~cnauroth] Thanks for the review. I see the problem where viewfs might be mapped to different filesystems. Let me revisit my initial patch which was trying to use Path to resolve to FileSystem mountpoint and update JIRA. ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Attachment: HADOOP-9631.trunk.1.patch Attached is a patch which deprecates getServerDefaults() and adds getServerDefaults(Path). Tested this by deploying on one of our YARN clusters and could see app logs getting created with a replication factor of 3 instead of 1. ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, TestFileContext.java On a cluster with ViewFS as the default FileSystem, creating files using FileContext will always result in a replication factor of 1, instead of the underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Status: Patch Available (was: Open) ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Status: Open (was: Patch Available) ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Status: Patch Available (was: Open) ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, TestFileContext.java On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Attachment: HADOOP-9631.trunk.2.patch While trying to see why the test cases failed, I realized that there is an easier way to do this. In the end each filesystem has server defaults, so in viewfs we just have to pick the right filesystem and pass the call down to it. Made a change to use the home directory path to choose the underlying filesystem. Attaching a new patch with this change. Now all viewfs tests as well as the new test pass. ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, TestFileContext.java On a cluster with ViewFS as the default FileSystem, creating files using FileContext will always result in a replication factor of 1, instead of the underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
Lohit Vijayarenu created HADOOP-9631: Summary: ViewFs should use underlying FileSystem's server side defaults Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu On a cluster with ViewFS as default FileSystem, creating files using FileContext will always result with replication factor of 1, instead of underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults
[ https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9631: - Attachment: TestFileContext.java We see a case where the NodeManager log aggregation service uses FileContext to move container logs from the local filesystem to HDFS. If the default filesystem on an HDFS cluster is set to viewfs:///, then FileContext internally ends up using ViewFs. While doing so, any files created via FileContext are created with a replication factor of 1. This is because ViewFs getServerDefaults returns local filesystem defaults:
{noformat}
@Override
public FsServerDefaults getServerDefaults() throws IOException {
  return LocalConfigKeys.getServerDefaults();
}
{noformat}
This becomes a problem for anyone using the new FileContext API on top of a federated namespace. Note that this is not a problem with the FileSystem APIs. Attached is a test program which creates a file using both APIs on HDFS; doing ls on the files shows they have different replication factors. The file created using FileContext has a replication factor of 1, while the file created using FileSystem has a replication factor of 3. ViewFs should use underlying FileSystem's server side defaults -- Key: HADOOP-9631 URL: https://issues.apache.org/jira/browse/HADOOP-9631 Project: Hadoop Common Issue Type: Bug Components: fs, viewfs Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: TestFileContext.java On a cluster with ViewFS as the default FileSystem, creating files using FileContext will always result in a replication factor of 1, instead of the underlying filesystem default (like HDFS) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
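The replication-factor symptom above can be illustrated with a toy model (names, mount prefix, and numbers are illustrative only, not the patch's code): a pathless getServerDefaults can only return one static answer, while a path-aware variant can delegate to the filesystem actually backing the path.

```java
// Toy contrast between pathless and path-aware server defaults.
// The pathless call mirrors the pre-patch ViewFs behavior (local
// defaults, replication 1); the path-aware call delegates to the
// mount backing the path (replication 3 for an HDFS-like mount).
public class ServerDefaultsDemo {
    static final short LOCAL_REPLICATION = 1;
    static final short HDFS_REPLICATION = 3;

    // Old behavior: no path, so local defaults are all it can give.
    static short getDefaultReplication() {
        return LOCAL_REPLICATION;
    }

    // Patched idea: consult the mount backing the path.
    static short getDefaultReplication(String path) {
        if (path.startsWith("/hdfs-mount")) return HDFS_REPLICATION;
        return LOCAL_REPLICATION;  // unmatched paths keep the old fallback
    }

    public static void main(String[] args) {
        System.out.println(getDefaultReplication());                  // 1
        System.out.println(getDefaultReplication("/hdfs-mount/app")); // 3
    }
}
```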
[jira] [Commented] (HADOOP-9564) DFSClient$DFSOutputStream.closeInternal locks up waiting for namenode.complete
[ https://issues.apache.org/jira/browse/HADOOP-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658121#comment-13658121 ] Lohit Vijayarenu commented on HADOOP-9564: -- Will try to see if this is something specific to the environment and update this JIRA.
DFSClient$DFSOutputStream.closeInternal locks up waiting for namenode.complete -- Key: HADOOP-9564 URL: https://issues.apache.org/jira/browse/HADOOP-9564 Project: Hadoop Common Issue Type: Bug Components: fs Reporter: Jin Feng Priority: Minor
Hi, our component uses FileSystem.copyFromLocalFile to copy a local file to an HDFS cluster. It works fine in our production environment. Its integration tests used to run fine on our dev's local Mac laptop until recently (exact point in time unknown), when our tests started to freeze up very frequently with this stack:
{code}
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for 0x000152f41378 (a java.util.concurrent.FutureTask$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:248)
at java.util.concurrent.FutureTask.get(FutureTask.java:111)
at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
- locked 0x00014f568720 (a java.lang.Object)
at org.apache.hadoop.ipc.Client.call(Client.java:1080)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
at $Proxy37.complete(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy37.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3566)
- locked 0x000152f3f658 (a org.apache.hadoop.hdfs.DFSClient$DFSOutputStream)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3481)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:89)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:224)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1295)
{code}
Our version is 0.20.2.cdh3u2-t1. In the test suite, we use org.apache.hadoop.hdfs.MiniDFSCluster. I've searched around and couldn't find anything that resembles this symptom; any help is really appreciated!
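The stack shows close() parked indefinitely on the namenode.complete RPC. Until the root cause is found, one way a test suite could guard against this kind of hang is to bound the wait on close(), so a stuck RPC fails the test instead of freezing the whole run. The sketch below is a hypothetical diagnostic helper, not part of Hadoop; the class name, helper name, and timeout value are all illustrative.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical test-side workaround: run close() on a worker thread and wait
// a bounded amount of time. A hung namenode.complete() call then surfaces as
// a java.util.concurrent.TimeoutException rather than a frozen test suite.
public class BoundedClose {
    public static void closeWithTimeout(OutputStream out, long seconds)
            throws Exception {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            // Callable lambda so the checked IOException from close() is allowed.
            Future<?> f = ex.submit(() -> { out.close(); return null; });
            f.get(seconds, TimeUnit.SECONDS); // throws TimeoutException if stuck
        } finally {
            ex.shutdownNow(); // interrupt the worker if it is still blocked
        }
    }

    public static void main(String[] args) throws Exception {
        // A well-behaved stream closes promptly; a hung DFSOutputStream would
        // make closeWithTimeout throw instead of parking forever.
        closeWithTimeout(new ByteArrayOutputStream(), 5);
        System.out.println("closed within timeout");
    }
}
```

Note this only converts the hang into a visible failure; the underlying DFSClient/MiniDFSCluster issue still needs diagnosing.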
[jira] [Updated] (HADOOP-9557) hadoop-client excludes commons-httpclient
[ https://issues.apache.org/jira/browse/HADOOP-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9557: - Status: Patch Available (was: Open)
hadoop-client excludes commons-httpclient - Key: HADOOP-9557 URL: https://issues.apache.org/jira/browse/HADOOP-9557 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9557.1.patch The hadoop-client pom excludes the commons-httpclient jar, although it is used when running Pig in local mode and also by httpfs clients.
[jira] [Created] (HADOOP-9557) hadoop-client excludes commons-httpclient
Lohit Vijayarenu created HADOOP-9557: Summary: hadoop-client excludes commons-httpclient Key: HADOOP-9557 URL: https://issues.apache.org/jira/browse/HADOOP-9557 Project: Hadoop Common Issue Type: Bug Reporter: Lohit Vijayarenu The hadoop-client pom excludes the commons-httpclient jar, although it is used when running Pig in local mode and also by httpfs clients.
[jira] [Commented] (HADOOP-9557) hadoop-client excludes commons-httpclient
[ https://issues.apache.org/jira/browse/HADOOP-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13653216#comment-13653216 ] Lohit Vijayarenu commented on HADOOP-9557: -- One exception seen is:
{noformat}
Caused by: java.lang.ClassNotFoundException: org.apache.commons.httpclient.HttpMethod
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
... 1 more
Exception in thread "Thread-3" java.lang.NoClassDefFoundError: org/apache/commons/httpclient/HttpMethod
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:480)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.httpclient.HttpMethod
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
... 1 more
{noformat}
hadoop-client excludes commons-httpclient - Key: HADOOP-9557 URL: https://issues.apache.org/jira/browse/HADOOP-9557 Project: Hadoop Common Issue Type: Bug Reporter: Lohit Vijayarenu The hadoop-client pom excludes the commons-httpclient jar, although it is used when running Pig in local mode and also by httpfs clients.
[jira] [Updated] (HADOOP-9557) hadoop-client excludes commons-httpclient
[ https://issues.apache.org/jira/browse/HADOOP-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9557: - Affects Version/s: 2.0.4-alpha
hadoop-client excludes commons-httpclient - Key: HADOOP-9557 URL: https://issues.apache.org/jira/browse/HADOOP-9557 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu The hadoop-client pom excludes the commons-httpclient jar, although it is used when running Pig in local mode and also by httpfs clients.
[jira] [Updated] (HADOOP-9557) hadoop-client excludes commons-httpclient
[ https://issues.apache.org/jira/browse/HADOOP-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HADOOP-9557: - Attachment: HADOOP-9557.1.patch Attached a patch to remove the exclusion.
hadoop-client excludes commons-httpclient - Key: HADOOP-9557 URL: https://issues.apache.org/jira/browse/HADOOP-9557 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.4-alpha Reporter: Lohit Vijayarenu Attachments: HADOOP-9557.1.patch The hadoop-client pom excludes the commons-httpclient jar, although it is used when running Pig in local mode and also by httpfs clients.
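The attached HADOOP-9557.1.patch is the authoritative change. Purely as an illustration of the fix direction (the exact pom coordinates and structure below are assumed, not taken from the patch), the hadoop-client pom's dependency on hadoop-common carried an exclusion for commons-httpclient, and removing that exclusion lets the jar reach client classpaths such as Pig local mode:

```xml
<!-- hadoop-client/pom.xml: illustrative fragment only, not the actual patch -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <!-- The exclusion of commons-httpclient:commons-httpclient that used to
         appear here is removed, so the transitive dependency is restored. -->
  </exclusions>
</dependency>
```

After the change, `mvn dependency:tree` on a project depending on hadoop-client should show commons-httpclient as a transitive dependency again.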
[jira] [Commented] (HADOOP-7020) establish a Powered by Hadoop logo
[ https://issues.apache.org/jira/browse/HADOOP-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13047283#comment-13047283 ] Lohit Vijayarenu commented on HADOOP-7020: -- The silver circle logo is very nice. Would love to have it as a laptop sticker or t-shirt logo :) establish a Powered by Hadoop logo Key: HADOOP-7020 URL: https://issues.apache.org/jira/browse/HADOOP-7020 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: site Reporter: Doug Cutting Assignee: Doug Cutting Fix For: site Attachments: PoweredByHadoop_Small.jpg, hadoop-elephant-pb.jpeg, powered-by-hadoop-small.png, powered-by-hadoop.png We should agree on a Powered By Hadoop logo, as suggested in: http://www.apache.org/foundation/marks/pmcs#poweredby -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira