[jira] Created: (HDFS-1565) Make DataNode respect dfs.datanode.failed.volumes.tolerated at startup
Make DataNode respect dfs.datanode.failed.volumes.tolerated at startup -- Key: HDFS-1565 URL: https://issues.apache.org/jira/browse/HDFS-1565 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Reporter: Jeff Hammerbacher Currently, the DataNode will not start up if a single volume has failed; HDFS-1161 added a configuration parameter to raise this threshold. We should respect this configuration at startup. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
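The parameter discussed above is set in hdfs-site.xml; a minimal sketch (the value here is illustrative, not a recommendation from this thread):

```xml
<!-- Sketch of the setting discussed above; the value 1 is illustrative only. -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- Number of volumes allowed to fail before the DataNode refuses to serve;
       0 means any single volume failure is fatal. -->
  <value>1</value>
</property>
```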
[jira] Resolved: (HDFS-1565) Make DataNode respect dfs.datanode.failed.volumes.tolerated at startup
[ https://issues.apache.org/jira/browse/HDFS-1565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Hammerbacher resolved HDFS-1565. - Resolution: Duplicate Misread Eli's message on the HDFS mailing list; apparently this feature is covered by HDFS-1158.
[jira] Updated: (HDFS-1506) Refactor fsimage loading code
[ https://issues.apache.org/jira/browse/HDFS-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Hammerbacher updated HDFS-1506: Summary: Refactor fsimage loading code (was: Refator fsimage loading code) Refactor fsimage loading code - Key: HDFS-1506 URL: https://issues.apache.org/jira/browse/HDFS-1506 Project: Hadoop HDFS Issue Type: Improvement Components: name-node Affects Versions: 0.23.0 Reporter: Hairong Kuang Assignee: Hairong Kuang I plan to do some code refactoring to make HDFS-1070 simpler.
[jira] Commented: (HDFS-1499) mv the namenode NameSpace and BlocksMap to hbase to save the namenode memory
[ https://issues.apache.org/jira/browse/HDFS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12932012#action_12932012 ] Jeff Hammerbacher commented on HDFS-1499: - See https://issues.apache.org/jira/browse/HDFS-1051 for the umbrella JIRA for scaling the NN. There, you'll find a comment that points to http://code.google.com/p/hdfs-dnn, which implements the strategy you suggest. mv the namenode NameSpace and BlocksMap to hbase to save the namenode memory Key: HDFS-1499 URL: https://issues.apache.org/jira/browse/HDFS-1499 Project: Hadoop HDFS Issue Type: Improvement Components: name-node Reporter: dl.brain.ln The NameNode stores all its metadata in the main memory of the machine on which it is deployed. As the file and block counts grow, the NameNode machine cannot hold any more files and blocks in memory, which restricts the growth of the HDFS cluster. Many people are thinking about this problem. Google's next version of GFS uses Bigtable to store the metadata of the DFS, and that seems to work. What if we did the same with HBase? In the NameNode, the filesystem namespace and the block-to-datanodes and datanode-to-blocks maps kept in memory consume most of the heap; what if we stored those data structures in HBase to reduce the NameNode's memory use?
[jira] Commented: (HDFS-1051) Umbrella Jira for Scaling the HDFS Name Service
[ https://issues.apache.org/jira/browse/HDFS-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12930854#action_12930854 ] Jeff Hammerbacher commented on HDFS-1051: - Another path to scalability: BAH's Aaron Cordova has a prototype of using HBase for storing the NN metadata up at http://code.google.com/p/hdfs-dnn/. Umbrella Jira for Scaling the HDFS Name Service --- Key: HDFS-1051 URL: https://issues.apache.org/jira/browse/HDFS-1051 Project: Hadoop HDFS Issue Type: New Feature Affects Versions: 0.22.0 Reporter: Sanjay Radia Assignee: Sanjay Radia The HDFS Name service currently uses a single Namenode which limits its scalability. This is a master jira to track sub-jiras to address this problem.
[jira] Created: (HDFS-1454) Update the documentation to reflect true client caching strategy
Update the documentation to reflect true client caching strategy Key: HDFS-1454 URL: https://issues.apache.org/jira/browse/HDFS-1454 Project: Hadoop HDFS Issue Type: Improvement Components: documentation, hdfs client Reporter: Jeff Hammerbacher As noted on the mailing list (http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-user/201010.mbox/%3caanlkti=2csk+ay05btouo-uzv=o4w6ox2pq4nxgpd...@mail.gmail.com%3e), the Staging section of http://hadoop.apache.org/hdfs/docs/r0.21.0/hdfs_design.html#Data+Organization is out of date.
[jira] Commented: (HDFS-1432) HDFS across data centers: HighTide
[ https://issues.apache.org/jira/browse/HDFS-1432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12920171#action_12920171 ] Jeff Hammerbacher commented on HDFS-1432: - bq. BTW, what is a BCP cluster? http://en.wikipedia.org/wiki/Business_continuity_planning HDFS across data centers: HighTide -- Key: HDFS-1432 URL: https://issues.apache.org/jira/browse/HDFS-1432 Project: Hadoop HDFS Issue Type: Improvement Reporter: dhruba borthakur Assignee: dhruba borthakur There are many instances when the same piece of data resides on multiple HDFS clusters in different data centers, primarily because the physical capacity of one data center is insufficient to host the entire data set. In that case, the administrator(s) typically partition the data into two (or more) HDFS clusters in two different data centers and then duplicate some subset of that data into both HDFS clusters. In such a situation, there are six physical copies of the duplicated data: three copies in one data center and another three in the other. It would be nice if we could keep fewer than 3 replicas in each data center and have the ability to fix a replica in the local data center by copying data from the remote copy in the remote data center.
[jira] Commented: (HDFS-1448) Create multi-format parser for edits logs file, support binary and XML formats initially
[ https://issues.apache.org/jira/browse/HDFS-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12920172#action_12920172 ] Jeff Hammerbacher commented on HDFS-1448: - Would it be possible to write the edits log as an Avro data file? We'd then get these tools for free. Create multi-format parser for edits logs file, support binary and XML formats initially Key: HDFS-1448 URL: https://issues.apache.org/jira/browse/HDFS-1448 Project: Hadoop HDFS Issue Type: New Feature Components: tools Affects Versions: 0.22.0 Reporter: Erik Steffl Priority: Minor Fix For: 0.22.0 Attachments: editsStored, HDFS-1448-0.22.patch Create a multi-format parser for the edits log file, supporting binary and XML formats initially. Parsing should work from any supported format to any other supported format (e.g. from binary to XML and from XML to binary). The binary format is the format used by the FSEditLog class to read/write the edits file. The primary reason to develop this tool is to help with troubleshooting: the binary format is hard for human troubleshooters to read and edit. Longer term it could be used to clean up and minimize the parsers for the fsimage and edits files. The edits parser OfflineEditsViewer is written in a very similar fashion to OfflineImageViewer. The next step would be to merge OfflineImageViewer and OfflineEditsViewer and use the result in both FSImage and FSEditLog. This is subject to change, specifically depending on the adoption of Avro (which would completely change how objects are serialized, as well as provide ways to convert files to different formats).
[jira] Commented: (HDFS-1448) Create multi-format parser for edits logs file, support binary and XML formats initially
[ https://issues.apache.org/jira/browse/HDFS-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12920174#action_12920174 ] Jeff Hammerbacher commented on HDFS-1448: - Sorry, saw that you mentioned Avro near the end of the description. Would still be curious to hear what you would need out of Avro to make it a reasonable choice here.
[jira] Commented: (HDFS-1435) Provide an option to store fsimage compressed
[ https://issues.apache.org/jira/browse/HDFS-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12918800#action_12918800 ] Jeff Hammerbacher commented on HDFS-1435: - bq. Jeff, I'd like to take a look at the Avro file format. Do you know if the Avro file format has any more overhead than the current fsimage format? I don't know about the current fsimage format. The Avro format, however, is detailed in the Avro spec: http://avro.apache.org/docs/current/spec.html#Object+Container+Files Provide an option to store fsimage compressed - Key: HDFS-1435 URL: https://issues.apache.org/jira/browse/HDFS-1435 Project: Hadoop HDFS Issue Type: Improvement Components: name-node Affects Versions: 0.22.0 Reporter: Hairong Kuang Assignee: Hairong Kuang Fix For: 0.22.0 Our HDFS has an fsimage as big as 20 GB. It consumes a lot of network bandwidth when the secondary NN uploads a new fsimage to the primary NN. If we could store the fsimage compressed, the problem would be greatly alleviated. I plan to provide a new configuration hdfs.image.compressed with a default value of false. If it is set to true, the fsimage is stored compressed. The fsimage will have a new layout with a new field, compressed, in its header, indicating whether the namespace is stored compressed or not.
[jira] Commented: (HDFS-1435) Provide an option to store fsimage compressed
[ https://issues.apache.org/jira/browse/HDFS-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12916795#action_12916795 ] Jeff Hammerbacher commented on HDFS-1435: - Could we use the Avro file format to store the fsimage? We've designed configurable compression into the format, and tools will automatically be available for inspection of the file.
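As a rough illustration of the header-flag approach discussed in HDFS-1435 (a compressed field in the image header that tells the reader how to interpret the body), here is a minimal Python sketch. The framing and field names are invented for illustration; this is not the actual fsimage layout.

```python
import gzip
import io
import struct

def write_image(records: list[bytes], compressed: bool) -> bytes:
    """Write a toy 'image': a 1-byte compressed flag, then length-prefixed records."""
    body = b"".join(struct.pack(">I", len(r)) + r for r in records)
    if compressed:
        body = gzip.compress(body)
    return struct.pack(">?", compressed) + body

def read_image(data: bytes) -> list[bytes]:
    """Read the flag from the header, decompress the body only if the flag says to."""
    (compressed,) = struct.unpack_from(">?", data)
    body = data[1:]
    if compressed:
        body = gzip.decompress(body)
    records, buf = [], io.BytesIO(body)
    while chunk := buf.read(4):
        (n,) = struct.unpack(">I", chunk)
        records.append(buf.read(n))
    return records
```

The point of the flag is that the reader needs no out-of-band configuration: images written before and after the compression option was enabled both round-trip through the same reader.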
[jira] Commented: (HDFS-1367) Add alternative search-provider to HDFS site
[ https://issues.apache.org/jira/browse/HDFS-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905788#action_12905788 ] Jeff Hammerbacher commented on HDFS-1367: - Hey Alex, I have noticed that you're proposing this alternative search provider for a number of ASF projects. Have you contacted the ASF Infrastructure team about making this a standard feature of Apache projects, hosted on ASF infrastructure? I've seen similar things happen with continuous integration, code reviews, and other ancillary utilities for ASF projects, and this could be one more benefit of working with the ASF. What do you think? Thanks, Jeff Add alternative search-provider to HDFS site Key: HDFS-1367 URL: https://issues.apache.org/jira/browse/HDFS-1367 Project: Hadoop HDFS Issue Type: Improvement Reporter: Alex Baranau Priority: Minor Attachments: HDFS-1367.patch Use the search-hadoop.com service to make search available across HDFS sources, mailing lists, wiki, etc. This was initially proposed on the user mailing list. The search service was already added to the site's skin (common to all Hadoop-related projects), so this issue is about enabling it for HDFS. The ultimate goal is to use it on all Hadoop sub-projects' sites.
[jira] Commented: (HDFS-1253) bin/hdfs conflicts with common user shortcut
[ https://issues.apache.org/jira/browse/HDFS-1253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12881375#action_12881375 ] Jeff Hammerbacher commented on HDFS-1253: - Hey Allen, Can you point to places on the net where the hdfs alias is used? It hasn't been as common in environments in which I've worked. I'm afraid we're optimizing for a potential use case rather than a real use case. In any case, as you pointed out in IRC, if a user has an alias for hdfs, that will take precedence over their PATH setting, so it's unlikely that they'll get bitten too hard. Thanks, Jeff bin/hdfs conflicts with common user shortcut Key: HDFS-1253 URL: https://issues.apache.org/jira/browse/HDFS-1253 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.21.0 Reporter: Allen Wittenauer Priority: Blocker Fix For: 0.21.0 The 'hdfs' command introduced in 0.21 (unreleased at this time) conflicts with a common user alias and wrapper script. This change should either be reverted or moved from $HADOOP_HOME/bin to somewhere else in $HADOOP_HOME (perhaps sbin?) so that users do not accidentally hit it.
[jira] Created: (HDFS-1120) Make DataNode's block-to-device placement policy pluggable
Make DataNode's block-to-device placement policy pluggable -- Key: HDFS-1120 URL: https://issues.apache.org/jira/browse/HDFS-1120 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Reporter: Jeff Hammerbacher As discussed on the mailing list, as the number of disk drives per server increases, it would be useful to allow the DataNode's policy for new block placement to grow in sophistication from the current round-robin strategy.
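To illustrate what "pluggable" could mean here, a hypothetical Python sketch: a policy interface with the current round-robin behavior as one implementation and a free-space-aware alternative as another. All names are invented for illustration; they are not HDFS APIs.

```python
from abc import ABC, abstractmethod
from itertools import count

class VolumeChoosingPolicy(ABC):
    """Hypothetical interface: pick a volume (device) for a new block replica."""
    @abstractmethod
    def choose(self, volumes: list[dict]) -> dict: ...

class RoundRobinPolicy(VolumeChoosingPolicy):
    """Mimics the current DataNode behavior: cycle through volumes in order."""
    def __init__(self):
        self._counter = count()
    def choose(self, volumes):
        return volumes[next(self._counter) % len(volumes)]

class MostFreeSpacePolicy(VolumeChoosingPolicy):
    """A more sophisticated alternative: prefer the volume with the most free bytes."""
    def choose(self, volumes):
        return max(volumes, key=lambda v: v["free_bytes"])
```

The DataNode would instantiate one policy from configuration at startup, so operators could swap strategies without code changes to the block-writing path.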
[jira] Created: (HDFS-1121) Allow HDFS client to measure distribution of blocks across devices for a specific DataNode
Allow HDFS client to measure distribution of blocks across devices for a specific DataNode -- Key: HDFS-1121 URL: https://issues.apache.org/jira/browse/HDFS-1121 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs client Reporter: Jeff Hammerbacher As discussed on the mailing list, it would be useful if the DfsClient could measure the distribution of blocks across devices for an individual DataNode.
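A hypothetical sketch of the kind of measurement this issue asks for: given per-device block counts for one DataNode, summarize how evenly the blocks are spread. The report fields are invented for illustration, not part of any HDFS API.

```python
from statistics import mean, pstdev

def block_distribution(blocks_per_device: dict[str, int]) -> dict:
    """Summarize block spread across devices; a cv near 0 means well balanced."""
    counts = list(blocks_per_device.values())
    avg = mean(counts)
    spread = pstdev(counts)
    return {
        "total": sum(counts),
        "mean": avg,
        "stdev": spread,
        # Coefficient of variation: stdev relative to the mean, so clusters of
        # different sizes can be compared on the same scale.
        "cv": spread / avg if avg else 0.0,
    }
```

A client exposing something like this would make it easy to spot the skew that a smarter placement policy (HDFS-1120) is meant to prevent.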
[jira] Commented: (HDFS-1051) Umbrella Jira for Scaling the HDFS Name Service
[ https://issues.apache.org/jira/browse/HDFS-1051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862148#action_12862148 ] Jeff Hammerbacher commented on HDFS-1051: - Another piece of research germane to this JIRA: Haceph: Scalable Metadata Management for Hadoop using Ceph from UCSC (http://www.soe.ucsc.edu/~carlosm/Papers/eestolan-nsdi10-abstract.pdf)
[jira] Commented: (HDFS-1064) NN Availability - umbrella Jira
[ https://issues.apache.org/jira/browse/HDFS-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853433#action_12853433 ] Jeff Hammerbacher commented on HDFS-1064: - There is some documentation on how to use the UpRight library to achieve high availability for HDFS at http://code.google.com/p/upright/wiki/HDFSUpRightOverview NN Availability - umbrella Jira --- Key: HDFS-1064 URL: https://issues.apache.org/jira/browse/HDFS-1064 Project: Hadoop HDFS Issue Type: New Feature Reporter: Sanjay Radia This is an umbrella jira for discussing availability of the HDFS NN and providing references to other Jiras that improve its availability. This includes, but is not limited to, automatic failover.
[jira] Commented: (HDFS-1064) NN Availability - umbrella Jira
[ https://issues.apache.org/jira/browse/HDFS-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12849490#action_12849490 ] Jeff Hammerbacher commented on HDFS-1064: - Hey, Could someone comment on how the BackupNameNode (BNN) differs from the AvatarNode (AN) that Dhruba is proposing? Thanks, Jeff
[jira] Commented: (HDFS-1064) NN Availability - umbrella Jira
[ https://issues.apache.org/jira/browse/HDFS-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12849492#action_12849492 ] Jeff Hammerbacher commented on HDFS-1064: - It's also probably worth linking to some of the DRBD-based work on HA that's been done elsewhere * http://www.scribd.com/doc/20971412/Hadoop-World-Production-Deep-Dive-with-High-Availability * http://www.cloudera.com/blog/2009/07/hadoop-ha-configuration/
[jira] Commented: (HDFS-1064) NN Availability - umbrella Jira
[ https://issues.apache.org/jira/browse/HDFS-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12849495#action_12849495 ] Jeff Hammerbacher commented on HDFS-1064: - Sorry, can't edit the above, but wanted to link to Dhruba's blog posts in case others had missed them: * http://hadoopblog.blogspot.com/2009/11/hdfs-high-availability.html * http://hadoopblog.blogspot.com/2010/02/hadoop-namenode-high-availability.html Also, a research paper: http://portal.acm.org/citation.cfm?id=1651271
[jira] Commented: (HDFS-1052) HDFS scalability with multiple namenodes
[ https://issues.apache.org/jira/browse/HDFS-1052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12848568#action_12848568 ] Jeff Hammerbacher commented on HDFS-1052: - Ryan: see Sanjay's comment at https://issues.apache.org/jira/browse/HDFS-1051?focusedCommentId=12848235&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12848235 HDFS scalability with multiple namenodes Key: HDFS-1052 URL: https://issues.apache.org/jira/browse/HDFS-1052 Project: Hadoop HDFS Issue Type: New Feature Components: name-node Affects Versions: 0.22.0 Reporter: Suresh Srinivas Assignee: Suresh Srinivas HDFS currently uses a single namenode that limits scalability of the cluster. This jira proposes an architecture to scale the nameservice horizontally using multiple namenodes.
[jira] Commented: (HDFS-916) Rewrite DFSOutputStream to use a single thread with NIO
[ https://issues.apache.org/jira/browse/HDFS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12805635#action_12805635 ] Jeff Hammerbacher commented on HDFS-916: +1 Rewrite DFSOutputStream to use a single thread with NIO --- Key: HDFS-916 URL: https://issues.apache.org/jira/browse/HDFS-916 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs client Affects Versions: 0.22.0 Reporter: Todd Lipcon Assignee: Todd Lipcon The DFS write pipeline code has some really hairy multi-threaded synchronization. There have been a lot of bugs produced by this (HDFS-101, HDFS-793, HDFS-915, tens of others) since it's very hard to understand the message passing, lock sharing, and interruption properties. The reason for the multiple threads is to be able to simultaneously send and receive. If instead of using multiple threads, it used nonblocking IO, I think the whole thing would be a lot less error prone. I think we could do this in two halves: one half is the DFSOutputStream. The other half is BlockReceiver. I opened this JIRA first as I think it's simpler (only one TCP connection to deal with, rather than an up and downstream) Opinions? Am I crazy? I would like to see some agreement on the idea before I spend time writing code.
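A toy illustration of the idea described in HDFS-916 (a Python stand-in, not the DFSOutputStream design itself): one thread multiplexes sending packets and receiving acks over a socket with nonblocking IO, instead of a writer thread plus an acker thread sharing locks. The echoing peer below stands in for the downstream DataNode.

```python
import selectors
import socket

def pipelined_write(payloads: list[bytes]) -> bytes:
    """Send all payloads and collect their echoed 'acks', all from one thread."""
    a, b = socket.socketpair()
    a.setblocking(False)
    b.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(a, selectors.EVENT_READ | selectors.EVENT_WRITE, "client")
    sel.register(b, selectors.EVENT_READ, "downstream")
    to_send = list(payloads)
    expected = sum(len(p) for p in payloads)
    acked = b""
    while len(acked) < expected:
        for key, events in sel.select(timeout=5):
            if key.data == "client":
                if events & selectors.EVENT_WRITE and to_send:
                    key.fileobj.send(to_send.pop(0))   # send side
                if events & selectors.EVENT_READ:
                    acked += key.fileobj.recv(4096)    # receive (ack) side
            elif events & selectors.EVENT_READ:
                # The 'downstream' peer acks each packet by echoing it back.
                key.fileobj.send(key.fileobj.recv(4096))
    for s in (a, b):
        sel.unregister(s)
        s.close()
    sel.close()
    return acked
```

There is no cross-thread message passing or lock sharing to reason about: readiness events from the selector sequence the sends and receives within a single loop, which is the error-proneness argument made in the issue.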
[jira] Commented: (HDFS-842) Serialize NN edits log as avro records
[ https://issues.apache.org/jira/browse/HDFS-842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12796408#action_12796408 ] Jeff Hammerbacher commented on HDFS-842: Could possibly be useful for repairing knackered edits logs. Serialize NN edits log as avro records -- Key: HDFS-842 URL: https://issues.apache.org/jira/browse/HDFS-842 Project: Hadoop HDFS Issue Type: New Feature Components: name-node Reporter: Todd Lipcon Right now, the edits log is a mishmash of ad-hoc serialization and Writables. Switching it over to Avro records would be really useful for operator tools - an offline edits viewer would become trivial (avrocat)
[jira] Updated: (HDFS-598) Eclipse launch task for HDFS
[ https://issues.apache.org/jira/browse/HDFS-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Hammerbacher updated HDFS-598: --- Tags: cloudera Eclipse launch task for HDFS Key: HDFS-598 URL: https://issues.apache.org/jira/browse/HDFS-598 Project: Hadoop HDFS Issue Type: Improvement Components: build Environment: Eclipse 3.5 Reporter: Eli Collins Priority: Trivial Attachments: hdfs-598.patch Porting HDFS launch task from HADOOP-5911. See MAPREDUCE-905.
[jira] Commented: (HDFS-506) Incorrect UserName at Solaris because it has no whoami command by default
[ https://issues.apache.org/jira/browse/HDFS-506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12735833#action_12735833 ] Jeff Hammerbacher commented on HDFS-506: I think id -nu is the correct command on Solaris. Incorrect UserName at Solaris because it has no whoami command by default --- Key: HDFS-506 URL: https://issues.apache.org/jira/browse/HDFS-506 Project: Hadoop HDFS Issue Type: Bug Components: build Affects Versions: 0.20.1 Environment: OS: SunOS 5.10 Reporter: Urko Benito Original Estimate: 24h Remaining Estimate: 24h The Solaris environment has no __whoami__ command, so __getUnixUserName()__ in the UnixUserGroupInformation class fails: it calls Shell.USER_NAME_COMMAND, which is defined as whoami. It therefore throws an Exception and sets the default DrWho username, ignoring all the FileSystem permissions.
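The fix suggested in the comment amounts to falling back from whoami to id -nu when the former is missing. A hypothetical sketch of that fallback in Python (Hadoop's actual fix lives in its Shell/UserGroupInformation code, not here):

```python
import subprocess

def unix_user_name() -> str:
    """Try `whoami` first; on systems without it (e.g. stock Solaris), fall back to `id -nu`."""
    for cmd in (["whoami"], ["id", "-nu"]):
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            name = out.stdout.strip()
            if name:
                return name
        except (OSError, subprocess.CalledProcessError):
            continue  # command missing or failed; try the next candidate
    return "DrWho"  # last-resort default, mirroring the behavior described above
```

The key point is that the command list is tried in order, so a platform missing one tool still resolves a real username instead of silently degrading to the DrWho default.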