[jira] [Commented] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479658#comment-13479658
 ] 

Aaron T. Myers commented on HDFS-4076:
--

Can I please have a few days to review this code and the updated design doc? 
Thanks.

Also, I assume that this will be checked into the HDFS-2802 branch? If so, I 
think we should create an HDFS-2802 fix version and set that as the 
target/fix version for the sub-tasks of HDFS-2802. If you guys agree, I can 
take care of that.

 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch


 The snapshot of a file shares the blocks with the original file in order to 
 avoid copying data. However, the snapshot file has its own metadata so that 
 it can have independent permissions, replication, access time, etc.
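The share-blocks-but-own-metadata idea in this description can be sketched as follows. This is an illustrative data structure only, not the actual HDFS INode classes; the class and field names are invented:

```java
import java.util.List;

// Illustrative sketch (not the real HDFS INode hierarchy): a file snapshot
// points at the *same* block list as the original, while keeping its own
// copy of the mutable metadata (permission, replication, access time).
class FileNode {
    final List<String> blocks;   // shared by reference with snapshots
    short permission;
    short replication;
    long accessTime;

    FileNode(List<String> blocks, short permission, short replication, long accessTime) {
        this.blocks = blocks;
        this.permission = permission;
        this.replication = replication;
        this.accessTime = accessTime;
    }

    /** A snapshot shares the block list but owns independent metadata. */
    FileNode snapshot() {
        return new FileNode(blocks, permission, replication, accessTime);
    }
}
```

Changing the replication on the snapshot leaves the original untouched, while both nodes still reference the same blocks, so no data is copied.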

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4048) Use ERROR instead of INFO for volume failure logs

2012-10-19 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-4048:
--

Attachment: HDFS-4048.patch.trunk.2
HDFS-4048.patch.branch-2.2

Attached HDFS-4048.patch.branch-2.2 and HDFS-4048.patch.trunk.2.

I changed the log level to WARN. I also changed the INFO log in 
FsVolumeList#checkDirs to WARN.
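The intent of the patch can be sketched like this. This is a minimal stand-in using java.util.logging rather than Hadoop's actual logging classes, and the class and method names below are invented for illustration:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch of the fix: emit volume-failure messages at WARN
// rather than INFO, so admins who routinely grep logs for WARN/ERROR
// actually see them.
public class VolumeCheck {
    private static final Logger LOG = Logger.getLogger(VolumeCheck.class.getName());

    /** Returns the subset of directories that are missing or unreadable. */
    static List<File> checkDirs(List<File> dirs) {
        List<File> failed = new ArrayList<>();
        for (File dir : dirs) {
            if (!dir.isDirectory() || !dir.canRead()) {
                // Before the patch this kind of message was logged at INFO;
                // WARNING makes the failure visible in a WARN/ERROR log scan.
                LOG.log(Level.WARNING, "Cannot access storage directory " + dir);
                failed.add(dir);
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        // Expect one failure when the path does not exist on this machine.
        System.out.println(checkDirs(List.of(new File("/definitely/missing/dir"))).size());
    }
}
```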

 Use ERROR instead of INFO for volume failure logs
 -

 Key: HDFS-4048
 URL: https://issues.apache.org/jira/browse/HDFS-4048
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Assignee: Stephen Chu
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4048.patch.branch-2, HDFS-4048.patch.branch-2.2, 
 HDFS-4048.patch.trunk, HDFS-4048.patch.trunk.2


 I misconfigured the permissions of the DataNode data directories (they were 
 owned by root, instead of hdfs).
 I wasn't aware of this misconfiguration until a few days later. I usually 
 search through the logs for WARN and ERROR but didn't find messages at these 
 levels that indicated volume failure.
 After more carefully reading the logs, I found:
 {code}
 2012-10-01 13:07:10,440 INFO org.apache.hadoop.hdfs.server.common.Storage: 
 Cannot access storage directory /data/4/dfs/dn
 2012-10-01 13:07:10,440 INFO org.apache.hadoop.hdfs.server.common.Storage: 
 Storage directory /data/4/dfs/dn does not exist.
 {code}
 I think we should bump the log level to ERROR. This will make the problem 
 more visible to users.



[jira] [Updated] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4076:
--

Affects Version/s: HDFS-2802

 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch





[jira] [Commented] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479665#comment-13479665
 ] 

Suresh Srinivas commented on HDFS-4076:
---

bq. If so, I think we should create an HDFS-2802 fix version 
Good idea. Created one and changed this jira appropriately.

As regards waiting for this code to be checked in: this is being done 
incrementally rather than as one big jumbo patch. There are other patches 
dependent on this one, and waiting a couple of days impedes progress. As we did 
in other feature branches, if you miss out on reviewing individual patches, you 
get time to review them at merge time. Also, please feel free to comment after 
the fact; it will be taken care of.

 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch





[jira] [Comment Edited] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479665#comment-13479665
 ] 

Suresh Srinivas edited comment on HDFS-4076 at 10/19/12 6:48 AM:
-

bq. If so, I think we should create an HDFS-2802 fix version 
Good idea. Created a version for that and changed this jira appropriately.

As regards waiting for this code to be checked in: this is being done 
incrementally rather than as one big jumbo patch. There are other patches 
dependent on this one, and waiting a couple of days impedes progress. As we did 
in other feature branches, if you miss out on reviewing individual patches, you 
get time to review them at merge time. Also, please feel free to comment after 
the fact; it will be taken care of.

  was (Author: sureshms):
bq. If so, I think we should create an HDFS-2802 fix version 
Good idea. Create one and changed this jira appropriately.

As regards to waiting for this code to be checked in, I think this is being 
done incrementally instead of one big jumbo patch. There are other patches 
dependent on this and waiting for couple of days impedes progress. As we did in 
other feature branches if you miss out on reviewing individual patches, you get 
time to review it during merge time. Also please feel free to comment after the 
fact. It will be taken care of.
  
 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch





[jira] [Created] (HDFS-4082) Add editlog opcodes for editlog related operations

2012-10-19 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4082:
-

 Summary: Add editlog opcodes for editlog related operations
 Key: HDFS-4082
 URL: https://issues.apache.org/jira/browse/HDFS-4082
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas


This jira tracks recording snapshot-related operations into the editlog and 
fsimage, and reading them back during startup.
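To make the description concrete: each namespace operation is recorded in the edit log under a numeric opcode, so snapshot operations need opcodes of their own. The sketch below is hypothetical; the names and byte values are invented for illustration, not the ones this jira actually adds:

```java
// Hypothetical sketch of "adding editlog opcodes": every durable namespace
// mutation carries an opcode byte in the edit log, and replay during startup
// dispatches on it. Snapshot operations get new opcodes of their own.
enum EditLogOpCode {
    OP_ADD((byte) 0),
    OP_DELETE((byte) 2),
    // new snapshot-related opcodes (illustrative values only):
    OP_CREATE_SNAPSHOT((byte) 40),
    OP_DELETE_SNAPSHOT((byte) 41);

    final byte code;

    EditLogOpCode(byte code) {
        this.code = code;
    }
}
```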



[jira] [Commented] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2012-10-19 Thread Ahad Rana (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479668#comment-13479668
 ] 

Ahad Rana commented on HDFS-4043:
-

Hi Brahma,

Please disregard my last suggestion. Setting dfs.namenode.kerberos.principal or 
dfs.namenode.kerberos.internal.spnego.principal to an explicit principal name 
(instead of a pattern name with _HOST in it) triggers other bugs (see 
HDFS-4081). The bottom line is that it is probably best to set the hostname of 
the namenode to exactly match the name returned by a reverse DNS query 
(getCanonicalName). You are right, however, that your problems are a 
manifestation of the same general bug (inconsistent resolution of the canonical 
principal name via different code paths). Incoming IP-based connections 
definitely need to use getCanonicalName to get back a host name that can be 
used to form the proper principal name; otherwise you would probably have to go 
with IP-based principal names.

As mentioned above, I have reverted to setting the internal hostname of the 
namenodes/secondary namenodes to exactly match the fully qualified hostname 
returned via reverse DNS, and so far things seem to be working properly.

 Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
 principal name.
 

 Key: HDFS-4043
 URL: https://issues.apache.org/jira/browse/HDFS-4043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
 Environment: CDH4U1 on Ubuntu 12.04
Reporter: Ahad Rana
   Original Estimate: 24h
  Remaining Estimate: 24h

 The Namenode uses the loginAsNameNodeUser method in NameNode.java to login 
 using the hdfs principal. This method in turn invokes SecurityUtil.login with 
 a hostname (last parameter) obtained via a call to InetAddress.getHostName. 
 This call does not always return the fully qualified host name, and thus 
 causes the namenode to login to fail due to kerberos's inability to find a 
 matching hdfs principal in the hdfs.keytab file. Instead it should use 
 InetAddress.getCanonicalHostName. This is consistent with what is used 
 internally by SecurityUtil.java to login in other services, such as the 
 DataNode. 
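The distinction at the heart of this bug can be demonstrated with plain JDK calls. This is a standalone demo, not NameNode code; the output depends on the machine's DNS configuration:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// InetAddress.getHostName() may return a short, unqualified name, while
// getCanonicalHostName() performs a reverse DNS lookup and returns the fully
// qualified name. A keytab entry like hdfs/host.example.com@REALM only
// matches a principal built from the fully qualified form.
public class HostNameDemo {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress addr = InetAddress.getLocalHost();
        System.out.println("getHostName():          " + addr.getHostName());
        System.out.println("getCanonicalHostName(): " + addr.getCanonicalHostName());
        // On a misconfigured host the first may print a short name such as
        // "nn1" while the second prints "nn1.example.com"; only the latter
        // forms a principal that matches the keytab.
    }
}
```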



[jira] [Commented] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479669#comment-13479669
 ] 

Aaron T. Myers commented on HDFS-4076:
--

bq. Good idea. Created a version for that and changed this jira appropriately.

Thanks, Suresh.

I'd still prefer to wait a few days before checking this in. Folks often ask 
for this and it's usually honored. I'd prefer not to wait until merge time to 
give feedback on this design and implementation, since, as we saw recently, 
merge time is often so late as to make changes to the design difficult. I 
haven't yet had a chance to thoroughly review the updated design doc. I hope 
you appreciate my desire to fully understand the intended design before a lot 
of time is invested in implementing it.

I suppose if you're so champing at the bit to get this checked in to the branch 
that you can't wait a few days, then I can live with that, but waiting a few 
days would be appreciated.

 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch





[jira] [Commented] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479675#comment-13479675
 ] 

Suresh Srinivas commented on HDFS-4076:
---

bq. I suppose if you're so champing at the bit to get this checked in to the 
branch that you can't wait a few days, then I can live with that, but waiting 
a few days would be appreciated.
There are a couple of jiras pending on this one. One thing we could do is use 
github so we can make progress, but that is something I am trying to avoid.

bq. I'd prefer to not wait until merge time to give feedback on this design and 
implementation
That is why there is a meeting, which Nicholas indicated will happen the week 
of Oct 29th, where the design will be presented and discussed. Hopefully you 
will be able to attend.



 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch





[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479680#comment-13479680
 ] 

Aaron T. Myers commented on HDFS-4081:
--

This looks like a duplicate of HDFS-2264 to me. Please see the prior discussion 
there. Ahad, if you agree, let's close this JIRA as a duplicate.

 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana

 The Namenode protocol (NamenodeProtocol.java) defines the same config key, 
 dfs.namenode.kerberos.principal, for both the ServerPrincipal and 
 ClientPrincipal components of the KerberosInfo data structure. This overloads 
 the meaning of the dfs.namenode.kerberos.principal config key. The key can be 
 used to define the namenode's principal during startup, but in the client 
 case it is used by ServiceAuthorizationManager.authorize to create a 
 principal name given an incoming client's IP address. If you explicitly set 
 the principal name for the namenode in the Config using this key, it then 
 breaks ServiceAuthorizationManager.authorize, because authorize expects this 
 same value to contain a Kerberos principal name pattern, NOT an explicit 
 name. To solve this issue, the ServerPrincipal and ClientPrincipal components 
 of the NamenodeProtocol should each be assigned unique Config keys.
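The proposed fix can be sketched with a minimal stand-in for the annotation. The annotation below is declared locally for illustration (it mimics, but is not, Hadoop's real @KerberosInfo), and the client-side key name is hypothetical:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Minimal stand-in for Hadoop's @KerberosInfo, illustrating the fix: the
// protocol names *different* config keys for the server and client
// principals instead of reusing dfs.namenode.kerberos.principal for both.
@Retention(RetentionPolicy.RUNTIME)
@interface KerberosInfo {
    String serverPrincipal();
    String clientPrincipal() default "";
}

@KerberosInfo(
    serverPrincipal = "dfs.namenode.kerberos.principal",
    // Hypothetical dedicated client key: authorize() can keep expecting a
    // _HOST pattern here even when the server key holds an explicit name.
    clientPrincipal = "dfs.namenode.kerberos.client.principal")
interface NamenodeProtocolSketch {
}
```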



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2012-10-19 Thread Kevin Lyda (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479681#comment-13479681
 ] 

Kevin Lyda commented on HDFS-1312:
--

Continuing on Eli's comment, modifying the placement policy also fails to 
handle deletions.

I'm currently experiencing this on my cluster, where the first data dir is both 
smaller and getting more of the data (for reasons I'm still trying to figure 
out; it might be due to how the machines were configured historically). The 
offline rebalance script sounds like a good first step.

 Re-balance disks within a Datanode
 --

 Key: HDFS-1312
 URL: https://issues.apache.org/jira/browse/HDFS-1312
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Reporter: Travis Crawford

 Filing this issue in response to "full disk woes" on hdfs-user.
 Datanodes fill their storage directories unevenly, leading to situations 
 where certain disks are full while others are significantly less used. Users 
 at many different sites have experienced this issue, and HDFS administrators 
 are taking steps like:
 - Manually rebalancing blocks in storage directories
 - Decommissioning nodes and later re-adding them
 There's a tradeoff between making use of all available spindles and filling 
 disks at roughly the same rate. Possible solutions include:
 - Weighting less-used disks heavier when placing new blocks on the datanode. 
 In write-heavy environments this will still make use of all spindles, 
 equalizing disk use over time.
 - Rebalancing blocks locally. This would help equalize disk use as disks are 
 added/replaced in older cluster nodes.
 Datanodes should actively manage their local disk so operator intervention is 
 not needed.
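The first proposed solution (weighting less-used disks heavier for new blocks) can be sketched as follows. This is an illustrative policy only, not the DataNode's actual volume-choosing code; the class and method names are invented:

```java
import java.util.Random;

// Sketch of "weight less-used disks heavier": choose the next volume with
// probability proportional to its free space, so emptier disks absorb more
// new blocks and per-disk usage converges over time in write-heavy clusters.
class AvailableSpaceChooser {
    private final Random rand = new Random();

    /** freeBytes[i] is the free space on volume i; returns the chosen index. */
    int choose(long[] freeBytes) {
        long total = 0;
        for (long f : freeBytes) {
            total += f;
        }
        // Pick a point in [0, total) and find which volume's interval it
        // falls into; larger free space means a larger interval.
        long pick = (long) (rand.nextDouble() * total);
        for (int i = 0; i < freeBytes.length; i++) {
            pick -= freeBytes[i];
            if (pick < 0) {
                return i;
            }
        }
        return freeBytes.length - 1; // rounding fallback
    }
}
```

A full volume would have zero free space and therefore zero probability of receiving new blocks, which directly addresses the "certain disks are full" symptom.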



[jira] [Updated] (HDFS-4002) Tool-ize OfflineImageViewer and make sure it returns proper return codes upon exit

2012-10-19 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-4002:
--

Attachment: HDFS-4002.patch.trunk

Submitted a patch for trunk.

OIV now implements the Tool interface.

I added unit tests to test the command line arguments and verify return codes 
for successful and unsuccessful parse.

I manually tested the OIV using all the processors.
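The Tool-interface pattern the patch applies can be sketched with a minimal stand-in (rather than Hadoop's real org.apache.hadoop.util.Tool/ToolRunner); the class name and usage string below are invented for illustration:

```java
// Sketch of the Tool pattern: run() returns an exit code and main()
// propagates it via System.exit, so a failed argument parse yields a
// nonzero process status instead of the old unconditional 0.
public class OivSketch {
    /** Mirrors Tool#run(String[]): 0 on success, nonzero on failure. */
    public int run(String[] args) {
        if (args.length < 2) {
            System.err.println("usage: oiv <fsimage> <output>");
            return -1; // bad arguments: nonzero exit code
        }
        // ... process the image with the selected processor ...
        return 0;
    }

    public static void main(String[] args) {
        System.exit(new OivSketch().run(args));
    }
}
```

With this shape, scripts can check `$?` to detect a failed parse, which is exactly what the unit tests for the command-line arguments verify.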

 Tool-ize OfflineImageViewer and make sure it returns proper return codes upon 
 exit
 --

 Key: HDFS-4002
 URL: https://issues.apache.org/jira/browse/HDFS-4002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Stephen Chu
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: HDFS-4002.patch.trunk


 We should make OfflineImageViewer structured (code-wise) in the same way as 
 OfflineEditsViewer is. In particular, OIV must implement the Tool interface 
 and must return proper exit codes upon success/failure conditions. Right now, 
 it returns 0 for both successful and unsuccessful parses.



[jira] [Updated] (HDFS-4002) Tool-ize OfflineImageViewer and make sure it returns proper return codes upon exit

2012-10-19 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-4002:
--

Fix Version/s: 3.0.0

 Tool-ize OfflineImageViewer and make sure it returns proper return codes upon 
 exit
 --

 Key: HDFS-4002
 URL: https://issues.apache.org/jira/browse/HDFS-4002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Stephen Chu
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: HDFS-4002.patch.trunk





[jira] [Updated] (HDFS-4002) Tool-ize OfflineImageViewer and make sure it returns proper return codes upon exit

2012-10-19 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-4002:
--

Status: Patch Available  (was: Open)

 Tool-ize OfflineImageViewer and make sure it returns proper return codes upon 
 exit
 --

 Key: HDFS-4002
 URL: https://issues.apache.org/jira/browse/HDFS-4002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Stephen Chu
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: HDFS-4002.patch.trunk





[jira] [Commented] (HDFS-2264) NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo annotation

2012-10-19 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479684#comment-13479684
 ] 

Jitendra Nath Pandey commented on HDFS-2264:


Hey Aaron, sorry for taking so long to respond. I think the general issue here 
is that, for these protocols, an annotation for a single client is too 
restrictive. We should support being able to configure multiple clients, or a 
group.


 NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo 
 annotation
 ---

 Key: HDFS-2264
 URL: https://issues.apache.org/jira/browse/HDFS-2264
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Aaron T. Myers
Assignee: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2264.r1.diff


 The {{@KerberosInfo}} annotation specifies the expected server and client 
 principals for a given protocol in order to look up the correct principal 
 name from the config. The {{NamenodeProtocol}} has the wrong value for the 
 client config key. This wasn't noticed because most setups actually use the 
 same *value* for both the NN and 2NN principals ({{hdfs/_HOST@REALM}}), 
 in which the {{_HOST}} part gets replaced at run-time. This bug therefore 
 only manifests itself on secure setups which explicitly specify the NN and 
 2NN principals.



[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479685#comment-13479685
 ] 

Jitendra Nath Pandey commented on HDFS-4081:


bq. This looks like a duplicate of HDFS-2264 to me. Please see the prior 
discussion there. Ahad, if you agree, let's close this JIRA as a duplicate.
+1

 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana




[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Ahad Rana (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479686#comment-13479686
 ] 

Ahad Rana commented on HDFS-4081:
-

Hi Aaron,

You are right. This is a dupe of HDFS-2264. Do you want to mark it as such,
or would you like me to do it?

Thanks,

Ahad




 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana




[jira] [Resolved] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-4081.
--

Resolution: Duplicate

 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana




[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479687#comment-13479687
 ] 

Aaron T. Myers commented on HDFS-4081:
--

I've just resolved this as a duplicate.

 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana

 The Namenode protocol (NamenodeProtocol.java) defines the same config key, 
 dfs.namenode.kerberos.principal, for both ServerPrincipal and ClientPrincipal 
 components of the KerberosInfo data structure. This overloads the meaning of 
 the dfs.namenode.kerberos.principal config key. This key can be used to 
 define the namenode's principal during startup, but in the client case, it is 
 used by ServiceAuthorizationManager.authorize to create a principal name 
 given an incoming client's ip address. If you explicitly set the principal 
 name for the namenode in the Config using this key, it then breaks 
 ServiceAuthorizationManager.authorize, because it expects this same value to 
 contain a Kerberos principal name pattern NOT an explicit name. 
 To solve this issue, the ServerPrincipal and ClientPrincipal components of 
 the NamenodeProtocol should each be assigned unique Config keys.
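To make the pattern-vs-explicit-name distinction concrete, here is a minimal, self-contained sketch of the substitution step the description refers to. The class and method names below are illustrative only, not the actual Hadoop API:

```java
// Illustrative sketch only: these names are NOT the actual Hadoop API.
// A principal *pattern* such as "hdfs/_HOST@EXAMPLE.COM" can have its
// _HOST token replaced per connection, while an explicit principal name
// cannot -- which is why one config key cannot serve both roles.
public class PrincipalPattern {
    static final String HOST_TOKEN = "_HOST";

    /** Substitute _HOST in a principal pattern; explicit names pass through unchanged. */
    static String resolve(String principalConfig, String hostname) {
        if (principalConfig == null || !principalConfig.contains(HOST_TOKEN)) {
            return principalConfig;  // explicit name: nothing to substitute
        }
        return principalConfig.replace(HOST_TOKEN, hostname);
    }

    public static void main(String[] args) {
        // Pattern form: a per-host principal can be derived for the peer.
        System.out.println(resolve("hdfs/_HOST@EXAMPLE.COM", "nn1.example.com"));
        // Explicit form: the fixed name comes back regardless of the peer,
        // which breaks a per-host authorization check.
        System.out.println(resolve("hdfs/nn1.example.com@EXAMPLE.COM", "dn7.example.com"));
    }
}
```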

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4083:
-

 Summary: Protocol changes for snapshots
 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-2802
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas


This jira addresses the protobuf .proto definitions, the Java protocol classes, 
and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4083:
--

Attachment: HDFS-4083.patch

Early proto definitions. The patch does not compile yet. Next steps: add Java 
class support for the changed interface.

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: HDFS-2802
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch


 This jira addresses the protobuf .proto definitions, the Java protocol classes, 
 and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4083:
--

Attachment: (was: HDFS-4083.patch)

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: HDFS-2802
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch


 This jira addresses the protobuf .proto definitions, the Java protocol classes, 
 and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4083:
--

Attachment: HDFS-4083.patch

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: HDFS-2802
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch


 This jira addresses the protobuf .proto definitions, the Java protocol classes, 
 and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Ahad Rana (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479780#comment-13479780
 ] 

Ahad Rana commented on HDFS-4081:
-

Hi Aaron,

Upon further investigation, I think my bug and HDFS-2264 are reaching
different conclusions as to why clientProtocol should be represented by a
different config key. I wonder if there are two different bugs surfacing
here?

Ahad.




 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana

 The Namenode protocol (NamenodeProtocol.java) defines the same config key, 
 dfs.namenode.kerberos.principal, for both ServerPrincipal and ClientPrincipal 
 components of the KerberosInfo data structure. This overloads the meaning of 
 the dfs.namenode.kerberos.principal config key. This key can be used to 
 define the namenode's principal during startup, but in the client case, it is 
 used by ServiceAuthorizationManager.authorize to create a principal name 
 given an incoming client's ip address. If you explicitly set the principal 
 name for the namenode in the Config using this key, it then breaks 
 ServiceAuthorizationManager.authorize, because it expects this same value to 
 contain a Kerberos principal name pattern NOT an explicit name. 
 To solve this issue, the ServerPrincipal and ClientPrincipal components of 
 the NamenodeProtocol should each be assigned unique Config keys.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4083:
--

Attachment: (was: HDFS-4083.patch)

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: HDFS-2802
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas

 This jira addresses the protobuf .proto definitions, the Java protocol classes, 
 and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4083:
--

Attachment: HDFS-4083.patch

Update

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: HDFS-2802
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch


 This jira addresses the protobuf .proto definitions, the Java protocol classes, 
 and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4072) On file deletion remove corresponding blocks pending replication

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479886#comment-13479886
 ] 

Suresh Srinivas commented on HDFS-4072:
---

Jing, would this change be needed for branch-1 as well?


 On file deletion remove corresponding blocks pending replication
 

 Key: HDFS-4072
 URL: https://issues.apache.org/jira/browse/HDFS-4072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4072.patch, HDFS-4072.trunk.001.patch, 
 HDFS-4072.trunk.002.patch, HDFS-4072.trunk.003.patch, 
 HDFS-4072.trunk.004.patch, TestPendingAndDelete.java


 Currently when deleting a file, blockManager does not remove records 
 corresponding to the file's blocks from pendingReplications. These records are 
 only removed after a timeout (5~10 min).
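A toy model of the proposed fix, assuming a simplified pending-replication map (the names below are illustrative, not the actual BlockManager/PendingReplicationBlocks API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy model of the fix: on file deletion, the pending-replication
// records for the file's blocks are dropped immediately instead of
// lingering until the 5~10 minute timeout.
public class PendingReplicationSketch {
    // blockId -> number of replicas still expected
    private final Map<Long, Integer> pending = new HashMap<>();

    void addPending(long blockId, int expectedReplicas) {
        pending.put(blockId, expectedReplicas);
    }

    /** Called from the delete path: remove every pending record of the file's blocks. */
    void removeOnDelete(Set<Long> deletedFileBlocks) {
        for (long blockId : deletedFileBlocks) {
            pending.remove(blockId);
        }
    }

    int size() {
        return pending.size();
    }

    public static void main(String[] args) {
        PendingReplicationSketch p = new PendingReplicationSketch();
        p.addPending(1L, 2);
        p.addPending(2L, 1);
        p.removeOnDelete(Set.of(1L));   // file owning block 1 was deleted
        System.out.println(p.size());   // only block 2 remains pending
    }
}
```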

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2012-10-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479921#comment-13479921
 ] 

Steve Loughran commented on HDFS-1312:
--

@Kevin: loss of a single disk is an event that not only preserves the rest of 
the data on the server; the server keeps going. You get 1-3TB of network 
traffic as the under-replicated data is re-replicated, but that's all. 

 Re-balance disks within a Datanode
 --

 Key: HDFS-1312
 URL: https://issues.apache.org/jira/browse/HDFS-1312
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Reporter: Travis Crawford

 Filing this issue in response to ``full disk woes`` on hdfs-user.
 Datanodes fill their storage directories unevenly, leading to situations 
 where certain disks are full while others are significantly less used. Users 
 at many different sites have experienced this issue, and HDFS administrators 
 are taking steps like:
 - Manually rebalancing blocks in storage directories
 - Decommissioning nodes & later re-adding them
 There's a tradeoff between making use of all available spindles, and filling 
 disks at the sameish rate. Possible solutions include:
 - Weighting less-used disks heavier when placing new blocks on the datanode. 
 In write-heavy environments this will still make use of all spindles, 
 equalizing disk use over time.
 - Rebalancing blocks locally. This would help equalize disk use as disks are 
 added/replaced in older cluster nodes.
 Datanodes should actively manage their local disk so operator intervention is 
 not needed.
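As a sketch of the first option above ("weight less-used disks heavier"), reduced to its simplest form: place the next block on the volume with the most free space that can hold it. This is illustrative only, since the datanode's actual volume-choosing logic is a separate, pluggable mechanism:

```java
import java.util.List;

// Greedy "emptiest disk first" placement, the simplest weighting scheme.
public class FreeSpaceVolumePicker {
    /** Index of the chosen volume, or -1 if none has room for the block. */
    static int pickVolume(List<Long> freeBytesPerVolume, long blockSize) {
        int best = -1;
        long bestFree = -1;
        for (int i = 0; i < freeBytesPerVolume.size(); i++) {
            long free = freeBytesPerVolume.get(i);
            if (free >= blockSize && free > bestFree) {
                bestFree = free;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three disks with 10 GB, 200 GB and 80 GB free; a 128 MB block
        // goes to the emptiest disk (index 1).
        List<Long> free = List.of(10L << 30, 200L << 30, 80L << 30);
        System.out.println(pickVolume(free, 128L << 20));
    }
}
```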

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4002) Tool-ize OfflineImageViewer and make sure it returns proper return codes upon exit

2012-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479896#comment-13479896
 ] 

Hadoop QA commented on HDFS-4002:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549817/HDFS-4002.patch.trunk
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3368//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3368//console

This message is automatically generated.

 Tool-ize OfflineImageViewer and make sure it returns proper return codes upon 
 exit
 --

 Key: HDFS-4002
 URL: https://issues.apache.org/jira/browse/HDFS-4002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Stephen Chu
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: HDFS-4002.patch.trunk


 We should make OfflineImageViewer structured (code-wise) in the same way as 
 OfflineEditsViewer is. Particularly, OIV must implement the Tool interface, 
 and must return proper exit codes upon success/failure conditions. Right now, 
 it returns 0 in both successful and unsuccessful parses.
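The shape of the requested change, sketched here without the Hadoop dependency: the real patch would implement org.apache.hadoop.util.Tool and launch through ToolRunner, but the essential contract is the exit code:

```java
// run() returns 0 on success and non-zero on failure, and main()
// propagates that code via System.exit -- the behavior the OIV lacks.
public class OivExitCodeSketch {
    /** Stand-in for Tool.run(String[]). */
    int run(String[] args) {
        if (args.length < 1) {
            System.err.println("usage: oiv <image-file>");
            return 1;   // failure: bad arguments
        }
        // ... parse the fsimage file here; return non-zero on parse errors ...
        return 0;       // success
    }

    public static void main(String[] args) {
        System.exit(new OivExitCodeSketch().run(args));
    }
}
```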

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4074) Remove empty constructors for INode

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479933#comment-13479933
 ] 

Hudson commented on HDFS-4074:
--

Integrated in Hadoop-Yarn-trunk #8 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/8/])
HDFS-4074. Remove the unused default constructor from INode.  Contributed 
by Brandon Li (Revision 1399866)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399866
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java


 Remove empty constructors for INode
 ---

 Key: HDFS-4074
 URL: https://issues.apache.org/jira/browse/HDFS-4074
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4074.patch


 Code cleanup: remove empty constructors for INode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4073) Two minor improvements to FSDirectory

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479934#comment-13479934
 ] 

Hudson commented on HDFS-4073:
--

Integrated in Hadoop-Yarn-trunk #8 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/8/])
HDFS-4073. Two minor improvements to FSDirectory.  Contributed by Jing Zhao 
(Revision 1399861)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399861
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Two minor improvements to FSDirectory
 -

 Key: HDFS-4073
 URL: https://issues.apache.org/jira/browse/HDFS-4073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 3.0.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Jing Zhao
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4073.trunk.001.patch


 - Add a debug log message to FSDirectory.unprotectedAddFile(..) for the 
 caught IOException.
 - Remove throw UnresolvedLinkException from addToParent(..).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4072) On file deletion remove corresponding blocks pending replication

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479938#comment-13479938
 ] 

Hudson commented on HDFS-4072:
--

Integrated in Hadoop-Yarn-trunk #8 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/8/])
HDFS-4072. On file deletion remove corresponding blocks pending 
replications. Contributed by Jing Zhao. (Revision 1399965)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399965
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java


 On file deletion remove corresponding blocks pending replication
 

 Key: HDFS-4072
 URL: https://issues.apache.org/jira/browse/HDFS-4072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4072.patch, HDFS-4072.trunk.001.patch, 
 HDFS-4072.trunk.002.patch, HDFS-4072.trunk.003.patch, 
 HDFS-4072.trunk.004.patch, TestPendingAndDelete.java


 Currently when deleting a file, blockManager does not remove records 
 corresponding to the file's blocks from pendingReplications. These records are 
 only removed after a timeout (5~10 min).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4053) Increase the default block size

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479939#comment-13479939
 ] 

Hudson commented on HDFS-4053:
--

Integrated in Hadoop-Yarn-trunk #8 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/8/])
HDFS-4053. Increase the default block size. Contributed by Eli Collins 
(Revision 1399908)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399908
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Increase the default block size
 ---

 Key: HDFS-4053
 URL: https://issues.apache.org/jira/browse/HDFS-4053
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 3.0.0

 Attachments: hdfs-4053.txt, hdfs-4053.txt, hdfs-4053.txt


 The default HDFS block size ({{dfs.blocksize}}) has been 64mb forever. 128mb 
 works well in practice on today's hardware configurations, most clusters I 
 work with use it or higher (eg 256mb). Let's bump to 128mb in trunk for v3.
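For clusters that prefer a different value after the default changes, the setting can still be overridden in hdfs-site.xml (the value is in bytes; 134217728 = 128 MB):

```xml
<!-- hdfs-site.xml: explicit block size override -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
</property>
```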

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2012-10-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479954#comment-13479954
 ] 

Brahma Reddy Battula commented on HDFS-4043:


[~ahadr]

Let's go ahead and close this JIRA.
{quote}
You are right however, that your problems are a manifestation of the same 
general bug (inconsistent resolution of canonical principal name via different 
code paths). Most definitely, incoming IP based connections need to use 
getCanonicalName to get back a host name that can be used to form the proper 
principal name. Otherwise you will need to probably go with IP based principal 
names ?
{quote}
Can we discuss this point in HDFS-3980?

 Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
 principal name.
 

 Key: HDFS-4043
 URL: https://issues.apache.org/jira/browse/HDFS-4043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
 Environment: CDH4U1 on Ubuntu 12.04
Reporter: Ahad Rana
   Original Estimate: 24h
  Remaining Estimate: 24h

 The Namenode uses the loginAsNameNodeUser method in NameNode.java to login 
 using the hdfs principal. This method in turn invokes SecurityUtil.login with 
 a hostname (last parameter) obtained via a call to InetAddress.getHostName. 
 This call does not always return the fully qualified host name, and thus 
 causes the namenode login to fail due to Kerberos's inability to find a 
 matching hdfs principal in the hdfs.keytab file. Instead it should use 
 InetAddress.getCanonicalHostName. This is consistent with what is used 
 internally by SecurityUtil.java to login in other services, such as the 
 DataNode. 
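The difference between the two lookups can be seen with a small standalone check. The output depends on the local resolver configuration, and the hostnames in the comments are illustrative:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// getHostName() may return a short name (e.g. "nn1"), while
// getCanonicalHostName() performs a reverse lookup and returns the fully
// qualified name (e.g. "nn1.example.com") that a host-qualified principal
// such as hdfs/nn1.example.com@EXAMPLE.COM requires.
public class HostNameCheck {
    /** Canonical (fully qualified) name of the local host, or "unknown". */
    static String canonical() {
        try {
            return InetAddress.getLocalHost().getCanonicalHostName();
        } catch (UnknownHostException e) {
            return "unknown";
        }
    }

    public static void main(String[] args) throws UnknownHostException {
        InetAddress addr = InetAddress.getLocalHost();
        System.out.println("getHostName():          " + addr.getHostName());
        System.out.println("getCanonicalHostName(): " + canonical());
    }
}
```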

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4082) Add editlog opcodes for snapshot related operations

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4082:
--

Summary: Add editlog opcodes for snapshot related operations  (was: Add 
editlog opcodes for editlog related operations)

 Add editlog opcodes for snapshot related operations
 ---

 Key: HDFS-4082
 URL: https://issues.apache.org/jira/browse/HDFS-4082
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4082.patch


 This jira tracks snapshot-related operations: recording them into the 
 editlog and fsimage, and reading them back during startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4082) Add editlog opcodes for snapshot related operations

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479960#comment-13479960
 ] 

Suresh Srinivas commented on HDFS-4082:
---

One minor note: currently the Create and Delete snapshot operations look almost 
identical. I want to keep them separate so that the Create snapshot operation 
can evolve to have more fields. If not, we can refactor for better code reuse 
later.

 Add editlog opcodes for snapshot related operations
 ---

 Key: HDFS-4082
 URL: https://issues.apache.org/jira/browse/HDFS-4082
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4082.patch


 This jira tracks snapshot-related operations: recording them into the 
 editlog and fsimage, and reading them back during startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4022) Replication not happening for appended block

2012-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479983#comment-13479983
 ] 

Hadoop QA commented on HDFS-4022:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549972/HDFS-4022.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3369//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3369//console

This message is automatically generated.

 Replication not happening for appended block
 

 Key: HDFS-4022
 URL: https://issues.apache.org/jira/browse/HDFS-4022
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: suja s
Assignee: Uma Maheswara Rao G
Priority: Blocker
 Attachments: HDFS-4022.patch, HDFS-4022.patch, HDFS-4022.patch, 
 HDFS-4022.patch


 Block written and finalized
 Later append called. Block GenTS got changed.
 DN side log 
 Can't send invalid block 
 BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738 
 logged continuously
 NN side log
 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Error report from 
 DatanodeRegistration(192.xx.xx.xx, 
 storageID=DS-2040532042-192.xx.xx.xx-50010-1348830863443, infoPort=50075, 
 ipcPort=50020, storageInfo=lv=-40;cid=123456;nsid=116596173;c=0): Can't send 
 invalid block 
 BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738 also 
 logged continuously.
 The block checked for transfer is the one with the old genTS, whereas the new 
 block with the updated genTS exists in the data dir.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4073) Two minor improvements to FSDirectory

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13479997#comment-13479997
 ] 

Hudson commented on HDFS-4073:
--

Integrated in Hadoop-Hdfs-trunk #1200 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1200/])
HDFS-4073. Two minor improvements to FSDirectory.  Contributed by Jing Zhao 
(Revision 1399861)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399861
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Two minor improvements to FSDirectory
 -

 Key: HDFS-4073
 URL: https://issues.apache.org/jira/browse/HDFS-4073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 3.0.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Jing Zhao
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4073.trunk.001.patch


 - Add a debug log message to FSDirectory.unprotectedAddFile(..) for the 
 caught IOException.
 - Remove throw UnresolvedLinkException from addToParent(..).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4072) On file deletion remove corresponding blocks pending replication

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480001#comment-13480001
 ] 

Hudson commented on HDFS-4072:
--

Integrated in Hadoop-Hdfs-trunk #1200 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1200/])
HDFS-4072. On file deletion remove corresponding blocks pending 
replications. Contributed by Jing Zhao. (Revision 1399965)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399965
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java


 On file deletion remove corresponding blocks pending replication
 

 Key: HDFS-4072
 URL: https://issues.apache.org/jira/browse/HDFS-4072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4072.patch, HDFS-4072.trunk.001.patch, 
 HDFS-4072.trunk.002.patch, HDFS-4072.trunk.003.patch, 
 HDFS-4072.trunk.004.patch, TestPendingAndDelete.java


 Currently when deleting a file, blockManager does not remove records 
 corresponding to the file's blocks from pendingReplications. These records are 
 only removed after a timeout (5~10 min).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4053) Increase the default block size

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480002#comment-13480002
 ] 

Hudson commented on HDFS-4053:
--

Integrated in Hadoop-Hdfs-trunk #1200 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1200/])
HDFS-4053. Increase the default block size. Contributed by Eli Collins 
(Revision 1399908)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399908
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Increase the default block size
 ---

 Key: HDFS-4053
 URL: https://issues.apache.org/jira/browse/HDFS-4053
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 3.0.0

 Attachments: hdfs-4053.txt, hdfs-4053.txt, hdfs-4053.txt


 The default HDFS block size ({{dfs.blocksize}}) has been 64mb forever. 128mb 
 works well in practice on today's hardware configurations; most clusters I 
 work with use it or higher (e.g. 256mb). Let's bump to 128mb in trunk for v3.
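As a back-of-the-envelope illustration of why this matters for namenode memory: doubling the block size halves the number of block records the NN must track for large files. (Plain arithmetic, not Hadoop code.)

```java
public class BlockCountExample {
    static final long MB = 1024L * 1024L;

    // Number of blocks needed to store a file of the given length
    // (ceiling division, since a partial last block still counts).
    public static long blockCount(long fileBytes, long blockSizeBytes) {
        return (fileBytes + blockSizeBytes - 1) / blockSizeBytes;
    }
}
```

For example, a 1 GB file needs 16 blocks at 64mb but only 8 at 128mb.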



[jira] [Commented] (HDFS-4073) Two minor improvements to FSDirectory

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480030#comment-13480030
 ] 

Hudson commented on HDFS-4073:
--

Integrated in Hadoop-Mapreduce-trunk #1230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1230/])
HDFS-4073. Two minor improvements to FSDirectory.  Contributed by Jing Zhao 
(Revision 1399861)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399861
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Two minor improvements to FSDirectory
 -

 Key: HDFS-4073
 URL: https://issues.apache.org/jira/browse/HDFS-4073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 3.0.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Jing Zhao
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HDFS-4073.trunk.001.patch


 - Add a debug log message to FSDirectory.unprotectedAddFile(..) for the 
 caught IOException.
 - Remove throw UnresolvedLinkException from addToParent(..).



[jira] [Commented] (HDFS-4072) On file deletion remove corresponding blocks pending replication

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480034#comment-13480034
 ] 

Hudson commented on HDFS-4072:
--

Integrated in Hadoop-Mapreduce-trunk #1230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1230/])
HDFS-4072. On file deletion remove corresponding blocks pending 
replications. Contributed by Jing Zhao. (Revision 1399965)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399965
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java


 On file deletion remove corresponding blocks pending replication
 

 Key: HDFS-4072
 URL: https://issues.apache.org/jira/browse/HDFS-4072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4072.patch, HDFS-4072.trunk.001.patch, 
 HDFS-4072.trunk.002.patch, HDFS-4072.trunk.003.patch, 
 HDFS-4072.trunk.004.patch, TestPendingAndDelete.java


 Currently when deleting a file, blockManager does not remove the records 
 corresponding to the file's blocks from pendingReplications. These records can 
 only be removed after a timeout (5~10 min).



[jira] [Commented] (HDFS-4053) Increase the default block size

2012-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480035#comment-13480035
 ] 

Hudson commented on HDFS-4053:
--

Integrated in Hadoop-Mapreduce-trunk #1230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1230/])
HDFS-4053. Increase the default block size. Contributed by Eli Collins 
(Revision 1399908)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399908
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Increase the default block size
 ---

 Key: HDFS-4053
 URL: https://issues.apache.org/jira/browse/HDFS-4053
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 3.0.0

 Attachments: hdfs-4053.txt, hdfs-4053.txt, hdfs-4053.txt


 The default HDFS block size ({{dfs.blocksize}}) has been 64mb forever. 128mb 
 works well in practice on today's hardware configurations; most clusters I 
 work with use it or higher (e.g. 256mb). Let's bump to 128mb in trunk for v3.



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2012-10-19 Thread Steve Hoffman (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480055#comment-13480055
 ] 

Steve Hoffman commented on HDFS-1312:
-

Given the general nature of HDFS and its many uses (HBase, M/R, etc) as much as 
I'd like it to just work, it is clear it always depends on the use.  Maybe 
one day we won't need a balancer script for disks (or for the cluster).

I'm totally OK with having a machine-level balancer script.  We use the HDFS 
balancer to fix inter-machine imbalances when they crop up (again, for a 
variety of reasons).  It makes sense to have a manual script for intra-machine 
imbalances for people who DO have issues and make it part of the standard 
install (like the HDFS balancer).

 Re-balance disks within a Datanode
 --

 Key: HDFS-1312
 URL: https://issues.apache.org/jira/browse/HDFS-1312
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Reporter: Travis Crawford

 Filing this issue in response to ``full disk woes`` on hdfs-user.
 Datanodes fill their storage directories unevenly, leading to situations 
 where certain disks are full while others are significantly less used. Users 
 at many different sites have experienced this issue, and HDFS administrators 
 are taking steps like:
 - Manually rebalancing blocks in storage directories
 - Decommissioning nodes & later re-adding them
 There's a tradeoff between making use of all available spindles, and filling 
 disks at the sameish rate. Possible solutions include:
 - Weighting less-used disks heavier when placing new blocks on the datanode. 
 In write-heavy environments this will still make use of all spindles, 
 equalizing disk use over time.
 - Rebalancing blocks locally. This would help equalize disk use as disks are 
 added/replaced in older cluster nodes.
 Datanodes should actively manage their local disk so operator intervention is 
 not needed.
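The first proposed solution ("weighting less-used disks heavier") can be illustrated with a toy placement policy. This is a hypothetical sketch, not the datanode's actual volume-choosing code, and the names are invented:

```java
// Toy placement policy illustrating "weight less-used disks heavier":
// place the next block on the volume with the most free space, so
// writes drain toward emptier disks and usage equalizes over time.
public class VolumeChoiceSketch {
    public static int pickVolume(long[] availableBytes) {
        int best = 0;
        for (int i = 1; i < availableBytes.length; i++) {
            if (availableBytes[i] > availableBytes[best]) {
                best = i;
            }
        }
        return best;
    }
}
```

A production policy would likely randomize with free-space-proportional weights rather than always picking the emptiest disk, so that all spindles stay busy in write-heavy environments.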



[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480102#comment-13480102
 ] 

Owen O'Malley commented on HDFS-4081:
-

Ahad,
  I can understand the problem in HDFS-2264, but the NameNode *should* use the 
same principal for both client and server. If there is a context where the 
_HOST isn't being expanded, that is a problem. Is that what you are hitting?
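For context, the _HOST expansion works roughly as follows (a simplified stand-in for what Hadoop's SecurityUtil does, not the real code): the placeholder in the configured pattern is replaced with the server's hostname, so both sides derive the same principal only if they agree on that hostname.

```java
// Simplified illustration of Kerberos principal pattern expansion:
// the _HOST placeholder in the configured pattern is replaced with the
// hostname of the node in question. If one side expands with a short
// hostname and the other with the canonical DNS name, the resulting
// principals differ and authentication fails.
public class PrincipalPatternSketch {
    public static String expand(String principalPattern, String hostname) {
        return principalPattern.replace("_HOST", hostname.toLowerCase());
    }
}
```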

 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana

 The Namenode protocol (NamenodeProtocol.java) defines the same config key, 
 dfs.namenode.kerberos.principal, for both ServerPrincipal and ClientPrincipal 
 components of the KerberosInfo data structure. This overloads the meaning of 
 the dfs.namenode.kerberos.principal config key. This key can be used to 
 define the namenode's principal during startup, but in the client case, it is 
 used by ServiceAuthorizationManager.authorize to create a principal name 
 given an incoming client's ip address. If you explicitly set the principal 
 name for the namenode in the Config using this key, it then breaks 
 ServiceAuthorizationManager.authorize, because it expects this same value to 
 contain a Kerberos principal name pattern NOT an explicit name. 
 To solve this issue, the ServerPrincipal and ClientPrincipal components of 
 the NamenodeProtocol should each be assigned unique Config keys.



[jira] [Commented] (HDFS-4080) Add an option to disable block-level state change logging

2012-10-19 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480139#comment-13480139
 ] 

Aaron T. Myers commented on HDFS-4080:
--

Would it be possible to just move the logging outside of the FSNS lock? Jing 
did a similar change recently in HDFS-4052.

 Add an option to disable block-level state change logging
 -

 Key: HDFS-4080
 URL: https://issues.apache.org/jira/browse/HDFS-4080
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Kihwal Lee

 Although the block-level logging in namenode is useful for debugging, it can 
 add a significant overhead to busy hdfs clusters since they are done while 
 the namespace write lock is held. One example is shown in HDFS-4075. In this 
 example, the write lock was held for 5 minutes while logging 11 million log 
 messages for 5.5 million block invalidation events. 
 It would be useful to have an option to disable these block-level log 
 messages while keeping other state change messages going.  If others feel that 
 they can be turned into DEBUG (with the addition of isDebugEnabled() checks), that 
 may work too, but there might be people depending on the messages.
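The isDebugEnabled() pattern mentioned above looks like the following, sketched here with java.util.logging rather than the logging API the namenode actually uses: the guard ensures the expensive message string is never built when the level is off, which is what removes the overhead under the write lock.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLoggingSketch {
    private static final Logger LOG = Logger.getLogger("BlockStateChange");

    // Counts how many log strings were actually constructed.
    static int messagesBuilt = 0;

    static String expensiveDetail(long blockId) {
        messagesBuilt++; // string construction is the cost we want to avoid
        return "BLOCK* invalidate: blk_" + blockId;
    }

    // Guarded logging: the message is only built when FINE (~DEBUG) is on.
    public static void logInvalidate(long blockId) {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine(expensiveDetail(blockId));
        }
    }

    // Demo: at the default INFO level, no message strings are built,
    // no matter how many block events are logged.
    public static int builtAtDefaultLevel(int events) {
        for (long i = 0; i < events; i++) {
            logInvalidate(i);
        }
        return messagesBuilt;
    }
}
```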



[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Ahad Rana (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480142#comment-13480142
 ] 

Ahad Rana commented on HDFS-4081:
-

Hi Owen,

We discovered this problem because in our network the nodes set their
hostname to be a short name, and DNS returns the fully qualified host name
(i.e. n01 and n01.region.prod.somecompany.com). The inconsistency arises
because in some cases the NN code uses the Java getHostName call to
retrieve the NN's hostname (returns n01) to form a principal name while
clients connecting to the NN use the Java getCanonicalName call (returns
n01.region) to form principal names. We tried to address this issue by
explicitly setting the NN's principal via *dfs.namenode.kerberos.principal*.

Unfortunately, the key *dfs.namenode.kerberos.principal* has different
meanings depending on the context in which it is used. In one case, the
Namenode uses it to establish its own server principal name. In the other
case, the same Namenode uses the same key to figure out the principal name
to use for an incoming (client) connection. I believe the Hadoop
Security docs I have seen recommend that you create a unique (host
qualified) principal per machine (at least the CDH docs recommend this).
So, in this scenario you have different principal names for the NN and the
2NN (as an example). If someone uses the *dfs.namenode.kerberos.principal* key
to set an explicit principal name for the NN, authentication with the 2NN
breaks because the code in ServiceAuthorizationManager is unable to
construct a proper principal name for the 2NN from the explicit name set
for the NN.

Perhaps the short-term fix is to better document how to use the
*dfs.namenode.kerberos.principal* config key. If you set its value to an
explicit principal name, then you have to use the exact same principal name
across all nodes that try to authenticate with the secured NN protocol. If
you are using host-qualified principal names for each node in the cluster,
then you must specify a pattern-based principal name in
*dfs.namenode.kerberos.principal* that can be used by the NN to both
establish its own principal name and an incoming client's principal name.

We worked around the issue by changing our NN / 2NN hostnames to match the
fully qualified names returned by DNS. Longer term, I would recommend that
we (a) fix the code in the NN to consistently use getCanonicalName whenever
it tries to use a hostname for the purposes of forming a principal name and
(b) perhaps split *dfs.namenode.kerberos.principal* into
*dfs.namenode.kerberos.principal* and
*dfs.namenode.kerberos.client.principal*.

I apologize for the lengthy answer :-)

Ahad.




 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana

 The Namenode protocol (NamenodeProtocol.java) defines the same config key, 
 dfs.namenode.kerberos.principal, for both ServerPrincipal and ClientPrincipal 
 components of the KerberosInfo data structure. This overloads the meaning of 
 the dfs.namenode.kerberos.principal config key. This key can be used to 
 define the namenode's principal during startup, but in the client case, it is 
 used by ServiceAuthorizationManager.authorize to create a principal name 
 given an incoming client's ip address. If you explicitly set the principal 
 name for the namenode in the Config using this key, it then breaks 
 ServiceAuthorizationManager.authorize, because it expects this same value to 
 contain a Kerberos principal name pattern NOT an explicit name. 
 To solve this issue, the ServerPrincipal and ClientPrincipal components of 
 the NamenodeProtocol should each be assigned unique Config keys.



[jira] [Commented] (HDFS-4028) QJM: Merge newEpoch and prepareRecovery

2012-10-19 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480150#comment-13480150
 ] 

Sanjay Radia commented on HDFS-4028:


This has been discussed quite a bit in HDFS-3077. The rationale is 
simplification and staying close to the well-proven ZAB protocol. ZAB/Paxos are 
complex, and if we stick to proven protocols it is easier for folks to 
understand and maintain. Further, in order to make HA rock solid, it makes 
sense to stick to well-proven solutions to the degree that those solutions 
apply.

 QJM: Merge newEpoch and prepareRecovery
 ---

 Key: HDFS-4028
 URL: https://issues.apache.org/jira/browse/HDFS-4028
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Reporter: Sanjay Radia
Assignee: Suresh Srinivas
 Fix For: QuorumJournalManager (HDFS-3077)






[jira] [Updated] (HDFS-4072) On file deletion remove corresponding blocks pending replication

2012-10-19 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4072:


Attachment: HDFS-4072.b1.001.patch

Branch-1 patch. Will run test-patch for it.

 On file deletion remove corresponding blocks pending replication
 

 Key: HDFS-4072
 URL: https://issues.apache.org/jira/browse/HDFS-4072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4072.b1.001.patch, HDFS-4072.patch, 
 HDFS-4072.trunk.001.patch, HDFS-4072.trunk.002.patch, 
 HDFS-4072.trunk.003.patch, HDFS-4072.trunk.004.patch, 
 TestPendingAndDelete.java


 Currently when deleting a file, blockManager does not remove the records 
 corresponding to the file's blocks from pendingReplications. These records can 
 only be removed after a timeout (5~10 min).



[jira] [Commented] (HDFS-4025) QJM: Sychronize past log segments to JNs that missed them

2012-10-19 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480181#comment-13480181
 ] 

Sanjay Radia commented on HDFS-4025:


Todd, I am fine with making the full vs partial sync configurable if you 
prefer. However I would like to continue the discussion we started in HDFS-3077.
The relevant comments are 

[comment1| 
https://issues.apache.org/jira/browse/HDFS-3077?focusedCommentId=13473384&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13473384]

[comment2 
|https://issues.apache.org/jira/browse/HDFS-3077?focusedCommentId=13473783&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13473783]

 QJM: Sychronize past log segments to JNs that missed them
 -

 Key: HDFS-4025
 URL: https://issues.apache.org/jira/browse/HDFS-4025
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: QuorumJournalManager (HDFS-3077)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: QuorumJournalManager (HDFS-3077)


 Currently, if a JournalManager crashes and misses some segment of logs, and 
 then comes back, it will be re-added as a valid part of the quorum on the 
 next log roll. However, it will not have a complete history of log segments 
 (i.e. any individual JN may have gaps in its transaction history). This 
 mirrors the behavior of the NameNode when there are multiple local 
 directories specified.
 However, it would be better if a background thread noticed these gaps and 
 filled them in by grabbing the segments from other JournalNodes. This 
 increases the resilience of the system when JournalNodes get reformatted or 
 otherwise lose their local disk.
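The gap-detection part of such a background thread is straightforward to sketch (hypothetical and simplified; it only assumes, as the description says, that segments are identified by transaction-ID ranges):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of gap detection for a JournalNode's local storage:
// given its finalized segments as sorted (startTxId, endTxId) pairs,
// return the transaction ranges it is missing and should copy from peers.
public class SegmentGapSketch {
    public static List<long[]> findGaps(long[][] segments) {
        List<long[]> gaps = new ArrayList<>();
        long nextExpected = segments[0][0];
        for (long[] seg : segments) {
            if (seg[0] > nextExpected) {
                gaps.add(new long[]{nextExpected, seg[0] - 1});
            }
            nextExpected = seg[1] + 1;
        }
        return gaps;
    }
}
```

For example, a JN holding segments [1, 100] and [201, 300] is missing transactions 101 through 200 and would fetch that segment from another JournalNode.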



[jira] [Updated] (HDFS-2434) TestNameNodeMetrics.testCorruptBlock fails intermittently

2012-10-19 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-2434:


Attachment: HDFS-2434.trunk.005.patch

Update the patch based on the change in HDFS-4072.

 TestNameNodeMetrics.testCorruptBlock fails intermittently
 -

 Key: HDFS-2434
 URL: https://issues.apache.org/jira/browse/HDFS-2434
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Uma Maheswara Rao G
Assignee: Jing Zhao
  Labels: test-fail
 Attachments: HDFS-2434.001.patch, HDFS-2434.002.patch, 
 HDFS-2434.trunk.003.patch, HDFS-2434.trunk.004.patch, 
 HDFS-2434.trunk.005.patch


 java.lang.AssertionError: Bad value for metric CorruptBlocks expected:<1> but 
 was:<0>
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.failNotEquals(Assert.java:645)
   at org.junit.Assert.assertEquals(Assert.java:126)
   at org.junit.Assert.assertEquals(Assert.java:470)
   at 
 org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:185)
   at 
 org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.__CLR3_0_2t8sh531i1k(TestNameNodeMetrics.java:175)
   at 
 org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testCorruptBlock(TestNameNodeMetrics.java:164)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)



[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-19 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480205#comment-13480205
 ] 

Colin Patrick McCabe commented on HDFS-2802:


It's good to see that this design carefully considers how to separate the 
metadata of a snapshotted file (or directory) from the metadata of a later 
version of that file.

bq. When there are one or more objects (either the original file or snaplinks) 
under a sub-tree, the occupied space is counted as the max file size times the 
max replication of these objects (the max calculations include only the objects 
under the sub-tree but exclude the objects outside the sub-tree.) Note that it 
is easy to determine if a given INode is under a sub-tree by traversing up with 
the parent references.

In some of the most commercially popular systems which implement snapshots, 
snapshots do not count against the disk quotas.  I think system administrators 
might expect this behavior by now.  Some other filesystems have two kinds of 
quotas: one that counts snapshots, and another that does not.  This could be 
a good way to go.

 Support for RW/RO snapshots in HDFS
 ---

 Key: HDFS-2802
 URL: https://issues.apache.org/jira/browse/HDFS-2802
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: snap.patch, snapshot-one-pager.pdf, Snapshots20121018.pdf


 Snapshots are point in time images of parts of the filesystem or the entire 
 filesystem. Snapshots can be a read-only or a read-write point in time copy 
 of the filesystem. There are several use cases for snapshots in HDFS. I will 
 post a detailed write-up soon with more information.



[jira] [Updated] (HDFS-4084) provide CLI support for snapshot operations

2012-10-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4084:
-

Attachment: HDFS-4084.patch

 provide CLI support for snapshot operations
 ---

 Key: HDFS-4084
 URL: https://issues.apache.org/jira/browse/HDFS-4084
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node, tools
Affects Versions: HDFS-2802
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4084.patch






[jira] [Created] (HDFS-4084) provide CLI support for snapshot operations

2012-10-19 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4084:


 Summary: provide CLI support for snapshot operations
 Key: HDFS-4084
 URL: https://issues.apache.org/jira/browse/HDFS-4084
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node, tools
Affects Versions: HDFS-2802
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4084.patch





[jira] [Updated] (HDFS-4084) provide CLI support for snapshot operations

2012-10-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4084:
-

Description: To provide CLI support to allow snapshot, disallow snapshot on 
a directory, create/remove/list snapshots.

 provide CLI support for snapshot operations
 ---

 Key: HDFS-4084
 URL: https://issues.apache.org/jira/browse/HDFS-4084
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node, tools
Affects Versions: HDFS-2802
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4084.patch


 To provide CLI support to allow snapshot, disallow snapshot on a directory, 
 create/remove/list snapshots.



[jira] [Commented] (HDFS-4081) NamenodeProtocol and other Secure Protocols should use different config keys for serverPrincipal and clientPrincipal KerberosInfo components

2012-10-19 Thread Ahad Rana (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480234#comment-13480234
 ] 

Ahad Rana commented on HDFS-4081:
-

Hi Owen,

Upon further thought, perhaps it is best just to fix the canonical name
issue (a) and leave the DFS_NAMENODE_USER_NAME_KEY as it is (a single key).
It seems that the NN and everybody else (clients) should be able to use
the same consistent principal naming scheme to log in as the hdfs user.
Perhaps this was the intention all along. This does beg the question of
why there is a need for a DFS_SECONDARY_NAMENODE_USER_NAME_KEY, as the 2NN
is basically a client of the NN? Also, what is your recommended policy
with regard to hdfs principal names? Should they be host-qualified or not?
The host-qualified scheme makes a lot of sense when you are distributing
keytabs to each host in the network, but it seems a bit inconvenient that
you cannot mix the two forms, especially in the case of an admin (for
example) that would like to use password auth to get an HDFS TGT for the
purposes of using a tool like DFSAdmin.

Thanks,

Ahad.





 NamenodeProtocol and other Secure Protocols should use different config keys 
 for serverPrincipal and clientPrincipal KerberosInfo components 
 -

 Key: HDFS-4081
 URL: https://issues.apache.org/jira/browse/HDFS-4081
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
Reporter: Ahad Rana

 The Namenode protocol (NamenodeProtocol.java) defines the same config key, 
 dfs.namenode.kerberos.principal, for both ServerPrincipal and ClientPrincipal 
 components of the KerberosInfo data structure. This overloads the meaning of 
 the dfs.namenode.kerberos.principal config key. This key can be used to 
 define the namenode's principal during startup, but in the client case, it is 
 used by ServiceAuthorizationManager.authorize to create a principal name 
 given an incoming client's ip address. If you explicitly set the principal 
 name for the namenode in the Config using this key, it then breaks 
 ServiceAuthorizationManager.authorize, because it expects this same value to 
 contain a Kerberos principal name pattern NOT an explicit name. 
 To solve this issue, the ServerPrincipal and ClientPrincipal components of 
 the NamenodeProtocol should each be assigned unique Config keys.



[jira] [Updated] (HDFS-3996) Add debug log removed in HDFS-3873 back

2012-10-19 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3996:
--

Fix Version/s: 0.23.5

I pulled this into branch-0.23 too.

 Add debug log removed in HDFS-3873 back
 ---

 Key: HDFS-3996
 URL: https://issues.apache.org/jira/browse/HDFS-3996
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.5

 Attachments: hdfs-3996.txt


 Per HDFS-3873 let's add the debug log back.



[jira] [Updated] (HDFS-4084) provide CLI support for allow and disallow snapshot on a directory

2012-10-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4084:
-

Description: To provide CLI support to allow snapshot, disallow snapshot on 
a directory.  (was: To provide CLI support to allow snapshot, disallow snapshot 
on a directory, create/remove/list snapshots.)

 provide CLI support for allow and disallow snapshot on a directory
 --

 Key: HDFS-4084
 URL: https://issues.apache.org/jira/browse/HDFS-4084
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node, tools
Affects Versions: HDFS-2802
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4084.patch


 To provide CLI support to allow snapshot, disallow snapshot on a directory.

--


[jira] [Updated] (HDFS-3483) Better error message when hdfs fsck is run against a ViewFS config

2012-10-19 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3483:
--

Fix Version/s: 0.23.5

I pulled this into branch-0.23 too.

 Better error message when hdfs fsck is run against a ViewFS config
 --

 Key: HDFS-3483
 URL: https://issues.apache.org/jira/browse/HDFS-3483
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Assignee: Stephen Fritz
  Labels: newbie
 Fix For: 2.0.3-alpha, 0.23.5

 Attachments: core-site.xml, HDFS-3483.patch, hdfs-site.xml


 I'm running an HA + secure + federated cluster.
 When I run hdfs fsck /nameservices/ha-nn-uri/, I see the following:
 bash-3.2$ hdfs fsck /nameservices/ha-nn-uri/
 FileSystem is viewfs://oracle/
 DFSck exiting.
 Any path I enter will return the same message.
 Attached are my core-site.xml and hdfs-site.xml.

--


[jira] [Created] (HDFS-4085) HttpFs/WebHDFS don't handle multiple forward slashes in beginning of path

2012-10-19 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-4085:
-

 Summary: HttpFs/WebHDFS don't handle multiple forward slashes in 
beginning of path
 Key: HDFS-4085
 URL: https://issues.apache.org/jira/browse/HDFS-4085
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Priority: Minor


I used double forward slashes when specifying the path while using WebHDFS and 
got an invalid path exception:

webhdfs:
{noformat}
[schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
"http://cs-10-20-81-73.cloud.cloudera.com:50070/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
HTTP/1.1 400 Bad Request
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: 
hadoop.auth="u=schu&p=schu&t=simple&e=1350708634267&s=ceRpGprrUsPZ2tqcM5Awy2NPOak=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.2)

{"RemoteException":{"exception":"InvalidPathException","javaClassName":"org.apache.hadoop.fs.InvalidPathException","message":"Invalid
 path name //user/schu/testDir33"}}
{noformat}

httpfs:
{noformat}
[schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
"http://cs-10-20-81-73.cloud.cloudera.com:14000/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
HTTP/1.1 500 Internal Server Error
Server: Apache-Coyote/1.1
Set-Cookie: 
hadoop.auth="u=schu&p=schu&t=simple&e=1350708639250&s=diEOTnuANr3T1cZx/dCz72BpPAM=";
 Version=1; Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 19 Oct 2012 18:50:39 GMT
Connection: close

{"RemoteException":{"message":"Permission denied: user=schu, access=WRITE, 
inode=\"/\":hdfs:supergroup:drwxr-xr-x","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}
[schu@cs-10-20-81-73 hadoop]$ 
{noformat}

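As an aside, a client-side workaround is to collapse runs of slashes before building the request URL. The sketch below is illustrative only (the class and method names are hypothetical, not part of the HttpFs/WebHDFS API):

```java
// Hypothetical helper: normalize an HDFS path by collapsing duplicate
// slashes before it is embedded in a WebHDFS URL.
public class PathNorm {
    static String normalize(String path) {
        // Replace every run of one or more '/' with a single '/'.
        return path.replaceAll("/+", "/");
    }

    public static void main(String[] args) {
        System.out.println(normalize("//user/schu/testDir33"));
    }
}
```

With the path normalized to a single leading slash, both servers accept the request.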
--


[jira] [Updated] (HDFS-4085) HttpFs/WebHDFS don't handle multiple forward slashes in beginning of path

2012-10-19 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-4085:
--

Description: 
I used double forward slashes when specifying the path while using WebHDFS and 
got an invalid path exception:

webhdfs:
{noformat}
[schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
"http://cs-10-20-81-73.cloud.cloudera.com:50070/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
HTTP/1.1 400 Bad Request
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: 
hadoop.auth="u=schu&p=schu&t=simple&e=1350708634267&s=ceRpGprrUsPZ2tqcM5Awy2NPOak=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.2)

{"RemoteException":{"exception":"InvalidPathException","javaClassName":"org.apache.hadoop.fs.InvalidPathException","message":"Invalid
 path name //user/schu/testDir33"}}
{noformat}


When doing the same with httpfs, I got an AccessControlException.
httpfs:
{noformat}
[schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
"http://cs-10-20-81-73.cloud.cloudera.com:14000/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
HTTP/1.1 500 Internal Server Error
Server: Apache-Coyote/1.1
Set-Cookie: 
hadoop.auth="u=schu&p=schu&t=simple&e=1350708639250&s=diEOTnuANr3T1cZx/dCz72BpPAM=";
 Version=1; Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 19 Oct 2012 18:50:39 GMT
Connection: close

{"RemoteException":{"message":"Permission denied: user=schu, access=WRITE, 
inode=\"/\":hdfs:supergroup:drwxr-xr-x","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}
[schu@cs-10-20-81-73 hadoop]$ 
{noformat}

  was:
I used double forward slashes when specifying the path while using WebHDFS and 
got an invalid path exception:

webhdfs:
{noformat}
[schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
"http://cs-10-20-81-73.cloud.cloudera.com:50070/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
HTTP/1.1 400 Bad Request
Content-Type: application/json
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Set-Cookie: 
hadoop.auth="u=schu&p=schu&t=simple&e=1350708634267&s=ceRpGprrUsPZ2tqcM5Awy2NPOak=";Path=/
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.2)

{"RemoteException":{"exception":"InvalidPathException","javaClassName":"org.apache.hadoop.fs.InvalidPathException","message":"Invalid
 path name //user/schu/testDir33"}}
{noformat}

httpfs:
{noformat}
[schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
"http://cs-10-20-81-73.cloud.cloudera.com:14000/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
HTTP/1.1 500 Internal Server Error
Server: Apache-Coyote/1.1
Set-Cookie: 
hadoop.auth="u=schu&p=schu&t=simple&e=1350708639250&s=diEOTnuANr3T1cZx/dCz72BpPAM=";
 Version=1; Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 19 Oct 2012 18:50:39 GMT
Connection: close

{"RemoteException":{"message":"Permission denied: user=schu, access=WRITE, 
inode=\"/\":hdfs:supergroup:drwxr-xr-x","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}
[schu@cs-10-20-81-73 hadoop]$ 
{noformat}


 HttpFs/WebHDFS don't handle multiple forward slashes in beginning of path
 -

 Key: HDFS-4085
 URL: https://issues.apache.org/jira/browse/HDFS-4085
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Priority: Minor

 I used double forward slashes when specifying the path while using WebHDFS 
 and got an invalid path exception:
 webhdfs:
 {noformat}
 [schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
 "http://cs-10-20-81-73.cloud.cloudera.com:50070/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
 HTTP/1.1 400 Bad Request
 Content-Type: application/json
 Expires: Thu, 01-Jan-1970 00:00:00 GMT
 Set-Cookie: 
 hadoop.auth="u=schu&p=schu&t=simple&e=1350708634267&s=ceRpGprrUsPZ2tqcM5Awy2NPOak=";Path=/
 Transfer-Encoding: chunked
 Server: Jetty(6.1.26.cloudera.2)
 {"RemoteException":{"exception":"InvalidPathException","javaClassName":"org.apache.hadoop.fs.InvalidPathException","message":"Invalid
  path name //user/schu/testDir33"}}
 {noformat}
 When doing the same with httpfs, I got an AccessControlException.
 httpfs:
 {noformat}
 [schu@cs-10-20-81-73 hadoop]$ curl -i -X PUT 
 "http://cs-10-20-81-73.cloud.cloudera.com:14000/webhdfs/v1//user/schu/testDir33?op=MKDIRS&user.name=schu"
 HTTP/1.1 500 Internal Server Error
 Server: Apache-Coyote/1.1
 Set-Cookie: 
 hadoop.auth="u=schu&p=schu&t=simple&e=1350708639250&s=diEOTnuANr3T1cZx/dCz72BpPAM=";
  Version=1; Path=/
 Content-Type: application/json
 Transfer-Encoding: chunked
 Date: Fri, 19 Oct 2012 18:50:39 GMT
 Connection: close
 {"RemoteException":{"message":"Permission denied: user=schu, access=WRITE, 
 inode=\"/\":hdfs:supergroup:drwxr-xr-x","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}
 {noformat}

[jira] [Commented] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480258#comment-13480258
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4076:
--

Hi Aaron, as you noticed, this is going to be committed to the HDFS-2802 branch, 
so please take your time and review the branch later.  Thank you in advance for 
your review.

 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch, h4076_20121019.patch


 The snapshot of a file shares the blocks with the original file in order to 
 avoid copying data.  However, the snapshot file has its own metadata so that 
 it could have independent permission, replication, access time, etc.

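The description above can be sketched in a few lines. This is an illustrative model only, not the actual HDFS inode classes: the snapshot shares the block list by reference (no data copy) while copying the metadata so it can diverge.

```java
import java.util.List;

// Hypothetical stand-ins for HDFS blocks and file inodes.
class Block {
    final long id;
    Block(long id) { this.id = id; }
}

class FileNode {
    List<Block> blocks;   // shared by reference with snapshots (no copy)
    short replication;    // independent per file/snapshot
    String permission;    // independent per file/snapshot

    FileNode(List<Block> blocks, short replication, String permission) {
        this.blocks = blocks;
        this.replication = replication;
        this.permission = permission;
    }

    FileNode snapshot() {
        // Same block list object, fresh metadata fields.
        return new FileNode(this.blocks, this.replication, this.permission);
    }
}

public class SnapDemo {
    public static void main(String[] args) {
        FileNode f = new FileNode(List.of(new Block(1), new Block(2)),
                                  (short) 3, "rw-r--r--");
        FileNode s = f.snapshot();
        s.replication = 1;  // changing the snapshot's metadata...
        // ...leaves the original untouched, while blocks stay shared.
        System.out.println((f.blocks == s.blocks) + " " + f.replication);
    }
}
```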
--


[jira] [Updated] (HDFS-4082) Add editlog opcodes for snapshot related operations

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4082:
--

Attachment: HDFS-4082.patch

Updated patch.

 Add editlog opcodes for snapshot related operations
 ---

 Key: HDFS-4082
 URL: https://issues.apache.org/jira/browse/HDFS-4082
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4082.patch, HDFS-4082.patch


 This jira tracks snapshot related operations and recording them into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Commented] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480265#comment-13480265
 ] 

Suresh Srinivas commented on HDFS-4076:
---

+1 for the patch. Can you please review HDFS-4082?

 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4076_20121018.patch, h4076_20121019.patch


 The snapshot of a file shares the blocks with the original file in order to 
 avoid copying data.  However, the snapshot file has its own metadata so that 
 it could have independent permission, replication, access time, etc.

--


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480269#comment-13480269
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2802:
--

 In some of the most commercially popular systems which implement snapshots, 
 snapshots do not count against the disk quotas. ...

Thanks for the comment.  I think the systems you mentioned probably support 
only RO snapshots.  In our design we also consider RW snapshots, so disk 
quotas have to be counted.  Your suggestion of two kinds of quotas could be a 
good alternative.

 Support for RW/RO snapshots in HDFS
 ---

 Key: HDFS-2802
 URL: https://issues.apache.org/jira/browse/HDFS-2802
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: snap.patch, snapshot-one-pager.pdf, Snapshots20121018.pdf


 Snapshots are point in time images of parts of the filesystem or the entire 
 filesystem. Snapshots can be a read-only or a read-write point in time copy 
 of the filesystem. There are several use cases for snapshots in HDFS. I will 
 post a detailed write-up soon with more information.

--


[jira] [Updated] (HDFS-4082) Add editlog opcodes for snapshot related operations

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4082:
-

Hadoop Flags: Reviewed

+1 patch looks good.

 Add editlog opcodes for snapshot related operations
 ---

 Key: HDFS-4082
 URL: https://issues.apache.org/jira/browse/HDFS-4082
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4082.patch, HDFS-4082.patch


 This jira tracks snapshot related operations and recording them into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Updated] (HDFS-4022) Replication not happening for appended block

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4022:
-

Target Version/s: 2.0.2-alpha, 3.0.0  (was: 3.0.0, 2.0.2-alpha)
Hadoop Flags: Reviewed

+1 patch looks good.  Thanks for working on this, Vinay.

 Replication not happening for appended block
 

 Key: HDFS-4022
 URL: https://issues.apache.org/jira/browse/HDFS-4022
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: suja s
Assignee: Uma Maheswara Rao G
Priority: Blocker
 Attachments: HDFS-4022.patch, HDFS-4022.patch, HDFS-4022.patch, 
 HDFS-4022.patch


 Block written and finalized
 Later append called. Block GenTS got changed.
 DN side log 
 Can't send invalid block 
 BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738 
 logged continuously
 NN side log
 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Error report from 
 DatanodeRegistration(192.xx.xx.xx, 
 storageID=DS-2040532042-192.xx.xx.xx-50010-1348830863443, infoPort=50075, 
 ipcPort=50020, storageInfo=lv=-40;cid=123456;nsid=116596173;c=0): Can't send 
 invalid block 
 BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738 also 
 logged continuously.
 The block checked for transfer is the one with the old genTS, whereas the new 
 block with the updated genTS exists in the data dir.

--


[jira] [Resolved] (HDFS-4076) Support snapshot of single files

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4076.
--

   Resolution: Fixed
Fix Version/s: HDFS-2802
 Hadoop Flags: Reviewed

I have committed this.

 Support snapshot of single files
 

 Key: HDFS-4076
 URL: https://issues.apache.org/jira/browse/HDFS-4076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: HDFS-2802
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: HDFS-2802

 Attachments: h4076_20121018.patch, h4076_20121019.patch


 The snapshot of a file shares the blocks with the original file in order to 
 avoid copying data.  However, the snapshot file has its own metadata so that 
 it could have independent permission, replication, access time, etc.

--


[jira] [Updated] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4083:
--

Attachment: HDFS-4083.patch

Updated patch with java protocol and translator changes.

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch, HDFS-4083.patch


 This jira addresses the protobuf .proto definitions and the corresponding Java 
 protocol classes and translators.

--


[jira] [Resolved] (HDFS-4082) Add editlog opcodes for snapshot related operations

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4082.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Suresh!

 Add editlog opcodes for snapshot related operations
 ---

 Key: HDFS-4082
 URL: https://issues.apache.org/jira/browse/HDFS-4082
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4082.patch, HDFS-4082.patch


 This jira tracks snapshot related operations and recording them into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Updated] (HDFS-4082) Add editlog opcodes for snapshot create and delete operations

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4082:
-

Component/s: (was: data-node)
Description: This jira tracks snapshot create/delete operations and 
recording them into editlogs, fsimage and reading them during startup.  (was: 
This jira tracks snapshot related operations and recording them into editlogs, 
fsimage and reading them during startup.)
Summary: Add editlog opcodes for snapshot create and delete operations  
(was: Add editlog opcodes for snapshot related operations)

 Add editlog opcodes for snapshot create and delete operations
 -

 Key: HDFS-4082
 URL: https://issues.apache.org/jira/browse/HDFS-4082
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4082.patch, HDFS-4082.patch


 This jira tracks snapshot create/delete operations and recording them into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Created] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4086:


 Summary: Add editlog opcodes to allow and disallow snapshots on a 
directory
 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li




--


[jira] [Updated] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4086:
-

Description: This JIRA is to record allow/disallow snapshot operations on a 
directory into editlogs, fsimage and reading them during startup.

 Add editlog opcodes to allow and disallow snapshots on a directory
 --

 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4086.patch


 This JIRA is to record allow/disallow snapshot operations on a directory into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Updated] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4086:
-

Attachment: HDFS-4086.patch

 Add editlog opcodes to allow and disallow snapshots on a directory
 --

 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4086.patch




--


[jira] [Commented] (HDFS-4084) provide CLI support for allow and disallow snapshot on a directory

2012-10-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480335#comment-13480335
 ] 

Brandon Li commented on HDFS-4084:
--

Good point, Arpit. Will change it and upload a new patch. 

 provide CLI support for allow and disallow snapshot on a directory
 --

 Key: HDFS-4084
 URL: https://issues.apache.org/jira/browse/HDFS-4084
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node, tools
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4084.patch


 To provide CLI support to allow snapshot, disallow snapshot on a directory.

--


[jira] [Created] (HDFS-4087) provide a class to save snapshot descriptions

2012-10-19 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4087:


 Summary: provide a class to save snapshot descriptions
 Key: HDFS-4087
 URL: https://issues.apache.org/jira/browse/HDFS-4087
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li


SnapInfo saves information about a snapshot.

--


[jira] [Commented] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480354#comment-13480354
 ] 

Suresh Srinivas commented on HDFS-4086:
---

Minor - can you please add a brief javadoc to the AllowSnapshotOp and 
DisallowSnapshotOp classes? Also, please complete the toXml and fromXml methods.

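For illustration, a round-trip like the one requested might look as follows. This is a hypothetical sketch, not the actual HDFS FSEditLogOp code; the class shape and naive XML handling are assumptions:

```java
/** Operation recording that snapshots are allowed on a directory. */
class AllowSnapshotOp {
    final String path;

    AllowSnapshotOp(String path) { this.path = path; }

    /** Serialize this op to a simple XML fragment. */
    String toXml() {
        return "<AllowSnapshotOp><PATH>" + path + "</PATH></AllowSnapshotOp>";
    }

    /** Parse the fragment produced by {@link #toXml()} (naive string
     *  scanning, for illustration only). */
    static AllowSnapshotOp fromXml(String xml) {
        int start = xml.indexOf("<PATH>") + "<PATH>".length();
        int end = xml.indexOf("</PATH>");
        return new AllowSnapshotOp(xml.substring(start, end));
    }
}

public class OpDemo {
    public static void main(String[] args) {
        AllowSnapshotOp op = new AllowSnapshotOp("/user/data");
        // Round-trip through XML and recover the original path.
        System.out.println(AllowSnapshotOp.fromXml(op.toXml()).path);
    }
}
```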
 Add editlog opcodes to allow and disallow snapshots on a directory
 --

 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4086.patch


 This JIRA is to record allow/disallow snapshot operations on a directory into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Created] (HDFS-4088) Remove throw QuotaExceededException an INodeDirectoryWithQuota constructor

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-4088:


 Summary: Remove throw QuotaExceededException an 
INodeDirectoryWithQuota constructor
 Key: HDFS-4088
 URL: https://issues.apache.org/jira/browse/HDFS-4088
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


The constructor body does not throw QuotaExceededException.  We should remove 
it from the declaration.

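In other words (with hypothetical field names, not the actual HDFS source): a constructor whose body cannot throw should not force callers to handle the checked exception.

```java
// Declared only to show the checked exception being dropped; unused below.
class QuotaExceededException extends Exception {}

class INodeDirectoryWithQuota {
    private final long nsQuota;
    private final long dsQuota;

    // Before: INodeDirectoryWithQuota(...) throws QuotaExceededException
    // After: the throws clause is removed, since nothing in the body throws.
    INodeDirectoryWithQuota(long nsQuota, long dsQuota) {
        this.nsQuota = nsQuota;
        this.dsQuota = dsQuota;
    }

    long getNsQuota() { return nsQuota; }
    long getDsQuota() { return dsQuota; }
}

public class Demo {
    public static void main(String[] args) {
        // No try/catch needed once the spurious throws clause is gone.
        INodeDirectoryWithQuota d = new INodeDirectoryWithQuota(100, 1L << 30);
        System.out.println(d.getNsQuota());
    }
}
```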
--


[jira] [Updated] (HDFS-4087) provide a class to save snapshot descriptions

2012-10-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4087:
-

Attachment: HDFS-4087.patch

 provide a class to save snapshot descriptions
 -

 Key: HDFS-4087
 URL: https://issues.apache.org/jira/browse/HDFS-4087
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4087.patch


 SnapInfo saves information about a snapshot.

--


[jira] [Updated] (HDFS-4088) Remove throw QuotaExceededException an INodeDirectoryWithQuota constructor

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4088:
-

Attachment: h4088_20121019.patch

h4088_20121019.patch: removes throws QuotaExceededException and fixes some 
code formatting.

 Remove throw QuotaExceededException an INodeDirectoryWithQuota constructor
 

 Key: HDFS-4088
 URL: https://issues.apache.org/jira/browse/HDFS-4088
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: h4088_20121019.patch


 The constructor body does not throw QuotaExceededException.  We should remove 
 it from the declaration.

--


[jira] [Updated] (HDFS-4088) Remove throw QuotaExceededException an INodeDirectoryWithQuota constructor

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4088:
-

Status: Patch Available  (was: Open)

 Remove throw QuotaExceededException an INodeDirectoryWithQuota constructor
 

 Key: HDFS-4088
 URL: https://issues.apache.org/jira/browse/HDFS-4088
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: h4088_20121019.patch


 The constructor body does not throw QuotaExceededException.  We should remove 
 it from the declaration.

--


[jira] [Updated] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4086:
-

Attachment: HDFS-4086.patch

Added javadoc and toXml/fromXml implementation.

 Add editlog opcodes to allow and disallow snapshots on a directory
 --

 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4086.patch, HDFS-4086.patch


 This JIRA is to record allow/disallow snapshot operations on a directory into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Commented] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480375#comment-13480375
 ] 

Suresh Srinivas commented on HDFS-4086:
---

+1 for the change.

 Add editlog opcodes to allow and disallow snapshots on a directory
 --

 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4086.patch, HDFS-4086.patch


 This JIRA is to record allow/disallow snapshot operations on a directory into 
 editlogs, fsimage and reading them during startup.

--


[jira] [Updated] (HDFS-4077) Support snapshottable INodeDirectory

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4077:
-

Attachment: h4077_20121019.patch

h4077_20121019.patch: adds INodeDirectorySnapshottable.

 Support snapshottable INodeDirectory
 

 Key: HDFS-4077
 URL: https://issues.apache.org/jira/browse/HDFS-4077
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4077_20121019.patch


 Allow INodeDirectory to be set to snapshottable INodeDirectory so that 
 snapshots of the directory can be created.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4089) SyncBehindWrites uses wrong flags on sync_file_range

2012-10-19 Thread Jan Kunigk (JIRA)
Jan Kunigk created HDFS-4089:


 Summary: SyncBehindWrites uses wrong flags on sync_file_range
 Key: HDFS-4089
 URL: https://issues.apache.org/jira/browse/HDFS-4089
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, ha
Reporter: Jan Kunigk
Priority: Minor


Hi, I stumbled upon this while trying to understand the append design recently. 
I am assuming that when SyncBehindWrites is enabled we do indeed want a 
complete sync after each write. In that case the implementation seems wrong to 
me.

Here's a comment from the sync_file_range manpage on using the 
SYNC_FILE_RANGE_WRITE flag by itself: "This is an asynchronous flush-to-disk 
operation. This is not suitable for data integrity operations." I don't know 
why this syscall is invoked here instead of just fsync.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4089) SyncBehindWrites uses wrong flags on sync_file_range

2012-10-19 Thread Jan Kunigk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Kunigk updated HDFS-4089:
-

Attachment: syncBehindWrites.patch

I believe this patch uses the appropriate flags to enforce sync semantics with 
the sync_file_range call.

 SyncBehindWrites uses wrong flags on sync_file_range
 

 Key: HDFS-4089
 URL: https://issues.apache.org/jira/browse/HDFS-4089
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, ha
Reporter: Jan Kunigk
Priority: Minor
 Attachments: syncBehindWrites.patch


 Hi, I stumbled upon this while trying to understand the append design 
 recently. I am assuming that when SyncBehindWrites is enabled we do indeed 
 want a complete sync after each write. In that case the implementation seems 
 wrong to me.
 Here's a comment from the sync_file_range manpage on using the 
 SYNC_FILE_RANGE_WRITE flag by itself: "This is an asynchronous 
 flush-to-disk operation. This is not suitable for data integrity operations." 
 I don't know why this syscall is invoked here instead of just fsync.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4090) getFileChecksum() result incompatible when called against zero-byte files.

2012-10-19 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-4090:


Assignee: Kihwal Lee

 getFileChecksum() result incompatible when called against zero-byte files.
 --

 Key: HDFS-4090
 URL: https://issues.apache.org/jira/browse/HDFS-4090
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.4, 2.0.2-alpha
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical

 When getFileChecksum() is called against a zero-byte file, the branch-1 
 client returns MD5MD5CRC32FileChecksum with crcPerBlock=0, bytePerCrc=0 and 
 md5=70bc8f4b72a86921468bf8e8441dce51, whereas a null is returned in trunk.
 The null makes sense since there are no actual block checksums, but this 
 breaks compatibility when doing distCp and calling getFileChecksum() via 
 webhdfs or hftp.
 This JIRA is to make the client return the same 'magic' value that the 
 branch-1 and earlier clients return.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4090) getFileChecksum() result incompatible when called against zero-byte files.

2012-10-19 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-4090:


 Summary: getFileChecksum() result incompatible when called against 
zero-byte files.
 Key: HDFS-4090
 URL: https://issues.apache.org/jira/browse/HDFS-4090
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 2.0.2-alpha, 0.23.4
Reporter: Kihwal Lee
Priority: Critical


When getFileChecksum() is called against a zero-byte file, the branch-1 client 
returns MD5MD5CRC32FileChecksum with crcPerBlock=0, bytePerCrc=0 and 
md5=70bc8f4b72a86921468bf8e8441dce51, whereas a null is returned in trunk.

The null makes sense since there are no actual block checksums, but this breaks 
compatibility when doing distCp and calling getFileChecksum() via webhdfs 
or hftp.

This JIRA is to make the client return the same 'magic' value that the 
branch-1 and earlier clients return.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4090) getFileChecksum() result incompatible when called against zero-byte files.

2012-10-19 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4090:
-

Attachment: hdfs-4090.patch

 getFileChecksum() result incompatible when called against zero-byte files.
 --

 Key: HDFS-4090
 URL: https://issues.apache.org/jira/browse/HDFS-4090
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.4, 2.0.2-alpha
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hdfs-4090.patch


 When getFileChecksum() is called against a zero-byte file, the branch-1 
 client returns MD5MD5CRC32FileChecksum with crcPerBlock=0, bytePerCrc=0 and 
 md5=70bc8f4b72a86921468bf8e8441dce51, whereas a null is returned in trunk.
 The null makes sense since there are no actual block checksums, but this 
 breaks compatibility when doing distCp and calling getFileChecksum() via 
 webhdfs or hftp.
 This JIRA is to make the client return the same 'magic' value that the 
 branch-1 and earlier clients return.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4090) getFileChecksum() result incompatible when called against zero-byte files.

2012-10-19 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4090:
-

Status: Patch Available  (was: Open)

 getFileChecksum() result incompatible when called against zero-byte files.
 --

 Key: HDFS-4090
 URL: https://issues.apache.org/jira/browse/HDFS-4090
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 2.0.2-alpha, 0.23.4
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hdfs-4090.patch


 When getFileChecksum() is called against a zero-byte file, the branch-1 
 client returns MD5MD5CRC32FileChecksum with crcPerBlock=0, bytePerCrc=0 and 
 md5=70bc8f4b72a86921468bf8e8441dce51, whereas a null is returned in trunk.
 The null makes sense since there are no actual block checksums, but this 
 breaks compatibility when doing distCp and calling getFileChecksum() via 
 webhdfs or hftp.
 This JIRA is to make the client return the same 'magic' value that the 
 branch-1 and earlier clients return.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480420#comment-13480420
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4083:
--

Need to update ClientProtocol.versionID.  Patch looks good other than that.

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch, HDFS-4083.patch


 This jira addresses the protobuf .proto definitions, the Java protocol 
 classes, and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4089) SyncBehindWrites uses wrong flags on sync_file_range

2012-10-19 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480422#comment-13480422
 ] 

Todd Lipcon commented on HDFS-4089:
---

Hi Jan. The purpose of this flag isn't for data integrity -- it's to avoid 
lumpy IO writeback. If you want data integrity you should be using the hsync() 
call after every write.

 SyncBehindWrites uses wrong flags on sync_file_range
 

 Key: HDFS-4089
 URL: https://issues.apache.org/jira/browse/HDFS-4089
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, ha
Reporter: Jan Kunigk
Priority: Minor
 Attachments: syncBehindWrites.patch


 Hi, I stumbled upon this while trying to understand the append design 
 recently. I am assuming that when SyncBehindWrites is enabled we do indeed 
 want a complete sync after each write. In that case the implementation seems 
 wrong to me.
 Here's a comment from the sync_file_range manpage on using the 
 SYNC_FILE_RANGE_WRITE flag by itself: "This is an asynchronous 
 flush-to-disk operation. This is not suitable for data integrity operations." 
 I don't know why this syscall is invoked here instead of just fsync.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4072) On file deletion remove corresponding blocks pending replication

2012-10-19 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480427#comment-13480427
 ] 

Jing Zhao commented on HDFS-4072:
-

test-patch result for branch-1 patch:
-1 overall.  
+1 @author.  The patch does not contain any @author tags.
+1 tests included.  The patch appears to include 3 new or modified tests.
+1 javadoc.  The javadoc tool did not generate any warning messages.
+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.
-1 findbugs.  The patch appears to introduce 222 new Findbugs (version 
2.0.1) warnings.


 On file deletion remove corresponding blocks pending replication
 

 Key: HDFS-4072
 URL: https://issues.apache.org/jira/browse/HDFS-4072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HDFS-4072.b1.001.patch, HDFS-4072.patch, 
 HDFS-4072.trunk.001.patch, HDFS-4072.trunk.002.patch, 
 HDFS-4072.trunk.003.patch, HDFS-4072.trunk.004.patch, 
 TestPendingAndDelete.java


 Currently, when deleting a file, blockManager does not remove the records 
 corresponding to the file's blocks from pendingReplications. These records can 
 only be removed after a timeout (5~10 min).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4077) Support snapshottable INodeDirectory

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480429#comment-13480429
 ] 

Suresh Srinivas commented on HDFS-4077:
---

Nicholas, some minor nits:
# "Set the given directory path to a snapshottable directory" - "Set the given 
directory as a snapshottable directory" would be clearer, I think.
# I also think grouping snapshot-related functionality in FSNamesystem into an 
inner class would be better for code organization.
# "Directories that can be taken snapshots." could be "Directories where taking 
snapshots is allowed."

 Support snapshottable INodeDirectory
 

 Key: HDFS-4077
 URL: https://issues.apache.org/jira/browse/HDFS-4077
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4077_20121019.patch


 Allow INodeDirectory to be set to snapshottable INodeDirectory so that 
 snapshots of the directory can be created.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480430#comment-13480430
 ] 

Suresh Srinivas commented on HDFS-4083:
---

bq. Need to update ClientProtocol.versionID. Patch looks good other than that.
We no longer update it because with protobuf changes, the protocols are always 
expected to be wire compatible.

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch, HDFS-4083.patch


 This jira addresses the protobuf .proto definitions, the Java protocol 
 classes, and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4077) Support snapshottable INodeDirectory

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4077:
-

Attachment: h4077_20121019b.patch

h4077_20121019b.patch: updates the javadoc.

For #2, let's move the new method to SnapshotManager in HDFS-4079.

 Support snapshottable INodeDirectory
 

 Key: HDFS-4077
 URL: https://issues.apache.org/jira/browse/HDFS-4077
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4077_20121019b.patch, h4077_20121019.patch


 Allow INodeDirectory to be set to snapshottable INodeDirectory so that 
 snapshots of the directory can be created.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4083) Protocol changes for snapshots

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480445#comment-13480445
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4083:
--

You are right.

+1 on the patch.

 Protocol changes for snapshots
 --

 Key: HDFS-4083
 URL: https://issues.apache.org/jira/browse/HDFS-4083
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-4083.patch, HDFS-4083.patch


 This jira addresses the protobuf .proto definitions, the Java protocol 
 classes, and the translation between them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-4086.
---

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I committed the patch to the HDFS-2802 branch.

 Add editlog opcodes to allow and disallow snapshots on a directory
 --

 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4086.patch, HDFS-4086.patch


 This JIRA is to record allow/disallow-snapshot operations on a directory in 
 the editlog and fsimage, and to read them back during startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4091) Add snapshot quota to limit the number of snapshots

2012-10-19 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-4091:


 Summary: Add snapshot quota to limit the number of snapshots
 Key: HDFS-4091
 URL: https://issues.apache.org/jira/browse/HDFS-4091
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


For each snapshottable directory, add a quota to limit the number of snapshots 
of the directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4086) Add editlog opcodes to allow and disallow snapshots on a directory

2012-10-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4086:
--

Hadoop Flags: Reviewed

 Add editlog opcodes to allow and disallow snapshots on a directory
 --

 Key: HDFS-4086
 URL: https://issues.apache.org/jira/browse/HDFS-4086
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: Snapshot (HDFS-2802)
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4086.patch, HDFS-4086.patch


 This JIRA is to record allow/disallow-snapshot operations on a directory in 
 the editlog and fsimage, and to read them back during startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

