[jira] [Updated] (HDFS-3074) HDFS ignores group of a user when creating a file or a directory, and instead inherits

2012-03-10 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-3074:
--

Description: 
When creating a file or making a directory on HDFS, the namesystem calls pass 
{{null}} for the group name, thereby having the parent directory's group 
inherited onto the new file or directory.

This is not how the Linux FS works at least.

For instance, if I have today a user 'foo' with default group 'foo', and I have 
my HDFS home dir created as "foo:foo" by the HDFS admin, all files I create 
under my directory too will have "foo" as group unless I chgrp them myself. 
This makes sense.

Now, if my admin were to change my local account's default/primary group to 
'bar' (but did not change it on my home dir on HDFS), and I were to continue 
writing files to my home directory or any subdirectory that has 'foo' as group, 
all files would still get created with group 'foo' - as if the NN had not 
realized that the primary group of the mapped shell account has changed.

On Linux this is the opposite: my login session's current primary group is what 
determines the default group of my created files and directories, not the 
group of the parent dir.

If the create and mkdirs calls passed the UGI's group info 
(UserGroupInformation.getCurrentUser().getGroupNames()[0] should give the 
primary group?) along in their calls instead of null in the PermissionStatus 
object, perhaps this could be avoided.

Or should we leave this as-is, and instead state that admins who wish to 
change users' default groups must chgrp all the directories themselves?
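The suggested direction can be sketched as follows. This is a hypothetical, 
self-contained stand-in, not the actual NameNode code: only the UGI lookup 
shown in the comment is the real Hadoop call, and the class and method names 
here are illustrative.

```java
// Hypothetical sketch of the suggested fix: derive the group for a new inode
// from the caller's primary group rather than passing null (which makes the
// NameNode fall back to the parent directory's group).
public class GroupSelection {
    // In Hadoop this would come from:
    //   UserGroupInformation.getCurrentUser().getGroupNames()[0]
    static String chooseGroup(String callerPrimaryGroup, String parentDirGroup) {
        // Current behavior: null group => inherit the parent dir's group.
        // Proposed behavior: use the caller's primary group when known.
        return callerPrimaryGroup != null ? callerPrimaryGroup : parentDirGroup;
    }

    public static void main(String[] args) {
        assert chooseGroup("bar", "foo").equals("bar"); // primary group wins
        assert chooseGroup(null, "foo").equals("foo");  // fallback: inherit
    }
}
```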

  was:
When creating a file or making a directory on HDFS, the namesystem calls pass 
{{null}} for the group name, thereby having the parent directory permissions 
inherited onto the file.

This is not how the Linux FS works at least.

For instance, if I have today a user 'foo' with default group 'foo', and I have 
my HDFS home dir created as "foo:foo" by the HDFS admin, all files I create 
under my directory too will have "foo" as group unless I chgrp them myself. 
This makes sense.

Now, if my admin were to change my local account's default/primary group to 
'bar' (but did not change it elsewhere), and I were to continue writing files 
to my home directory (or any subdirectory that has 'foo' as group), all files 
would still get created with group 'foo' - as if the NN had not realized the 
primary group has changed.

On linux this is the opposite. My login session's current primary group is what 
determines the default group on my created files and directories, not the 
parent dir owner.

If the create and mkdirs call passed UGI's group info 
(UserGroupInformation.getCurrentUser().getGroupNames()[0] should give primary 
group?) along into their calls instead of a null in the PermissionsStatus 
object, perhaps this can be avoided.

Or should we leave this as-is, and instead state that if admins wish their 
default groups of users to change, they'd have to chgrp all the directories 
themselves?


> HDFS ignores group of a user when creating a file or a directory, and instead 
> inherits
> --
>
> Key: HDFS-3074
> URL: https://issues.apache.org/jira/browse/HDFS-3074
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.1
>Reporter: Harsh J
>Priority: Minor
>
> When creating a file or making a directory on HDFS, the namesystem calls pass 
> {{null}} for the group name, thereby having the parent directory permissions 
> inherited onto the file.
> This is not how the Linux FS works at least.
> For instance, if I have today a user 'foo' with default group 'foo', and I 
> have my HDFS home dir created as "foo:foo" by the HDFS admin, all files I 
> create under my directory too will have "foo" as group unless I chgrp them 
> myself. This makes sense.
> Now, if my admin were to change my local account's default/primary group to 
> 'bar' (but did not change it on my home dir on HDFS), and I were to continue 
> writing files to my home directory or any subdirectory that has 'foo' as 
> group, all files would still get created with group 'foo' - as if the NN had 
> not realized the primary group of the mapped shell account has changed.
> On linux this is the opposite. My login session's current primary group is 
> what determines the default group on my created files and directories, not 
> the parent dir owner.
> If the create and mkdirs call passed UGI's group info 
> (UserGroupInformation.getCurrentUser().getGroupNames()[0] should give primary 
> group?) along into their calls instead of a null in the PermissionsStatus 
> object, perhaps this can be avoided.
> Or should we leave this as-is, and instead state that if admins wish their 
> default groups of users to change, they'd have to chgrp all the dire

[jira] [Created] (HDFS-3074) HDFS ignores group of a user when creating a file or a directory, and instead inherits

2012-03-10 Thread Harsh J (Created) (JIRA)
HDFS ignores group of a user when creating a file or a directory, and instead 
inherits
--

 Key: HDFS-3074
 URL: https://issues.apache.org/jira/browse/HDFS-3074
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.1
Reporter: Harsh J
Priority: Minor


When creating a file or making a directory on HDFS, the namesystem calls pass 
{{null}} for the group name, thereby having the parent directory permissions 
inherited onto the file.

This is not how the Linux FS works at least.

For instance, if I have today a user 'foo' with default group 'foo', and I have 
my HDFS home dir created as "foo:foo" by the HDFS admin, all files I create 
under my directory too will have "foo" as group unless I chgrp them myself. 
This makes sense.

Now, if my admin were to change my local account's default/primary group to 
'bar' (but did not change it elsewhere), and I were to continue writing files 
to my home directory (or any subdirectory that has 'foo' as group), all files 
would still get created with group 'foo' - as if the NN had not realized the 
primary group has changed.

On linux this is the opposite. My login session's current primary group is what 
determines the default group on my created files and directories, not the 
parent dir owner.

If the create and mkdirs call passed UGI's group info 
(UserGroupInformation.getCurrentUser().getGroupNames()[0] should give primary 
group?) along into their calls instead of a null in the PermissionsStatus 
object, perhaps this can be avoided.

Or should we leave this as-is, and instead state that if admins wish their 
default groups of users to change, they'd have to chgrp all the directories 
themselves?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3005) ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)

2012-03-10 Thread VINAYAKUMAR B (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

VINAYAKUMAR B updated HDFS-3005:


Attachment: HDFS-3005.patch

> ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)
> 
>
> Key: HDFS-3005
> URL: https://issues.apache.org/jira/browse/HDFS-3005
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.24.0
>Reporter: Tsz Wo (Nicholas), SZE
> Attachments: HDFS-3005.patch
>
>
> Saw this in [build 
> #1888|https://builds.apache.org/job/PreCommit-HDFS-Build/1888//testReport/org.apache.hadoop.hdfs.server.datanode/TestMulitipleNNDataBlockScanner/testBlockScannerAfterRestart/].
> {noformat}
> java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:834)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:832)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.getDfsUsed(FSDataset.java:557)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.getDfsUsed(FSDataset.java:809)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.access$1400(FSDataset.java:774)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getDfsUsed(FSDataset.java:1124)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.sendHeartBeat(BPOfferService.java:406)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.offerService(BPOfferService.java:490)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.run(BPOfferService.java:635)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
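The failure mode in the stack trace above can be reproduced and avoided with 
plain JDK classes: iterating a `HashMap` while another thread mutates it 
throws ConcurrentModificationException, whereas `ConcurrentHashMap` iterators 
are weakly consistent and never do. This is a generic illustration, not the 
actual FSDataset fix.

```java
// Generic sketch: summing per-directory usage the way FSVolume.getDfsUsed(..)
// iterates its map. Backing the map with ConcurrentHashMap (or copying it
// under a lock before iterating) avoids ConcurrentModificationException when
// another thread adds or removes entries mid-iteration.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DfsUsedSketch {
    static long totalUsed(Map<String, Long> usedPerDir) {
        long total = 0;
        for (Map.Entry<String, Long> e : usedPerDir.entrySet()) {
            // A ConcurrentHashMap iterator is weakly consistent: concurrent
            // puts/removes never throw ConcurrentModificationException here.
            total += e.getValue();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Long> used = new ConcurrentHashMap<>();
        used.put("/data1", 100L);
        used.put("/data2", 250L);
        assert totalUsed(used) == 350L;
    }
}
```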





[jira] [Updated] (HDFS-3005) ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)

2012-03-10 Thread VINAYAKUMAR B (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

VINAYAKUMAR B updated HDFS-3005:


Attachment: (was: HDFS-3005.patch)

> ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)
> 
>
> Key: HDFS-3005
> URL: https://issues.apache.org/jira/browse/HDFS-3005
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.24.0
>Reporter: Tsz Wo (Nicholas), SZE
> Attachments: HDFS-3005.patch
>
>
> Saw this in [build 
> #1888|https://builds.apache.org/job/PreCommit-HDFS-Build/1888//testReport/org.apache.hadoop.hdfs.server.datanode/TestMulitipleNNDataBlockScanner/testBlockScannerAfterRestart/].
> {noformat}
> java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:834)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:832)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.getDfsUsed(FSDataset.java:557)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.getDfsUsed(FSDataset.java:809)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.access$1400(FSDataset.java:774)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getDfsUsed(FSDataset.java:1124)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.sendHeartBeat(BPOfferService.java:406)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.offerService(BPOfferService.java:490)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.run(BPOfferService.java:635)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}





[jira] [Commented] (HDFS-2515) start-all.sh namenode createSocketAddr

2012-03-10 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226992#comment-13226992
 ] 

Harsh J commented on HDFS-2515:
---

Hi,

This was qualified as a user issue and closed out. The JIRA project is for 
Hadoop development and bug reports. For user queries, please write to the 
mailing lists instead (hdfs-user at hadoop.apache.org).

Thanks :)

> start-all.sh namenode createSocketAddr
> --
>
> Key: HDFS-2515
> URL: https://issues.apache.org/jira/browse/HDFS-2515
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.205.0
> Environment: centos
>Reporter: 程国强
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> 2011-10-28 10:52:00,083 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: 
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: file:///
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:184)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:198)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:262)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:497)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277)





[jira] [Commented] (HDFS-2515) start-all.sh namenode createSocketAddr

2012-03-10 Thread Eduardo de Vera (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226950#comment-13226950
 ] 

Eduardo de Vera commented on HDFS-2515:
---

Same problem here. According to the documentation, standalone mode should work 
when core-site.xml, mapred-site.xml and hdfs-site.xml contain only an empty 
configuration element. Once I added the pseudo-distributed mode configuration 
values, I was able to start the system.
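For reference, a hypothetical minimal core-site.xml for pseudo-distributed 
mode on this release line (the hostname and port are placeholders). Leaving 
fs.default.name at its local-filesystem default is what produces the 
"Does not contain a valid host:port authority: file:///" error above:

```xml
<!-- Illustrative core-site.xml; adjust host/port for your setup. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```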

> start-all.sh namenode createSocketAddr
> --
>
> Key: HDFS-2515
> URL: https://issues.apache.org/jira/browse/HDFS-2515
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.205.0
> Environment: centos
>Reporter: 程国强
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> 2011-10-28 10:52:00,083 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: 
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: file:///
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:184)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:198)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:262)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:497)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277)





[jira] [Commented] (HDFS-2025) Go Back to File View link is not working in tail.jsp

2012-03-10 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226929#comment-13226929
 ] 

Uma Maheswara Rao G commented on HDFS-2025:
---

Sravan, could you please regenerate the patch against trunk? Please generate 
it from the project root.

Review comments on the test part:
The lines of code below largely duplicate the assertion part of the 'Go Back 
to File View' check.
{code}
Matcher matcher = compile.matcher(viewFilePage);
+URL hyperlink = null;
+if (matcher.find()) {
+  // got hyperlink for Tail this file
+  hyperlink = new URL(matcher.group(1));
+  viewFilePage = StringEscapeUtils.unescapeHtml(DFSTestUtil
+  .urlGet(hyperlink));
+  assertTrue("page should show preview of file contents", viewFilePage
+  .contains(FILE_DATA));

{code}

I would suggest extracting this into a separate method and reusing it.
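The suggested refactoring might look like the sketch below: pull the "find the 
hyperlink on the page" step into one reusable helper. The DFSTestUtil and JSP 
specifics are replaced by a self-contained regex example; the class and method 
names are illustrative, not the actual patch.

```java
// Hypothetical reusable helper for the duplicated test logic: extract the
// first href target from an HTML fragment, so both the "Tail this file" and
// "Go Back to File View" assertions can share it.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HyperlinkHelper {
    // Returns the first href value found on the page, or null if none.
    static String findHyperlink(String page) {
        Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(page);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String page = "<a href=\"/tail.jsp?file=x\">Tail this file</a>";
        assert "/tail.jsp?file=x".equals(findHyperlink(page));
        assert findHyperlink("no links here") == null;
    }
}
```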

> Go Back to File View link is not working in tail.jsp
> 
>
> Key: HDFS-2025
> URL: https://issues.apache.org/jira/browse/HDFS-2025
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: sravankorumilli
>Assignee: sravankorumilli
>Priority: Minor
> Attachments: HDFS-2025.patch, HDFS-2025_1.patch, HDFS-2025_2.patch, 
> HDFS-2025_3.patch, HDFS-2025_4.patch, ScreenShot_1.jpg
>
>
> While browsing the file system.
> Click on any file link to go to the page where the file contents are 
> displayed, then when we click on '*Tail this file*' link.
> The control will go to the tail.jsp here when we
> Click on '*Go Back to File View*' option.
> HTTP Error page not found will come.
> This is because the referrer URL is encoded and the encoded URL is itself 
> being used in the '*Go Back to File View*' hyperlink which will be treated as 
> a relative URL and thus the HTTP request will fail.





[jira] [Commented] (HDFS-107) Data-nodes should be formatted when the name-node is formatted.

2012-03-10 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226926#comment-13226926
 ] 

Uma Maheswara Rao G commented on HDFS-107:
--

I agree with Konstantin; allowing auto-format may be dangerous.
I am OK with option (1) as part of this JIRA: it provides appropriate 
automation of the manual removal of storage directories.

> Data-nodes should be formatted when the name-node is formatted.
> ---
>
> Key: HDFS-107
> URL: https://issues.apache.org/jira/browse/HDFS-107
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Konstantin Shvachko
> Attachments: HDFS-107-1.patch
>
>
> The upgrade feature HADOOP-702 requires data-nodes to store persistently the 
> namespaceID 
> in their version files and verify during startup that it matches the one 
> stored on the name-node.
> When the name-node reformats it generates a new namespaceID.
> Now if the cluster starts with the reformatted name-node, and not reformatted 
> data-nodes
> the data-nodes will fail with
> java.io.IOException: Incompatible namespaceIDs ...
> Data-nodes should be reformatted whenever the name-node is. I see 2 
> approaches here:
> 1) In order to reformat the cluster we call "start-dfs -format" or make a 
> special script "format-dfs".
> This would format the cluster components all together. The question is 
> whether it should start
> the cluster after formatting?
> 2) Format the name-node only. When data-nodes connect to the name-node it 
> will tell them to
> format their storage directories if it sees that the namespace is empty and 
> its cTime=0.
> The drawback of this approach is that we can lose blocks of a data-node from 
> another cluster
> if it connects by mistake to the empty name-node.
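The startup check described above can be sketched as follows. This is a 
simplified, self-contained stand-in; the method and parameter names are 
illustrative, and the real check lives in the data-node's storage code.

```java
// Simplified stand-in for the startup check: a data-node compares its stored
// namespaceID against the name-node's and refuses to start on a mismatch,
// which is why a reformatted name-node (new namespaceID) rejects data-nodes
// that were not reformatted.
import java.io.IOException;

public class NamespaceCheck {
    static void verifyNamespace(int dataNodeNsId, int nameNodeNsId) throws IOException {
        if (dataNodeNsId != nameNodeNsId) {
            throw new IOException("Incompatible namespaceIDs: "
                    + dataNodeNsId + " vs " + nameNodeNsId);
        }
    }

    public static void main(String[] args) throws IOException {
        verifyNamespace(42, 42);        // matching IDs: startup proceeds
        boolean threw = false;
        try {
            verifyNamespace(42, 7);     // reformatted name-node: new ID
        } catch (IOException e) {
            threw = true;
        }
        assert threw;                   // mismatch is rejected
    }
}
```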





[jira] [Updated] (HDFS-107) Data-nodes should be formatted when the name-node is formatted.

2012-03-10 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-107:
-

Status: Open  (was: Patch Available)

> Data-nodes should be formatted when the name-node is formatted.
> ---
>
> Key: HDFS-107
> URL: https://issues.apache.org/jira/browse/HDFS-107
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Konstantin Shvachko
> Attachments: HDFS-107-1.patch
>
>
> The upgrade feature HADOOP-702 requires data-nodes to store persistently the 
> namespaceID 
> in their version files and verify during startup that it matches the one 
> stored on the name-node.
> When the name-node reformats it generates a new namespaceID.
> Now if the cluster starts with the reformatted name-node, and not reformatted 
> data-nodes
> the data-nodes will fail with
> java.io.IOException: Incompatible namespaceIDs ...
> Data-nodes should be reformatted whenever the name-node is. I see 2 
> approaches here:
> 1) In order to reformat the cluster we call "start-dfs -format" or make a 
> special script "format-dfs".
> This would format the cluster components all together. The question is 
> whether it should start
> the cluster after formatting?
> 2) Format the name-node only. When data-nodes connect to the name-node it 
> will tell them to
> format their storage directories if it sees that the namespace is empty and 
> its cTime=0.
> The drawback of this approach is that we can lose blocks of a data-node from 
> another cluster
> if it connects by mistake to the empty name-node.





[jira] [Updated] (HDFS-1805) Some Tests in TestDFSShell can not shutdown the MiniDFSCluster on any exception/assertion failure. This will leads to fail other testcases.

2012-03-10 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-1805:
--

Status: Open  (was: Patch Available)

> Some Tests in TestDFSShell can not shutdown the MiniDFSCluster on any 
> exception/assertion failure. This will leads to fail other testcases.
> ---
>
> Key: HDFS-1805
> URL: https://issues.apache.org/jira/browse/HDFS-1805
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.23.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-1805-1.patch, HDFS-1805-2.patch, HDFS-1805-3.patch, 
> HDFS-1805.patch
>
>
> Some test cases in TestDFSShell do not shut down the MiniDFSCluster in a 
> finally block, so any assertion failure or exception can leave the cluster 
> running. Because of this, other test cases will fail, which makes it hard to 
> find the actual failures.
> So, better to shut down the cluster in finally. 
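The fix described above amounts to the pattern below. This is a hypothetical, 
self-contained illustration: FakeCluster stands in for MiniDFSCluster and is 
not the Hadoop class.

```java
// Illustration of shutting the cluster down in a finally block, so cleanup
// runs even when an assertion or exception aborts the test body.
public class ShutdownInFinally {
    static class FakeCluster {
        boolean running = true;
        void shutdown() { running = false; }
    }

    static FakeCluster runTest(boolean fail) {
        FakeCluster cluster = new FakeCluster();
        try {
            if (fail) {
                throw new AssertionError("test assertion failed");
            }
        } catch (AssertionError ignored) {
            // swallowed here only so this sketch can show the finally effect
        } finally {
            cluster.shutdown();  // runs on both success and failure paths
        }
        return cluster;
    }

    public static void main(String[] args) {
        assert !runTest(false).running; // cluster shut down on success
        assert !runTest(true).running;  // ...and also when the test fails
    }
}
```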





[jira] [Updated] (HDFS-1512) BlockSender calls deprecated method getReplica

2012-03-10 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-1512:
--

Status: Open  (was: Patch Available)

Canceled the patch as the comments need to be addressed.

> BlockSender calls deprecated method getReplica
> --
>
> Key: HDFS-1512
> URL: https://issues.apache.org/jira/browse/HDFS-1512
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Amin Bandeali
>  Labels: newbie
> Attachments: HDFS-1512.patch, HDFS-1512.patch
>
>
> HDFS-680 deprecated FSDatasetInterface#getReplica, however it is still used 
> by BlockSender which still maintains a Replica member.





[jira] [Updated] (HDFS-2298) TestDfsOverAvroRpc is failing on trunk

2012-03-10 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-2298:
--

  Resolution: Won't Fix
Target Version/s: 0.23.0, 0.24.0  (was: 0.24.0, 0.23.0)
  Status: Resolved  (was: Patch Available)

Since the Avro-related code has now been removed, resolving this as Won't Fix. 
Feel free to reopen if there is any other expectation.

> TestDfsOverAvroRpc is failing on trunk
> --
>
> Key: HDFS-2298
> URL: https://issues.apache.org/jira/browse/HDFS-2298
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Aaron T. Myers
>Assignee: Doug Cutting
> Fix For: 0.24.0
>
> Attachments: HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch, 
> HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch
>
>
> The relevant bit of the error:
> {noformat}
> ---
> Test set: org.apache.hadoop.hdfs.TestDfsOverAvroRpc
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.486 sec <<< 
> FAILURE!
> testWorkingDirectory(org.apache.hadoop.hdfs.TestDfsOverAvroRpc)  Time 
> elapsed: 1.424 sec  <<< ERROR!
> org.apache.avro.AvroTypeException: Two methods with same name: delete
> {noformat}





[jira] [Updated] (HDFS-1362) Provide volume management functionality for DataNode

2012-03-10 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-1362:
--

Target Version/s: 0.24.0
  Status: Open  (was: Patch Available)

> Provide volume management functionality for DataNode
> 
>
> Key: HDFS-1362
> URL: https://issues.apache.org/jira/browse/HDFS-1362
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: Wang Xu
>Assignee: Wang Xu
> Fix For: 0.24.0
>
> Attachments: DataNode Volume Refreshment in HDFS-1362.pdf, 
> HDFS-1362.4_w7001.txt, HDFS-1362.5.patch, HDFS-1362.6.patch, 
> HDFS-1362.7.patch, HDFS-1362.8.patch, HDFS-1362.txt, 
> Provide_volume_management_for_DN_v1.pdf
>
>
> The current management unit in Hadoop is a node, i.e. if a node fails, it 
> will be kicked out and all the data on the node will be replicated.
> As almost all SATA controllers support hotplug, we add a new command-line 
> interface to the datanode so it can list, add or remove a volume online, 
> which means we can change a disk without decommissioning the node. Moreover, 
> if a failed disk is still readable and the node has enough space, it can 
> migrate data from that disk to other disks in the same node.
> A more detailed design document will be attached.
> The original version in our lab is implemented directly against the 0.20 
> datanode; would it be better to implement it in contrib? Or any other 
> suggestions?





[jira] [Commented] (HDFS-1362) Provide volume management functionality for DataNode

2012-03-10 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226914#comment-13226914
 ] 

Uma Maheswara Rao G commented on HDFS-1362:
---

Hi Wang Xu,
Are you planning to rebase the patch on trunk?

Canceling the patch as it no longer applies to trunk.

> Provide volume management functionality for DataNode
> 
>
> Key: HDFS-1362
> URL: https://issues.apache.org/jira/browse/HDFS-1362
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: Wang Xu
>Assignee: Wang Xu
> Fix For: 0.24.0
>
> Attachments: DataNode Volume Refreshment in HDFS-1362.pdf, 
> HDFS-1362.4_w7001.txt, HDFS-1362.5.patch, HDFS-1362.6.patch, 
> HDFS-1362.7.patch, HDFS-1362.8.patch, HDFS-1362.txt, 
> Provide_volume_management_for_DN_v1.pdf
>
>
> The current management unit in Hadoop is a node, i.e. if a node fails, it 
> will be kicked out and all the data on the node will be replicated.
> As almost all SATA controllers support hotplug, we add a new command-line 
> interface to the datanode so it can list, add or remove a volume online, 
> which means we can change a disk without decommissioning the node. Moreover, 
> if a failed disk is still readable and the node has enough space, it can 
> migrate data from that disk to other disks in the same node.
> A more detailed design document will be attached.
> The original version in our lab is implemented directly against the 0.20 
> datanode; would it be better to implement it in contrib? Or any other 
> suggestions?





[jira] [Updated] (HDFS-1477) Make NameNode Reconfigurable.

2012-03-10 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-1477:
--

Target Version/s: 0.24.0
  Status: Open  (was: Patch Available)

Patrick, would you mind rebasing the patch on trunk?
I just canceled the patch, as it no longer applies to trunk. 

> Make NameNode Reconfigurable.
> -
>
> Key: HDFS-1477
> URL: https://issues.apache.org/jira/browse/HDFS-1477
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Patrick Kling
>Assignee: Patrick Kling
> Fix For: 0.24.0
>
> Attachments: HDFS-1477.2.patch, HDFS-1477.3.patch, HDFS-1477.patch
>
>
> Modify NameNode to implement the interface Reconfigurable proposed in 
> HADOOP-7001. This would allow us to change certain configuration properties 
> without restarting the name node.





[jira] [Commented] (HDFS-1512) BlockSender calls deprecated method getReplica

2012-03-10 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226889#comment-13226889
 ] 

Uma Maheswara Rao G commented on HDFS-1512:
---

@Amin, tests are failing with your patch change. Do you want to take a look?
Please also incorporate the comments given on this patch into your next 
version.
Thanks, Nicholas, for taking a look.

> BlockSender calls deprecated method getReplica
> --
>
> Key: HDFS-1512
> URL: https://issues.apache.org/jira/browse/HDFS-1512
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Amin Bandeali
>  Labels: newbie
> Attachments: HDFS-1512.patch, HDFS-1512.patch
>
>
> HDFS-680 deprecated FSDatasetInterface#getReplica, however it is still used 
> by BlockSender which still maintains a Replica member.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2966) TestNameNodeMetrics tests can fail under load

2012-03-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226857#comment-13226857
 ] 

Hudson commented on HDFS-2966:
--

Integrated in Hadoop-Mapreduce-trunk #1015 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1015/])
HDFS-2966 (Revision 1298820)

 Result = SUCCESS
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1298820
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java


> TestNameNodeMetrics tests can fail under load
> -
>
> Key: HDFS-2966
> URL: https://issues.apache.org/jira/browse/HDFS-2966
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.24.0
> Environment: OS/X running intellij IDEA, firefox, winxp in a 
> virtualbox.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2966.patch, HDFS-2966.patch, HDFS-2966.patch, 
> HDFS-2966.patch
>
>
> I've managed to recreate HDFS-540 and HDFS-2434 by the simple technique of 
> running the HDFS tests on a desktop without enough memory for all the 
> programs trying to run. Things got swapped out and the tests failed as the DN 
> heartbeats didn't come in on time.
> The tests both rely on {{waitForDeletion()}} to block the tests until the 
> delete operation has completed, but all it does is sleep for the same number 
> of seconds as there are datanodes. This is too brittle - it may work on a 
> lightly-loaded system, but not on a system under heavy load where replication 
> takes longer than expected.
> Immediate fix: double or triple the sleep time?
> Better fix: have the thread block until all the DN heartbeats have finished.
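The "better fix" above amounts to polling a condition with a deadline instead of a fixed sleep. A minimal sketch of that pattern (the {{WaitUtil}} class and {{waitFor}} signature here are invented for illustration and are not the actual Hadoop test utility):

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {
    /**
     * Polls the condition every intervalMs until it returns true or
     * timeoutMs elapses. Returns true if the condition was met in time.
     * Unlike a fixed sleep, this returns as soon as the condition holds,
     * and tolerates slow machines up to the timeout.
     */
    public static boolean waitFor(BooleanSupplier condition,
                                  long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        // One last check at the deadline before giving up.
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Example: a condition that becomes true after roughly 200 ms.
        boolean met = waitFor(
            () -> System.currentTimeMillis() - start > 200, 50, 5000);
        System.out.println(met);
    }
}
```

In the test, the condition would check that all DN heartbeats (or the pending-deletion count) have settled, rather than guessing a sleep duration from the datanode count.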

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3056) Add an interface for DataBlockScanner logging

2012-03-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226854#comment-13226854
 ] 

Hudson commented on HDFS-3056:
--

Integrated in Hadoop-Mapreduce-trunk #1015 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1015/])
HDFS-3056: add the new file for the previous commit. (Revision 1299144)
HDFS-3056.  Add a new interface RollingLogs for DataBlockScanner logging. 
(Revision 1299139)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299144
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RollingLogs.java

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299139
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


> Add an interface for DataBlockScanner logging
> -
>
> Key: HDFS-3056
> URL: https://issues.apache.org/jira/browse/HDFS-3056
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.24.0, 0.23.3
>
> Attachments: h3056_20120306.patch, h3056_20120307.patch, 
> h3056_20120307b.patch
>
>
> Some methods in the FSDatasetInterface are used only for logging in 
> DataBlockScanner.  These methods should be separated out into a new interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2849) Improved usability around node decommissioning and block replication on dfshealth.jsp

2012-03-10 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226850#comment-13226850
 ] 

Harsh J commented on HDFS-2849:
---

The central trouble here comes from the fact that some ops teams care about the 
global under-replicated count, which is fair, as there isn't a more granular 
option. This is a signal-in-the-noise issue. We'll need to divide the metrics 
into finer-grained ones they can then choose among.

If there aren't metrics today for a decommissioning-blocks count, we could add 
them; those who wish to continue monitoring the global under-replicated count 
could then subtract the decommissioning-pending block count from it and be done.
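The subtraction proposed above is trivial arithmetic; a hypothetical sketch of the monitoring-side adjustment (the class and method names are invented for illustration and do not correspond to an existing Hadoop API):

```java
public class UnderRepAdjust {
    /**
     * Derives an "actionable" under-replication figure by excluding blocks
     * that are pending only because a node is being decommissioned.
     * Clamped at zero in case the two counters are sampled at slightly
     * different times.
     */
    static long adjustedUnderReplicated(long globalUnderReplicated,
                                        long decommissionPendingBlocks) {
        return Math.max(0, globalUnderReplicated - decommissionPendingBlocks);
    }

    public static void main(String[] args) {
        // E.g. a million under-replicated blocks, nearly all of them
        // explained by an in-progress decommission.
        System.out.println(adjustedUnderReplicated(1_000_000, 999_950));
    }
}
```

An alerting system built on the adjusted figure would stay quiet during a planned decommission while still catching genuine replication shortfalls.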

> Improved usability around node decommissioning and block replication on 
> dfshealth.jsp
> -
>
> Key: HDFS-2849
> URL: https://issues.apache.org/jira/browse/HDFS-2849
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: documentation, name-node
>Affects Versions: 0.20.2
>Reporter: Jeff Bean
>
> When you do this:
> - Decom a single node.
> - Underreplicated count reports all blocks.
> - Stop decom.
> - Underreplication count reduces slowly and heads to 0.
> This is expected behavior of HDFS but while this is happening, utilities like 
> dfshealth.jsp and fsck produce high numbers of underreplicated blocks, and 
> the node is not on the dead/decommissioned nodes list. It's therefore unclear 
> to novice administrators and HDFS newbies whether or not this is a failure 
> condition that needs administrative attention. 
> Administrators find themselves constantly having to explain the 
> under-replication number when they could be doing better things with their 
> time. And they're constantly getting alarms that can be disregarded, raising 
> fears of a "cry wolf" problem in which a real issue gets lost in the noise.
> A direct quote from such an administrator:
> "When a datanode fails, it's not considered a 'decommissioning', so it does 
> not show up in that list, it just simply kicks on the underrep and we have to 
> hunt through the LIVE list and attempt to find out which node caused the 
> issue. Obviously, we (the community) are not being told on the DEAD list when 
> a node appears (why this information has to be withheld has always been an 
> issue with me, how hard is it to put a date field in the DEAD list?)"
> Nevertheless, we should have more information about a dying node instead of 
> seeing a jump in the underrep count from 0 to millions with no real obvious 
> reason. Perhaps add another column saying 'DYING NODE', anything would help.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3056) Add an interface for DataBlockScanner logging

2012-03-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226843#comment-13226843
 ] 

Hudson commented on HDFS-3056:
--

Integrated in Hadoop-Mapreduce-0.23-Build #221 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/221/])
Merge r1299139 and r1299144 from trunk for HDFS-3056. (Revision 1299146)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299146
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RollingLogs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


> Add an interface for DataBlockScanner logging
> -
>
> Key: HDFS-3056
> URL: https://issues.apache.org/jira/browse/HDFS-3056
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.24.0, 0.23.3
>
> Attachments: h3056_20120306.patch, h3056_20120307.patch, 
> h3056_20120307b.patch
>
>
> Some methods in the FSDatasetInterface are used only for logging in 
> DataBlockScanner.  These methods should be separated out into a new interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2966) TestNameNodeMetrics tests can fail under load

2012-03-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226836#comment-13226836
 ] 

Hudson commented on HDFS-2966:
--

Integrated in Hadoop-Hdfs-trunk #980 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/980/])
HDFS-2966 (Revision 1298820)

 Result = SUCCESS
stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1298820
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java


> TestNameNodeMetrics tests can fail under load
> -
>
> Key: HDFS-2966
> URL: https://issues.apache.org/jira/browse/HDFS-2966
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.24.0
> Environment: OS/X running intellij IDEA, firefox, winxp in a 
> virtualbox.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-2966.patch, HDFS-2966.patch, HDFS-2966.patch, 
> HDFS-2966.patch
>
>
> I've managed to recreate HDFS-540 and HDFS-2434 by the simple technique of 
> running the HDFS tests on a desktop without enough memory for all the 
> programs trying to run. Things got swapped out and the tests failed as the DN 
> heartbeats didn't come in on time.
> The tests both rely on {{waitForDeletion()}} to block the tests until the 
> delete operation has completed, but all it does is sleep for the same number 
> of seconds as there are datanodes. This is too brittle - it may work on a 
> lightly-loaded system, but not on a system under heavy load where replication 
> takes longer than expected.
> Immediate fix: double or triple the sleep time?
> Better fix: have the thread block until all the DN heartbeats have finished.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3056) Add an interface for DataBlockScanner logging

2012-03-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226833#comment-13226833
 ] 

Hudson commented on HDFS-3056:
--

Integrated in Hadoop-Hdfs-trunk #980 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/980/])
HDFS-3056: add the new file for the previous commit. (Revision 1299144)
HDFS-3056.  Add a new interface RollingLogs for DataBlockScanner logging. 
(Revision 1299139)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299144
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RollingLogs.java

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299139
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


> Add an interface for DataBlockScanner logging
> -
>
> Key: HDFS-3056
> URL: https://issues.apache.org/jira/browse/HDFS-3056
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.24.0, 0.23.3
>
> Attachments: h3056_20120306.patch, h3056_20120307.patch, 
> h3056_20120307b.patch
>
>
> Some methods in the FSDatasetInterface are used only for logging in 
> DataBlockScanner.  These methods should be separated out into a new interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3056) Add an interface for DataBlockScanner logging

2012-03-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226826#comment-13226826
 ] 

Hudson commented on HDFS-3056:
--

Integrated in Hadoop-Hdfs-0.23-Build #193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/193/])
Merge r1299139 and r1299144 from trunk for HDFS-3056. (Revision 1299146)

 Result = UNSTABLE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299146
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RollingLogs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


> Add an interface for DataBlockScanner logging
> -
>
> Key: HDFS-3056
> URL: https://issues.apache.org/jira/browse/HDFS-3056
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 0.24.0, 0.23.3
>
> Attachments: h3056_20120306.patch, h3056_20120307.patch, 
> h3056_20120307b.patch
>
>
> Some methods in the FSDatasetInterface are used only for logging in 
> DataBlockScanner.  These methods should be separated out into a new interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira