[jira] [Updated] (HDFS-5180) Add time taken to process the command to audit log

2013-10-04 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5180:
-

Attachment: HDFS-5180.patch

I attach a patch file.
A request whose processing time is longer than the threshold is output to the 
log. In this way, we can identify requests that may indicate a problem.
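
As a rough illustration of the idea (not the attached patch), the check could 
look like the sketch below; the class and field names are assumptions made for 
this example.

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * Minimal sketch: write a request to the audit log only when its processing
 * time exceeds a configurable threshold. Names are illustrative only.
 */
public class SlowRequestAuditLogger {
  private static final Log AUDIT_LOG = LogFactory.getLog("SlowRequestAudit");
  private final long thresholdMs;

  public SlowRequestAuditLogger(long thresholdMs) {
    this.thresholdMs = thresholdMs;
  }

  /** Record one completed request; only slow ones reach the audit log. */
  public void logIfSlow(String ugi, String cmd, String src, long elapsedMs) {
    if (elapsedMs >= thresholdMs) {
      AUDIT_LOG.info("ugi=" + ugi + "\tcmd=" + cmd + "\tsrc=" + src
          + "\telapsed=" + elapsedMs + "ms");
    }
  }
}
{code}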

> Add time taken to process the command to audit log
> --
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
> Attachments: HDFS-5180.patch
>
>
> Command and ugi are output now by audit log of NameNode. But it is not output 
> for the processing time of command to audit log.
> For example, we must check which command is a problem when a trouble such as 
> the slow down occurred in NameNode.
> It should add the processing time to audit log to know the abnormal sign.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5180) Add time taken to process the command to audit log

2013-10-04 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5180:
-

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> Add time taken to process the command to audit log
> --
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
> Attachments: HDFS-5180.patch
>
>
> Command and ugi are output now by audit log of NameNode. But it is not output 
> for the processing time of command to audit log.
> For example, we must check which command is a problem when a trouble such as 
> the slow down occurred in NameNode.
> It should add the processing time to audit log to know the abnormal sign.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2013-10-07 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788724#comment-13788724
 ] 

Shinichi Yamashita commented on HDFS-5040:
--

It seems that "allowSnapshot" and "disallowSnapshot" have been already 
implemented in trunk.
And it is possible to let NameNode's audit log output the remaining commands 
except "refreshNameNodes" and "deleteBlockPool".

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Raghu C Doppalapudi
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2013-10-07 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5040:
-

Attachment: HDFS-5040.patch

I attach a patch file related to the previous comment.

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Reporter: Raghu C Doppalapudi
> Attachments: HDFS-5040.patch
>
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2013-10-07 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5040:
-

 Target Version/s: 3.0.0
Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Raghu C Doppalapudi
> Attachments: HDFS-5040.patch
>
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2013-10-07 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5040:
-

Attachment: HDFS-5040.patch

I attach a revised patch file.

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Raghu C Doppalapudi
> Attachments: HDFS-5040.patch, HDFS-5040.patch
>
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2013-10-08 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5040:
-

Attachment: HDFS-5040.patch

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Raghu C Doppalapudi
> Attachments: HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch
>
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2013-10-09 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: snapshottable-directoryList.png

I attach an image file of the snapshottable directory list.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Haohui Mai
>Priority: Minor
> Attachments: snapshottable-directoryList.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5334) Implement dfshealth.jsp in HTML pages

2013-10-10 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792017#comment-13792017
 ] 

Shinichi Yamashita commented on HDFS-5334:
--

I applied the patch and confirmed it. I think it is now easier to read.
In addition, it would be better to change the files/directories/blocks 
information and the NameNode's heap memory information to a table style.

> Implement dfshealth.jsp in HTML pages
> -
>
> Key: HDFS-5334
> URL: https://issues.apache.org/jira/browse/HDFS-5334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5334.000.patch, HDFS-5334.001.patch, 
> HDFS-5334.002.patch, Screen Shot 2013-10-09 at 10.52.37 AM.png
>
>
> Reimplement dfshealth.jsp using client-side JavaScript.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2013-10-10 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: snapshotteddir.png

I attach an image file of the snapshot list.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Haohui Mai
>Priority: Minor
> Attachments: snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5334) Implement dfshealth.jsp in HTML pages

2013-10-10 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792318#comment-13792318
 ] 

Shinichi Yamashita commented on HDFS-5334:
--

Thank you for your reply. I will pursue my idea in another ticket.

> Implement dfshealth.jsp in HTML pages
> -
>
> Key: HDFS-5334
> URL: https://issues.apache.org/jira/browse/HDFS-5334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5334.000.patch, HDFS-5334.001.patch, 
> HDFS-5334.002.patch, Screen Shot 2013-10-09 at 10.52.37 AM.png
>
>
> Reimplement dfshealth.jsp using client-side JavaScript.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2013-10-14 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196.patch

I attach a patch file corresponding to the two images in my previous comments.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Haohui Mai
>Priority: Minor
> Attachments: HDFS-5196.patch, snapshottable-directoryList.png, 
> snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2013-10-14 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

 Target Version/s: 3.0.0
Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Priority: Minor
> Attachments: HDFS-5196.patch, snapshottable-directoryList.png, 
> snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-14 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5360:


 Summary: Improvement of usage message of renameSnapshot and 
deleteSnapshot
 Key: HDFS-5360
 URL: https://issues.apache.org/jira/browse/HDFS-5360
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Priority: Minor


When the arguments of the "hdfs dfs -createSnapshot" command are invalid, the 
following message is displayed.

{code}
[hadoop@trunk ~]$ hdfs dfs -createSnapshot
-createSnapshot: <snapshotDir> is missing.
Usage: hadoop fs [generic options] -createSnapshot <snapshotDir> [<snapshotName>]
{code}

On the other hand, the "-renameSnapshot" and "-deleteSnapshot" commands display 
the following messages, which are not kind to the user.

{code}
[hadoop@trunk ~]$ hdfs dfs -renameSnapshot
renameSnapshot: args number not 3: 0

[hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
deleteSnapshot: args number not 2: 0
{code}

This issue changes "-renameSnapshot" and "-deleteSnapshot" to output a usage 
message similar to "-createSnapshot".
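
As a sketch of the proposed behaviour (not the committed patch), the 
rename/delete commands could validate the argument count and report it in the 
same style as -createSnapshot; the NAME and USAGE constants below mirror the 
FsShell command convention but are assumptions for this example.

{code}
import java.util.LinkedList;

/**
 * Sketch only: validate the argument count of -renameSnapshot and report
 * a usage-style message instead of "args number not 3: 0".
 */
public class RenameSnapshotUsageExample {
  public static final String NAME = "renameSnapshot";
  public static final String USAGE = "<snapshotDir> <oldName> <newName>";

  protected void processOptions(LinkedList<String> args) {
    if (args.size() != 3) {
      throw new IllegalArgumentException(
          NAME + ": incorrect number of arguments, expected: " + USAGE);
    }
  }
}
{code}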



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5362) Add SnapshotException to terse exception group

2013-10-14 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5362:


 Summary: Add SnapshotException to terse exception group
 Key: HDFS-5362
 URL: https://issues.apache.org/jira/browse/HDFS-5362
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Priority: Minor


In trunk, the full stack trace of SnapshotException is output to the NameNode's 
log via the ipc.Server class.
The message of SnapshotException is sufficient on its own; the stack trace adds 
little.
So SnapshotException should be added to the terse exception group of 
NameNodeRpcServer.
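
As a minimal sketch of the proposal (not the attached patch), the registration 
could use the existing terse-exception hook on the IPC server; the surrounding 
method is an assumption for illustration.

{code}
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.apache.hadoop.ipc.Server;

/**
 * Sketch only: register SnapshotException as a "terse" exception so the IPC
 * server logs its message without the full stack trace.
 */
public class TerseSnapshotExceptionExample {
  void registerTerseExceptions(Server clientRpcServer) {
    // addTerseExceptions suppresses stack traces for the given exception types.
    clientRpcServer.addTerseExceptions(SnapshotException.class);
  }
}
{code}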



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2013-10-15 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196.patch

I attach a revised patch file.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-15 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5360:
-

Status: Patch Available  (was: Open)

> Improvement of usage message of renameSnapshot and deleteSnapshot
> -
>
> Key: HDFS-5360
> URL: https://issues.apache.org/jira/browse/HDFS-5360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5360.patch
>
>
> When the argument of "hdfs dfs -createSnapshot" comamnd is inappropriate, it 
> is displayed as follows.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -createSnapshot
> -createSnapshot:  is missing.
> Usage: hadoop fs [generic options] -createSnapshot  
> []
> {code}
> On the other hands, the commands of "-renameSnapshot" and "-deleteSnapshot" 
> is displayed as follows. And there are not kind for the user.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -renameSnapshot
> renameSnapshot: args number not 3: 0
> [hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
> deleteSnapshot: args number not 2: 0
> {code}
> It changes "-renameSnapshot" and "-deleteSnapshot" to output the message 
> which is similar to "-createSnapshot".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-15 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita reassigned HDFS-5360:


Assignee: Shinichi Yamashita

> Improvement of usage message of renameSnapshot and deleteSnapshot
> -
>
> Key: HDFS-5360
> URL: https://issues.apache.org/jira/browse/HDFS-5360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5360.patch
>
>
> When the argument of "hdfs dfs -createSnapshot" comamnd is inappropriate, it 
> is displayed as follows.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -createSnapshot
> -createSnapshot:  is missing.
> Usage: hadoop fs [generic options] -createSnapshot  
> []
> {code}
> On the other hands, the commands of "-renameSnapshot" and "-deleteSnapshot" 
> is displayed as follows. And there are not kind for the user.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -renameSnapshot
> renameSnapshot: args number not 3: 0
> [hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
> deleteSnapshot: args number not 2: 0
> {code}
> It changes "-renameSnapshot" and "-deleteSnapshot" to output the message 
> which is similar to "-createSnapshot".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-15 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5360:
-

Attachment: HDFS-5360.patch

I attach a patch.

> Improvement of usage message of renameSnapshot and deleteSnapshot
> -
>
> Key: HDFS-5360
> URL: https://issues.apache.org/jira/browse/HDFS-5360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5360.patch
>
>
> When the argument of "hdfs dfs -createSnapshot" comamnd is inappropriate, it 
> is displayed as follows.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -createSnapshot
> -createSnapshot:  is missing.
> Usage: hadoop fs [generic options] -createSnapshot  
> []
> {code}
> On the other hands, the commands of "-renameSnapshot" and "-deleteSnapshot" 
> is displayed as follows. And there are not kind for the user.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -renameSnapshot
> renameSnapshot: args number not 3: 0
> [hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
> deleteSnapshot: args number not 2: 0
> {code}
> It changes "-renameSnapshot" and "-deleteSnapshot" to output the message 
> which is similar to "-createSnapshot".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-16 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797076#comment-13797076
 ] 

Shinichi Yamashita commented on HDFS-5360:
--

Thank you for your comment. I agree with you.
The argument information is already included in USAGE, so we only need to 
confirm whether the number of arguments is right.
And I had not noticed the spelling mistake.

> Improvement of usage message of renameSnapshot and deleteSnapshot
> -
>
> Key: HDFS-5360
> URL: https://issues.apache.org/jira/browse/HDFS-5360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5360.patch
>
>
> When the argument of "hdfs dfs -createSnapshot" comamnd is inappropriate, it 
> is displayed as follows.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -createSnapshot
> -createSnapshot:  is missing.
> Usage: hadoop fs [generic options] -createSnapshot  
> []
> {code}
> On the other hands, the commands of "-renameSnapshot" and "-deleteSnapshot" 
> is displayed as follows. And there are not kind for the user.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -renameSnapshot
> renameSnapshot: args number not 3: 0
> [hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
> deleteSnapshot: args number not 2: 0
> {code}
> It changes "-renameSnapshot" and "-deleteSnapshot" to output the message 
> which is similar to "-createSnapshot".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-16 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5360:
-

Attachment: HDFS-5360.patch

I attach a revised patch.

> Improvement of usage message of renameSnapshot and deleteSnapshot
> -
>
> Key: HDFS-5360
> URL: https://issues.apache.org/jira/browse/HDFS-5360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5360.patch, HDFS-5360.patch
>
>
> When the argument of "hdfs dfs -createSnapshot" comamnd is inappropriate, it 
> is displayed as follows.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -createSnapshot
> -createSnapshot:  is missing.
> Usage: hadoop fs [generic options] -createSnapshot  
> []
> {code}
> On the other hands, the commands of "-renameSnapshot" and "-deleteSnapshot" 
> is displayed as follows. And there are not kind for the user.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -renameSnapshot
> renameSnapshot: args number not 3: 0
> [hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
> deleteSnapshot: args number not 2: 0
> {code}
> It changes "-renameSnapshot" and "-deleteSnapshot" to output the message 
> which is similar to "-createSnapshot".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5360) Improvement of usage message of renameSnapshot and deleteSnapshot

2013-10-16 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797386#comment-13797386
 ] 

Shinichi Yamashita commented on HDFS-5360:
--

It seems that an OutOfMemoryError occurred on Jenkins during the 
eclipse:eclipse phase. I think the patch itself is fine.

> Improvement of usage message of renameSnapshot and deleteSnapshot
> -
>
> Key: HDFS-5360
> URL: https://issues.apache.org/jira/browse/HDFS-5360
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5360.patch, HDFS-5360.patch
>
>
> When the argument of "hdfs dfs -createSnapshot" comamnd is inappropriate, it 
> is displayed as follows.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -createSnapshot
> -createSnapshot:  is missing.
> Usage: hadoop fs [generic options] -createSnapshot  
> []
> {code}
> On the other hands, the commands of "-renameSnapshot" and "-deleteSnapshot" 
> is displayed as follows. And there are not kind for the user.
> {code}
> [hadoop@trunk ~]$ hdfs dfs -renameSnapshot
> renameSnapshot: args number not 3: 0
> [hadoop@trunk ~]$ hdfs dfs -deleteSnapshot
> deleteSnapshot: args number not 2: 0
> {code}
> It changes "-renameSnapshot" and "-deleteSnapshot" to output the message 
> which is similar to "-createSnapshot".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5362) Add SnapshotException to terse exception group

2013-10-17 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita reassigned HDFS-5362:


Assignee: Shinichi Yamashita

> Add SnapshotException to terse exception group
> --
>
> Key: HDFS-5362
> URL: https://issues.apache.org/jira/browse/HDFS-5362
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
>
> In trunk, a stack trace of SnapshotException is output NameNode's log via 
> ipc.Server class.
> The trace of the output method is easy for the message of SnapshotException.
> So, it should add SnapshotException to terse exception group of 
> NameNodeRpcServer.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5362) Add SnapshotException to terse exception group

2013-10-17 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5362:
-

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> Add SnapshotException to terse exception group
> --
>
> Key: HDFS-5362
> URL: https://issues.apache.org/jira/browse/HDFS-5362
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5362.patch
>
>
> In trunk, a stack trace of SnapshotException is output NameNode's log via 
> ipc.Server class.
> The trace of the output method is easy for the message of SnapshotException.
> So, it should add SnapshotException to terse exception group of 
> NameNodeRpcServer.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5362) Add SnapshotException to terse exception group

2013-10-17 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5362:
-

Attachment: HDFS-5362.patch

I attach a patch file.

> Add SnapshotException to terse exception group
> --
>
> Key: HDFS-5362
> URL: https://issues.apache.org/jira/browse/HDFS-5362
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5362.patch
>
>
> In trunk, a stack trace of SnapshotException is output NameNode's log via 
> ipc.Server class.
> The trace of the output method is easy for the message of SnapshotException.
> So, it should add SnapshotException to terse exception group of 
> NameNodeRpcServer.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5180) Add time taken to process the command to audit log

2013-10-18 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita reassigned HDFS-5180:


Assignee: Shinichi Yamashita

> Add time taken to process the command to audit log
> --
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-5180.patch
>
>
> Command and ugi are output now by audit log of NameNode. But it is not output 
> for the processing time of command to audit log.
> For example, we must check which command is a problem when a trouble such as 
> the slow down occurred in NameNode.
> It should add the processing time to audit log to know the abnormal sign.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5180) Add time taken to process the command to audit log

2013-10-22 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5180:
-

Attachment: HDFS-5180.patch

> Add time taken to process the command to audit log
> --
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-5180.patch, HDFS-5180.patch
>
>
> Command and ugi are output now by audit log of NameNode. But it is not output 
> for the processing time of command to audit log.
> For example, we must check which command is a problem when a trouble such as 
> the slow down occurred in NameNode.
> It should add the processing time to audit log to know the abnormal sign.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5180) Output the processing time of slow RPC request to node's log

2013-10-23 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5180:
-

Summary: Output the processing time of slow RPC request to node's log  
(was: Add time taken to process the command to audit log)

> Output the processing time of slow RPC request to node's log
> 
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-5180.patch, HDFS-5180.patch
>
>
> Command and ugi are output now by audit log of NameNode. But it is not output 
> for the processing time of command to audit log.
> For example, we must check which command is a problem when a trouble such as 
> the slow down occurred in NameNode.
> It should add the processing time to audit log to know the abnormal sign.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5180) Output the processing time of slow RPC request to node's log

2013-10-23 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803735#comment-13803735
 ] 

Shinichi Yamashita commented on HDFS-5180:
--

I made a draft patch which outputs slow RPC requests to the log. Only requests 
slower than a threshold, set via a property, are logged.
The default threshold is 1000 ms. I think this default is long enough for an 
RPC request.
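
A minimal sketch of this behaviour follows (not the attached patch); the 
property key used below is a placeholder, and only the 1000 ms default comes 
from the comment above.

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.Time;

/**
 * Sketch only: time an RPC handler and log it when it exceeds a threshold
 * read from the configuration (placeholder key, default 1000 ms).
 */
public class SlowRpcLogExample {
  private static final Log LOG = LogFactory.getLog(SlowRpcLogExample.class);
  private final long thresholdMs;

  public SlowRpcLogExample(Configuration conf) {
    thresholdMs = conf.getLong("ipc.server.slow.rpc.threshold.ms", 1000L);
  }

  /** Run an RPC handler and log it only if it took longer than the threshold. */
  public void process(String method, Runnable handler) {
    long start = Time.monotonicNow();
    handler.run();
    long elapsed = Time.monotonicNow() - start;
    if (elapsed >= thresholdMs) {
      LOG.info("Slow RPC: " + method + " took " + elapsed + " ms");
    }
  }
}
{code}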

> Output the processing time of slow RPC request to node's log
> 
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-5180.patch, HDFS-5180.patch
>
>
> Command and ugi are output now by audit log of NameNode. But it is not output 
> for the processing time of command to audit log.
> For example, we must check which command is a problem when a trouble such as 
> the slow down occurred in NameNode.
> It should add the processing time to audit log to know the abnormal sign.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5180) Output the processing time of slow RPC request to node's log

2013-10-23 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5180:
-

Description: 
In current trunk, the processing time of every RPC request is output to the log 
at DEBUG level.
When troubleshooting a large-scale cluster, the current implementation is hard 
to work with.
Therefore we should set a threshold and output only slow RPCs to the node's log 
so that abnormal signs can be detected.


  was:
Command and ugi are output now by audit log of NameNode. But it is not output 
for the processing time of command to audit log.
For example, we must check which command is a problem when a trouble such as 
the slow down occurred in NameNode.
It should add the processing time to audit log to know the abnormal sign.



> Output the processing time of slow RPC request to node's log
> 
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-5180.patch, HDFS-5180.patch
>
>
> In current trunk, it is output at DEBUG level for the processing time of all 
> RPC requests to log.
> When we treat it by the troubleshooting of the large-scale cluster, it is 
> hard to handle the current implementation.
> Therefore we should set the threshold and output only a slow RPC to node's 
> log to know the abnormal sign.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5467) Remove tab characters in hdfs-default.xml

2013-11-09 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5467:
-

Attachment: HDFS-5467.patch

I attach a patch file before I forget this ...

> Remove tab characters in hdfs-default.xml
> -
>
> Key: HDFS-5467
> URL: https://issues.apache.org/jira/browse/HDFS-5467
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-5467.patch
>
>
> The retrycache parameters are indented with tabs rather than the normal 2 
> spaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5467) Remove tab characters in hdfs-default.xml

2013-11-09 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5467:
-

Status: Patch Available  (was: Open)

> Remove tab characters in hdfs-default.xml
> -
>
> Key: HDFS-5467
> URL: https://issues.apache.org/jira/browse/HDFS-5467
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-5467.patch
>
>
> The retrycache parameters are indented with tabs rather than the normal 2 
> spaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-3215) Block size is logging as zero Even blockrecevied command received by DN

2013-11-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita reassigned HDFS-3215:


Assignee: Shinichi Yamashita

> Block size is logging as zero Even blockrecevied command received by DN 
> 
>
> Key: HDFS-3215
> URL: https://issues.apache.org/jira/browse/HDFS-3215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Shinichi Yamashita
>Priority: Minor
>
> Scenario 1
> ==
> Start NN and DN.
> write file.
> Block size is logging as zero Even blockrecevied command received by DN 
>  *NN log*
> 2012-03-14 20:23:40,541 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: /hadoop-create-user.sh._COPYING_. 
> BP-1166515020-10.18.40.24-1331736264353 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]}
> 2012-03-14 20:24:26,357 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> addStoredBlock: blockMap updated: XXX:50010 is added to 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]} 
> size 0
>  *DN log* 
> 2012-03-14 20:24:17,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving block 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002 src: 
> /XXX:53141 dest: /XXX:50010
> 2012-03-14 20:24:26,517 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /XXX:53141, dest: /XXX:50010, bytes: 512, op: HDFS_WRITE, cliID: 
> DFSClient_NONMAPREDUCE_1612873957_1, offset: 0, srvID: 
> DS-1639667928-XXX-50010-1331736284942, blockid: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, duration: 
> 1286482503
> 2012-03-14 20:24:26,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, 
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2012-03-14 20:24:31,533 INFO 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
> succeeded for BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-3215) Block size is logging as zero Even blockrecevied command received by DN

2013-11-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-3215:
-

Affects Version/s: 3.0.0

> Block size is logging as zero Even blockrecevied command received by DN 
> 
>
> Key: HDFS-3215
> URL: https://issues.apache.org/jira/browse/HDFS-3215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Shinichi Yamashita
>Priority: Minor
>
> Scenario 1
> ==
> Start NN and DN.
> write file.
> Block size is logging as zero Even blockrecevied command received by DN 
>  *NN log*
> 2012-03-14 20:23:40,541 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: /hadoop-create-user.sh._COPYING_. 
> BP-1166515020-10.18.40.24-1331736264353 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]}
> 2012-03-14 20:24:26,357 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> addStoredBlock: blockMap updated: XXX:50010 is added to 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]} 
> size 0
>  *DN log* 
> 2012-03-14 20:24:17,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving block 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002 src: 
> /XXX:53141 dest: /XXX:50010
> 2012-03-14 20:24:26,517 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /XXX:53141, dest: /XXX:50010, bytes: 512, op: HDFS_WRITE, cliID: 
> DFSClient_NONMAPREDUCE_1612873957_1, offset: 0, srvID: 
> DS-1639667928-XXX-50010-1331736284942, blockid: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, duration: 
> 1286482503
> 2012-03-14 20:24:26,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, 
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2012-03-14 20:24:31,533 INFO 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
> succeeded for BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-3215) Block size is logging as zero Even blockrecevied command received by DN

2013-11-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-3215:
-

Attachment: HDFS-3215.patch

BlockInfo is defined as 0 bytes while in the UNDER_CONSTRUCTION state. This 
block size is never updated, so it is output as "size 0".
On the other hand, the DataNode sends the block size to the NameNode in 
DataNodeProtocol#blockReceivedAndDeleted.
I think the block size can be output even in UNDER_CONSTRUCTION by applying this 
information to BlockInfo.
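
As a rough sketch of that idea (not the attached patch), the reported length 
could be copied onto the stored block when the report is processed; the method 
below is a hypothetical helper, not the actual BlockManager code.

{code}
import org.apache.hadoop.hdfs.protocol.Block;

/**
 * Sketch only: when a received-block report arrives, apply the length
 * reported by the DataNode to the stored block so the NameNode log no
 * longer prints "size 0" for under-construction blocks.
 */
public class ReceivedBlockSizeExample {
  void applyReportedLength(Block storedBlock, Block reportedBlock) {
    if (storedBlock.getNumBytes() == 0 && reportedBlock.getNumBytes() > 0) {
      storedBlock.setNumBytes(reportedBlock.getNumBytes());
    }
  }
}
{code}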

> Block size is logging as zero Even blockrecevied command received by DN 
> 
>
> Key: HDFS-3215
> URL: https://issues.apache.org/jira/browse/HDFS-3215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-3215.patch
>
>
> Scenario 1
> ==
> Start NN and DN.
> write file.
> Block size is logging as zero Even blockrecevied command received by DN 
>  *NN log*
> 2012-03-14 20:23:40,541 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: /hadoop-create-user.sh._COPYING_. 
> BP-1166515020-10.18.40.24-1331736264353 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]}
> 2012-03-14 20:24:26,357 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> addStoredBlock: blockMap updated: XXX:50010 is added to 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]} 
> size 0
>  *DN log* 
> 2012-03-14 20:24:17,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving block 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002 src: 
> /XXX:53141 dest: /XXX:50010
> 2012-03-14 20:24:26,517 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /XXX:53141, dest: /XXX:50010, bytes: 512, op: HDFS_WRITE, cliID: 
> DFSClient_NONMAPREDUCE_1612873957_1, offset: 0, srvID: 
> DS-1639667928-XXX-50010-1331736284942, blockid: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, duration: 
> 1286482503
> 2012-03-14 20:24:26,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, 
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2012-03-14 20:24:31,533 INFO 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
> succeeded for BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-3215) Block size is logging as zero Even blockrecevied command received by DN

2013-11-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-3215:
-

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> Block size is logging as zero Even blockrecevied command received by DN 
> 
>
> Key: HDFS-3215
> URL: https://issues.apache.org/jira/browse/HDFS-3215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-3215.patch
>
>
> Scenario 1
> ==
> Start NN and DN.
> write file.
> Block size is logging as zero Even blockrecevied command received by DN 
>  *NN log*
> 2012-03-14 20:23:40,541 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: /hadoop-create-user.sh._COPYING_. 
> BP-1166515020-10.18.40.24-1331736264353 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]}
> 2012-03-14 20:24:26,357 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> addStoredBlock: blockMap updated: XXX:50010 is added to 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]} 
> size 0
>  *DN log* 
> 2012-03-14 20:24:17,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving block 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002 src: 
> /XXX:53141 dest: /XXX:50010
> 2012-03-14 20:24:26,517 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /XXX:53141, dest: /XXX:50010, bytes: 512, op: HDFS_WRITE, cliID: 
> DFSClient_NONMAPREDUCE_1612873957_1, offset: 0, srvID: 
> DS-1639667928-XXX-50010-1331736284942, blockid: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, duration: 
> 1286482503
> 2012-03-14 20:24:26,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, 
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2012-03-14 20:24:31,533 INFO 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
> succeeded for BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-3215) Block size is logging as zero Even blockrecevied command received by DN

2013-11-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-3215:
-

Attachment: HDFS-3215.patch

Based on the previous test results, I changed the patch to pass the block size 
as an argument.

> Block size is logging as zero Even blockrecevied command received by DN 
> 
>
> Key: HDFS-3215
> URL: https://issues.apache.org/jira/browse/HDFS-3215
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-3215.patch, HDFS-3215.patch
>
>
> Scenario 1
> ==
> Start NN and DN.
> write file.
> Block size is logging as zero Even blockrecevied command received by DN 
>  *NN log*
> 2012-03-14 20:23:40,541 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: /hadoop-create-user.sh._COPYING_. 
> BP-1166515020-10.18.40.24-1331736264353 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]}
> 2012-03-14 20:24:26,357 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> addStoredBlock: blockMap updated: XXX:50010 is added to 
> blk_1264419582929433995_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[XXX:50010|RBW]]} 
> size 0
>  *DN log* 
> 2012-03-14 20:24:17,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving block 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002 src: 
> /XXX:53141 dest: /XXX:50010
> 2012-03-14 20:24:26,517 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /XXX:53141, dest: /XXX:50010, bytes: 512, op: HDFS_WRITE, cliID: 
> DFSClient_NONMAPREDUCE_1612873957_1, offset: 0, srvID: 
> DS-1639667928-XXX-50010-1331736284942, blockid: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, duration: 
> 1286482503
> 2012-03-14 20:24:26,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002, 
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2012-03-14 20:24:31,533 INFO 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
> succeeded for BP-1166515020-XXX-1331736264353:blk_1264419582929433995_1002



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-22 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829777#comment-13829777
 ] 

Shinichi Yamashita commented on HDFS-5552:
--

+1 LGTM

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Haohui Mai
> Attachments: HDFS-5552.000.patch, dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5552:


 Summary: Fix wrong information of "Cluster summay" in 
dfshealth.html
 Key: HDFS-5552
 URL: https://issues.apache.org/jira/browse/HDFS-5552
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita


"files and directories" + "blocks" = total filesystem object(s). But wrong 
value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5552) Fix wrong information of "Cluster summay" in dfshealth.html

2013-11-21 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5552:
-

Attachment: dfshealth-html.png

I attach a screenshot. It displays "3 files and directories, 0 blocks = 1 
total filesystem object(s).".

> Fix wrong information of "Cluster summay" in dfshealth.html
> ---
>
> Key: HDFS-5552
> URL: https://issues.apache.org/jira/browse/HDFS-5552
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
> Attachments: dfshealth-html.png
>
>
> "files and directories" + "blocks" = total filesystem object(s). But wrong 
> value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5686) Cannot upgrade the layout version with -49 from -48

2013-12-18 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5686:


 Summary: Cannot upgrade the layout version with -49 from -48
 Key: HDFS-5686
 URL: https://issues.apache.org/jira/browse/HDFS-5686
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
 Environment: CentOS 6.3, JDK 1.6.0_31
Reporter: Shinichi Yamashita


When we upgraded the layout version from -48 to -49, the following exception 
was output.

{code}
2013-12-19 13:02:28,143 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Starting upgrade of image directory /hadoop/data1/dfs/name.
   old LV = -48; old CTime = 0.
   new LV = -49; new CTime = 1387425748143
2013-12-19 13:02:28,160 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Saving image file 
/hadoop/data1/dfs/name/current/fsimage.ckpt_0004837 using no 
compression
2013-12-19 13:02:28,191 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
Unable to save image for /hadoop/data1/dfs/name
java.lang.NoSuchMethodError: 
org.apache.hadoop.hdfs.server.namenode.CacheManager.saveState(Ljava/io/DataOutput;Ljava/lang/String;)V
at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Saver.save(FSImageFormat.java:1037)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:854)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:883)
at java.lang.Thread.run(Thread.java:662)
2013-12-19 13:02:28,193 ERROR org.apache.hadoop.hdfs.server.common.Storage: 
Error reported on storage directory Storage Directory /hadoop/data1/dfs/name
{code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HDFS-5686) Cannot upgrade the layout version with -49 from -48

2013-12-18 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852602#comment-13852602
 ] 

Shinichi Yamashita commented on HDFS-5686:
--

Layout version -48 came from a trunk build on December 11, and -49 from a trunk 
build made four hours ago.

> Cannot upgrade the layout version with -49 from -48
> ---
>
> Key: HDFS-5686
> URL: https://issues.apache.org/jira/browse/HDFS-5686
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
> Environment: CentOS 6.3, JDK 1.6.0_31
>Reporter: Shinichi Yamashita
>
> When we upgraded the layout version from -48 to -49, the following exception 
> output. 
> {code}
> 2013-12-19 13:02:28,143 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Starting upgrade of image directory /hadoop/data1/dfs/name.
>old LV = -48; old CTime = 0.
>new LV = -49; new CTime = 1387425748143
> 2013-12-19 13:02:28,160 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Saving image file 
> /hadoop/data1/dfs/name/current/fsimage.ckpt_0004837 using no 
> compression
> 2013-12-19 13:02:28,191 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Unable to save image for /hadoop/data1/dfs/name
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hdfs.server.namenode.CacheManager.saveState(Ljava/io/DataOutput;Ljava/lang/String;)V
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Saver.save(FSImageFormat.java:1037)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:854)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:883)
> at java.lang.Thread.run(Thread.java:662)
> 2013-12-19 13:02:28,193 ERROR org.apache.hadoop.hdfs.server.common.Storage: 
> Error reported on storage directory Storage Directory /hadoop/data1/dfs/name
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Resolved] (HDFS-5686) Cannot upgrade the layout version with -49 from -48

2013-12-18 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita resolved HDFS-5686.
--

Resolution: Not A Problem

I prepared a separate build environment, and Hadoop worked when I built it 
there.
Thank you.

> Cannot upgrade the layout version with -49 from -48
> ---
>
> Key: HDFS-5686
> URL: https://issues.apache.org/jira/browse/HDFS-5686
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
> Environment: CentOS 6.3, JDK 1.6.0_31
>Reporter: Shinichi Yamashita
>
> When we upgraded the layout version from -48 to -49, the following exception 
> output. 
> {code}
> 2013-12-19 13:02:28,143 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Starting upgrade of image directory /hadoop/data1/dfs/name.
>old LV = -48; old CTime = 0.
>new LV = -49; new CTime = 1387425748143
> 2013-12-19 13:02:28,160 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Saving image file 
> /hadoop/data1/dfs/name/current/fsimage.ckpt_0004837 using no 
> compression
> 2013-12-19 13:02:28,191 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Unable to save image for /hadoop/data1/dfs/name
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hdfs.server.namenode.CacheManager.saveState(Ljava/io/DataOutput;Ljava/lang/String;)V
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Saver.save(FSImageFormat.java:1037)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:854)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:883)
> at java.lang.Thread.run(Thread.java:662)
> 2013-12-19 13:02:28,193 ERROR org.apache.hadoop.hdfs.server.common.Storage: 
> Error reported on storage directory Storage Directory /hadoop/data1/dfs/name
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HDFS-5292) clean up output of `dfs -du -s`

2014-01-15 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873132#comment-13873132
 ] 

Shinichi Yamashita commented on HDFS-5292:
--

+1, LGTM. I confirmed that the command output is now displayed in a UNIX-like format.

> clean up output of `dfs -du -s`
> ---
>
> Key: HDFS-5292
> URL: https://issues.apache.org/jira/browse/HDFS-5292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Nick Dimiduk
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-5292.patch
>
>
> This could be formatted a little nicer:
> {noformat}
> $ hdfs dfs -du -s /apps/hbase/data/data/default/*
> 22604541341  /apps/hbase/data/data/default/IntegrationTestBulkLoad
> 896656491  /apps/hbase/data/data/default/IntegrationTestIngest
> 33776145312  /apps/hbase/data/data/default/IntegrationTestLoadAndVerify
> 83512463  /apps/hbase/data/data/default/SendTracesTable
> 532898  /apps/hbase/data/data/default/TestAcidGuarantees
> 27294  /apps/hbase/data/data/default/demo_table
> 1410  /apps/hbase/data/data/default/example
> 2531532801  /apps/hbase/data/data/default/loadtest_d1
> 901  /apps/hbase/data/data/default/table_qho71mpvj8
> 1433  /apps/hbase/data/data/default/tcreatetbl
> 1690  /apps/hbase/data/data/default/tdelrowtbl
> 360  /apps/hbase/data/data/default/testtbl1
> 360  /apps/hbase/data/data/default/testtbl2
> 360  /apps/hbase/data/data/default/testtbl3
> 1515  /apps/hbase/data/data/default/tquerytbl
> 1513  /apps/hbase/data/data/default/tscantbl
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5292) clean up output of `dfs -du -s`

2014-01-15 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5292:
-

Hadoop Flags: Reviewed

> clean up output of `dfs -du -s`
> ---
>
> Key: HDFS-5292
> URL: https://issues.apache.org/jira/browse/HDFS-5292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Nick Dimiduk
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HDFS-5292.patch
>
>
> This could be formatted a little nicer:
> {noformat}
> $ hdfs dfs -du -s /apps/hbase/data/data/default/*
> 22604541341  /apps/hbase/data/data/default/IntegrationTestBulkLoad
> 896656491  /apps/hbase/data/data/default/IntegrationTestIngest
> 33776145312  /apps/hbase/data/data/default/IntegrationTestLoadAndVerify
> 83512463  /apps/hbase/data/data/default/SendTracesTable
> 532898  /apps/hbase/data/data/default/TestAcidGuarantees
> 27294  /apps/hbase/data/data/default/demo_table
> 1410  /apps/hbase/data/data/default/example
> 2531532801  /apps/hbase/data/data/default/loadtest_d1
> 901  /apps/hbase/data/data/default/table_qho71mpvj8
> 1433  /apps/hbase/data/data/default/tcreatetbl
> 1690  /apps/hbase/data/data/default/tdelrowtbl
> 360  /apps/hbase/data/data/default/testtbl1
> 360  /apps/hbase/data/data/default/testtbl2
> 360  /apps/hbase/data/data/default/testtbl3
> 1515  /apps/hbase/data/data/default/tquerytbl
> 1513  /apps/hbase/data/data/default/tscantbl
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-04 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita reassigned HDFS-5196:


Assignee: Shinichi Yamashita

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-04 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13919340#comment-13919340
 ] 

Shinichi Yamashita commented on HDFS-5196:
--

OK. I will attach a new patch by the end of this week.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-06 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Status: Open  (was: Patch Available)

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-06 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: snapshot-new-webui.png

I attach a screenshot of the new web UI.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-07 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Status: Patch Available  (was: Open)

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-07 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196.patch

I attach a patch file for the new web UI.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-07 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924555#comment-13924555
 ] 

Shinichi Yamashita commented on HDFS-5196:
--

Thank you for your comment.
I will change the patch so that the new UI does not access the legacy web UI.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-11 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-2.patch

I attach a patch file that uses only the new web UI.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196.patch, HDFS-5196.patch, 
> HDFS-5196.patch, snapshot-new-webui.png, snapshottable-directoryList.png, 
> snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-18 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-3.patch

Thank you for your comment.
I attach the patch file, changed to use an MXBean and JMX.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196.patch, 
> HDFS-5196.patch, HDFS-5196.patch, snapshot-new-webui.png, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-18 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-4.patch

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, snapshot-new-webui.png, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-5.patch

I attach a patch that adds test code to the former patch.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Status: Patch Available  (was: Open)

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Status: Open  (was: Patch Available)

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-19 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-6.patch

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196.patch, HDFS-5196.patch, 
> HDFS-5196.patch, snapshot-new-webui.png, snapshottable-directoryList.png, 
> snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-19 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13941402#comment-13941402
 ] 

Shinichi Yamashita commented on HDFS-5196:
--

Thank you for your review.

{quote}
Looking at the code, it might make more sense to create a MXBean of the 
SnapshotManager to record all the information. The motivation is that both 
FSNamesystemState and NameNodeInfo are frequently queried but the snapshot 
information is not. That way also allows the UI makes only one HTTP call 
instead of two. What do you think?
{quote}

I think so too.
I think we should be able to get the snapshot information in a form such as 
"Hadoop:service=NameNode,name=SnapshotInfo".
I will attach a new patch that implements this idea.
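As a rough sketch of that idea (the interface shape and field names below are 
my assumptions for illustration, not the code in the attached patch), the 
NameNode could expose an MXBean like the following and register it under that 
object name:

{code}
// Illustrative sketch only; the interface shape and field names are assumptions.
public interface SnapshotStatsMXBean {

  // one entry per snapshottable directory, including its current snapshot count
  SnapshottableDirectoryBean[] getSnapshottableDirectories();

  // total number of snapshots in the namespace
  int getNumSnapshots();
}

// Composite value type (in its own source file); the MXBean mapping converts it
// to CompositeData for JMX clients.
public class SnapshottableDirectoryBean {
  private final String path;
  private final int snapshotNumber;
  private final int snapshotQuota;

  public SnapshottableDirectoryBean(String path, int snapshotNumber, int snapshotQuota) {
    this.path = path;
    this.snapshotNumber = snapshotNumber;
    this.snapshotQuota = snapshotQuota;
  }

  public String getPath() { return path; }
  public int getSnapshotNumber() { return snapshotNumber; }
  public int getSnapshotQuota() { return snapshotQuota; }
}
{code}

Registering the implementation with the existing MBeans.register("NameNode", 
"SnapshotInfo", bean) helper would make it appear as 
"Hadoop:service=NameNode,name=SnapshotInfo" in /jmx.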



> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196.patch, HDFS-5196.patch, 
> HDFS-5196.patch, snapshot-new-webui.png, snapshottable-directoryList.png, 
> snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-20 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-7.patch

I attach a patch file that reflects the earlier discussion.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196-7.patch, HDFS-5196.patch, 
> HDFS-5196.patch, HDFS-5196.patch, snapshot-new-webui.png, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-20 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13942765#comment-13942765
 ] 

Shinichi Yamashita commented on HDFS-5196:
--

Thank you for your comment.

bq. Do you think it makes sense to move both definitions of the beans to 
SnapshotStatsMXBean? 

Do you mean that SnapshotStatsMXBean is not necessary? Please give me more 
details about this.


> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196-7.patch, HDFS-5196.patch, 
> HDFS-5196.patch, HDFS-5196.patch, snapshot-new-webui.png, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-21 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943901#comment-13943901
 ] 

Shinichi Yamashita commented on HDFS-5196:
--

Thank you, I understand your comment!
I will fix my patch.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196-7.patch, HDFS-5196.patch, 
> HDFS-5196.patch, HDFS-5196.patch, snapshot-new-webui.png, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-24 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-8.patch

I attach the new patch file.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196-7.patch, HDFS-5196-8.patch, 
> HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, snapshot-new-webui.png, 
> snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-25 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-9.patch

Thank you for your comment! I attach the renewed patch:
* remove trailing whitespace
* move helper_to_permission from explorer.js to dfs-dust.js and change 
explorer.html
* use the SIZE helper

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196-7.patch, HDFS-5196-8.patch, 
> HDFS-5196-9.patch, HDFS-5196.patch, HDFS-5196.patch, HDFS-5196.patch, 
> snapshot-new-webui.png, snapshottable-directoryList.png, snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-25 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5196:
-

Attachment: HDFS-5196-9.patch

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196-7.patch, HDFS-5196-8.patch, 
> HDFS-5196-9.patch, HDFS-5196-9.patch, HDFS-5196.patch, HDFS-5196.patch, 
> HDFS-5196.patch, snapshot-new-webui.png, snapshottable-directoryList.png, 
> snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5196) Provide more snapshot information in WebUI

2014-03-25 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947412#comment-13947412
 ] 

Shinichi Yamashita commented on HDFS-5196:
--

[~wheat9], thank you for your message.
I would like to work on simplifying how the snapshot information is retrieved in 
HDFS-6156.

> Provide more snapshot information in WebUI
> --
>
> Key: HDFS-5196
> URL: https://issues.apache.org/jira/browse/HDFS-5196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: HDFS-5196-2.patch, HDFS-5196-3.patch, HDFS-5196-4.patch, 
> HDFS-5196-5.patch, HDFS-5196-6.patch, HDFS-5196-7.patch, HDFS-5196-8.patch, 
> HDFS-5196-9.patch, HDFS-5196-9.patch, HDFS-5196.patch, HDFS-5196.patch, 
> HDFS-5196.patch, snapshot-new-webui.png, snapshottable-directoryList.png, 
> snapshotteddir.png
>
>
> The WebUI should provide more detailed information about snapshots, such as 
> all snapshottable directories and corresponding number of snapshots 
> (suggested in HDFS-4096).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6156) Simplify the JMX API that provides snapshot information

2014-03-26 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947701#comment-13947701
 ] 

Shinichi Yamashita commented on HDFS-6156:
--

I think that the following methods are necessary to complete your proposal.

{code}
public SnapshottableDirectory.Bean[] getSnapshottableDirectories();
public Snapshots.Bean[] getSnapshots();
{code}

Let me clarify my understanding. 


> Simplify the JMX API that provides snapshot information
> ---
>
> Key: HDFS-6156
> URL: https://issues.apache.org/jira/browse/HDFS-6156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>
> HDFS-5196 introduces a set of new APIs that provide snapshot information 
> through JMX. Currently, the API nests {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}}, creating another layer of composition.
> This jira proposes to inline {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}} and to simplify the API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6156) Simplify the JMX API that provides snapshot information

2014-03-27 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6156:
-

Attachment: HDFS-6156.patch

> Simplify the JMX API that provides snapshot information
> ---
>
> Key: HDFS-6156
> URL: https://issues.apache.org/jira/browse/HDFS-6156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
> Attachments: HDFS-6156.patch
>
>
> HDFS-5196 introduces a set of new APIs that provide snapshot information 
> through JMX. Currently, the API nests {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}}, creating another layer of composition.
> This jira proposes to inline {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}} and to simplify the API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6156) Simplify the JMX API that provides snapshot information

2014-03-27 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949563#comment-13949563
 ] 

Shinichi Yamashita commented on HDFS-6156:
--

Thank you for your comment. I think we do not need to convert the List into an 
array.
I attach the patch file.
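To illustrate the point (a minimal sketch only; SnapshottableDirectoryBean is 
the composite bean sketched earlier in this thread, not necessarily the name in 
the patch), the MXBean mapping converts a List of a composite bean type into an 
array of CompositeData by itself, so the getter can return the list directly:

{code}
import java.util.List;

// Sketch only; SnapshottableDirectoryBean is the composite bean sketched earlier.
public interface SnapshotStatsMXBean {
  // The MXBean mapping handles List<...> of a composite type on its own, so the
  // SnapshotManager can return its internal list without copying it into an array.
  List<SnapshottableDirectoryBean> getSnapshottableDirectories();
}
{code}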

> Simplify the JMX API that provides snapshot information
> ---
>
> Key: HDFS-6156
> URL: https://issues.apache.org/jira/browse/HDFS-6156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
> Attachments: HDFS-6156.patch
>
>
> HDFS-5196 introduces a set of new APIs that provide snapshot information 
> through JMX. Currently, the API nests {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}}, creating another layer of composition.
> This jira proposes to inline {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}} and to simplify the API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6156) Simplify the JMX API that provides snapshot information

2014-03-27 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6156:
-

Assignee: Shinichi Yamashita
  Status: Patch Available  (was: Open)

> Simplify the JMX API that provides snapshot information
> ---
>
> Key: HDFS-6156
> URL: https://issues.apache.org/jira/browse/HDFS-6156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6156.patch
>
>
> HDFS-5196 introduces a set of new APIs that provide snapshot information 
> through JMX. Currently, the API nests {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}}, creating another layer of composition.
> This jira proposes to inline {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}} and to simplify the API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6156) Simplify the JMX API that provides snapshot information

2014-03-27 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6156:
-

Attachment: HDFS-6156-2.patch

Thank you for your detailed comment!
I attach a revised patch covering the beans and the snapshottable directory list.

> Simplify the JMX API that provides snapshot information
> ---
>
> Key: HDFS-6156
> URL: https://issues.apache.org/jira/browse/HDFS-6156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6156-2.patch, HDFS-6156.patch
>
>
> HDFS-5196 introduces a set of new APIs that provide snapshot information 
> through JMX. Currently, the API nests {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}}, creating another layer of composition.
> This jira proposes to inline {{SnapshotDirectoryMXBean}} into 
> {{SnapshotStatsMXBean}} and to simplify the API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-10 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-4977:


 Summary: Change "Checkpoint Size" of web ui of SecondaryNameNode
 Key: HDFS-4977
 URL: https://issues.apache.org/jira/browse/HDFS-4977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Shinichi Yamashita
Priority: Minor


The checkpoint of SecondaryNameNode after 2.0 is carried out by 
"dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
Because "Checkpoint Size" displayed in status.jsp of SecondaryNameNode, it 
shuold make modifications.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-10 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-4977:
-

Attachment: HDFS-4977.patch

I attach a patch file.

> Change "Checkpoint Size" of web ui of SecondaryNameNode
> ---
>
> Key: HDFS-4977
> URL: https://issues.apache.org/jira/browse/HDFS-4977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Shinichi Yamashita
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-4977.patch
>
>
> The checkpoint of SecondaryNameNode after 2.0 is carried out by 
> "dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
> Because "Checkpoint Size" displayed in status.jsp of SecondaryNameNode, it 
> shuold make modifications.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-10 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-4977:
-

Status: Patch Available  (was: Open)

> Change "Checkpoint Size" of web ui of SecondaryNameNode
> ---
>
> Key: HDFS-4977
> URL: https://issues.apache.org/jira/browse/HDFS-4977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.4-alpha, 3.0.0
>Reporter: Shinichi Yamashita
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-4977.patch
>
>
> The checkpoint of SecondaryNameNode after 2.0 is carried out by 
> "dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
> Because "Checkpoint Size" displayed in status.jsp of SecondaryNameNode, it 
> shuold make modifications.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-10 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-4977:
-

Attachment: HDFS-4977-2.patch

add test code

> Change "Checkpoint Size" of web ui of SecondaryNameNode
> ---
>
> Key: HDFS-4977
> URL: https://issues.apache.org/jira/browse/HDFS-4977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Shinichi Yamashita
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-4977-2.patch, HDFS-4977.patch
>
>
> The checkpoint of SecondaryNameNode after 2.0 is carried out by 
> "dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
> Because "Checkpoint Size" displayed in status.jsp of SecondaryNameNode, it 
> shuold make modifications.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-11 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-4977:
-

Attachment: HDFS-4977.patch

> Change "Checkpoint Size" of web ui of SecondaryNameNode
> ---
>
> Key: HDFS-4977
> URL: https://issues.apache.org/jira/browse/HDFS-4977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Shinichi Yamashita
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-4977-2.patch, HDFS-4977.patch, HDFS-4977.patch
>
>
> The checkpoint of SecondaryNameNode after 2.0 is carried out by 
> "dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
> Because "Checkpoint Size" displayed in status.jsp of SecondaryNameNode, it 
> shuold make modifications.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-12 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13707231#comment-13707231
 ] 

Shinichi Yamashita commented on HDFS-4977:
--

Thank you for the comment.
Users of Hadoop 2.0 (Apache, HDP 2.0, CDH4) may run the SecondaryNameNode 
depending on their requirements, and if they do, they watch its web UI for 
monitoring and management.
They will be misled into thinking that a checkpoint is triggered by the size of 
the edits, as it was in Hadoop 1.0.

> Change "Checkpoint Size" of web ui of SecondaryNameNode
> ---
>
> Key: HDFS-4977
> URL: https://issues.apache.org/jira/browse/HDFS-4977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Shinichi Yamashita
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-4977-2.patch, HDFS-4977.patch, HDFS-4977.patch
>
>
> The checkpoint of SecondaryNameNode after 2.0 is carried out by 
> "dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
> Because "Checkpoint Size" displayed in status.jsp of SecondaryNameNode, it 
> shuold make modifications.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4919) Improve documentation of dfs.permissions.enabled flag.

2013-08-28 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753142#comment-13753142
 ] 

Shinichi Yamashita commented on HDFS-4919:
--

Hi,
The explanation of "dfs.web.ugi" seems to be old as well as "dfs.permissions". 
It should revise both.

> Improve documentation of dfs.permissions.enabled flag.
> --
>
> Key: HDFS-4919
> URL: https://issues.apache.org/jira/browse/HDFS-4919
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>
> The description of dfs.permissions.enabled in hdfs-default.xml does not state 
> that permissions are always checked on certain calls regardless of this 
> configuration.  The HDFS permissions guide still mentions the deprecated 
> dfs.permissions property instead of the currently supported 
> dfs.permissions.enabled.
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Configuration_Parameters

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3818) Allow fsck to accept URIs as paths

2013-08-30 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-3818:
-

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> Allow fsck to accept URIs as paths
> --
>
> Key: HDFS-3818
> URL: https://issues.apache.org/jira/browse/HDFS-3818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Stephen Chu
> Attachments: HDFS-3818.patch
>
>
> Currently, fsck does not accept URIs as paths. 
> {noformat}
> [hdfs@cs-10-20-192-187 ~]# hdfs fsck 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Connecting to namenode via http://cs-10-20-192-187.cloud.cloudera.com:50070
> FSCK started by hdfs (auth:KERBEROS_SSL) from /10.20.192.187 for path 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/ at Thu Aug 16 15:48:42 
> PDT 2012
> FSCK ended at Thu Aug 16 15:48:42 PDT 2012 in 1 milliseconds
> Invalid path name Invalid file name: 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Fsck on path 'hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/' FAILED
> {noformat}
> It'd be useful for fsck to accept URIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3818) Allow fsck to accept URIs as paths

2013-08-30 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-3818:
-

Attachment: HDFS-3818.patch

Hi,
I created a patch so that hdfs fsck can be run not only on a path but also on a 
URI (hdfs://).
I attach the patch.
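The core of the idea (a sketch only, not the attached patch; the real change 
also has to check that the URI's authority matches the target NameNode) is to 
strip the scheme and authority and pass only the path part to the fsck servlet:

{code}
import org.apache.hadoop.fs.Path;

// Sketch of the idea only, not the attached patch.
public class FsckPathArgument {
  // Accept both "/user" and "hdfs://host:8020/user"; the fsck servlet only needs
  // the path part, so strip the scheme and authority when they are present.
  public static String toFsckPath(String arg) {
    return new Path(arg).toUri().getPath();
  }
}
{code}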

> Allow fsck to accept URIs as paths
> --
>
> Key: HDFS-3818
> URL: https://issues.apache.org/jira/browse/HDFS-3818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Stephen Chu
> Attachments: HDFS-3818.patch
>
>
> Currently, fsck does not accept URIs as paths. 
> {noformat}
> [hdfs@cs-10-20-192-187 ~]# hdfs fsck 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Connecting to namenode via http://cs-10-20-192-187.cloud.cloudera.com:50070
> FSCK started by hdfs (auth:KERBEROS_SSL) from /10.20.192.187 for path 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/ at Thu Aug 16 15:48:42 
> PDT 2012
> FSCK ended at Thu Aug 16 15:48:42 PDT 2012 in 1 milliseconds
> Invalid path name Invalid file name: 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Fsck on path 'hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/' FAILED
> {noformat}
> It'd be useful for fsck to accept URIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3818) Allow fsck to accept URIs as paths

2013-08-30 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-3818:
-

Attachment: HDFS-3818.patch

I attach a patch that fixes the findbugs problem and the unit test.

> Allow fsck to accept URIs as paths
> --
>
> Key: HDFS-3818
> URL: https://issues.apache.org/jira/browse/HDFS-3818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Stephen Chu
> Attachments: HDFS-3818.patch, HDFS-3818.patch
>
>
> Currently, fsck does not accept URIs as paths. 
> {noformat}
> [hdfs@cs-10-20-192-187 ~]# hdfs fsck 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Connecting to namenode via http://cs-10-20-192-187.cloud.cloudera.com:50070
> FSCK started by hdfs (auth:KERBEROS_SSL) from /10.20.192.187 for path 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/ at Thu Aug 16 15:48:42 
> PDT 2012
> FSCK ended at Thu Aug 16 15:48:42 PDT 2012 in 1 milliseconds
> Invalid path name Invalid file name: 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Fsck on path 'hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/' FAILED
> {noformat}
> It'd be useful for fsck to accept URIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4919) Improve documentation of dfs.permissions.enabled flag.

2013-08-30 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-4919:
-

Attachment: HDFS-4919.patch

There are cases where these properties are used, and the property names in the 
document should be accurate.
I attach a patch that changes them to the current property names.

> Improve documentation of dfs.permissions.enabled flag.
> --
>
> Key: HDFS-4919
> URL: https://issues.apache.org/jira/browse/HDFS-4919
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
> Attachments: HDFS-4919.patch
>
>
> The description of dfs.permissions.enabled in hdfs-default.xml does not state 
> that permissions are always checked on certain calls regardless of this 
> configuration.  The HDFS permissions guide still mentions the deprecated 
> dfs.permissions property instead of the currently supported 
> dfs.permissions.enabled.
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Configuration_Parameters

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4919) Improve documentation of dfs.permissions.enabled flag.

2013-08-30 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-4919:
-

Target Version/s: 2.1.0-beta, 3.0.0  (was: 3.0.0, 2.1.0-beta)
  Status: Patch Available  (was: Open)

> Improve documentation of dfs.permissions.enabled flag.
> --
>
> Key: HDFS-4919
> URL: https://issues.apache.org/jira/browse/HDFS-4919
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.1.0-beta, 3.0.0
>Reporter: Chris Nauroth
> Attachments: HDFS-4919.patch
>
>
> The description of dfs.permissions.enabled in hdfs-default.xml does not state 
> that permissions are always checked on certain calls regardless of this 
> configuration.  The HDFS permissions guide still mentions the deprecated 
> dfs.permissions property instead of the currently supported 
> dfs.permissions.enabled.
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Configuration_Parameters

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5041) Add the time of last heartbeat to dead server Web UI

2013-09-06 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5041:
-

Attachment: NameNode-dfsnodelist-dead.png

I attach a prototype image of the dead DataNodes list.

> Add the time of last heartbeat to dead server Web UI
> 
>
> Key: HDFS-5041
> URL: https://issues.apache.org/jira/browse/HDFS-5041
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
> Attachments: NameNode-dfsnodelist-dead.png
>
>
> In Live Server page, there is a column 'Last Contact'.
> On the dead server page, a similar column can be added which shows when the 
> last heartbeat came from the respective dead node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5041) Add the time of last heartbeat to dead server Web UI

2013-09-06 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5041:
-

Attachment: HDFS-5041.patch

I attach the patch implementing what I showed in the image.

> Add the time of last heartbeat to dead server Web UI
> 
>
> Key: HDFS-5041
> URL: https://issues.apache.org/jira/browse/HDFS-5041
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HDFS-5041.patch, NameNode-dfsnodelist-dead.png
>
>
> In Live Server page, there is a column 'Last Contact'.
> On the dead server page, a similar column can be added which shows when the 
> last heartbeat came from the respective dead node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5041) Add the time of last heartbeat to dead server Web UI

2013-09-06 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5041:
-

 Target Version/s: 3.0.0
Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Add the time of last heartbeat to dead server Web UI
> 
>
> Key: HDFS-5041
> URL: https://issues.apache.org/jira/browse/HDFS-5041
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HDFS-5041.patch, NameNode-dfsnodelist-dead.png
>
>
> In Live Server page, there is a column 'Last Contact'.
> On the dead server page, a similar column can be added which shows when the 
> last heartbeat came from the respective dead node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5180) Add time taken to process the command to audit log

2013-09-10 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-5180:


 Summary: Add time taken to process the command to audit log
 Key: HDFS-5180
 URL: https://issues.apache.org/jira/browse/HDFS-5180
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita


Command and ugi are output now by audit log of NameNode. But it is not output 
for the processing time of command to audit log.
For example, we must check which command is a problem when a trouble such as 
the slow down occurred in NameNode.
It should add the processing time to audit log to know the abnormal sign.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5180) Add time taken to process the command to audit log

2013-09-11 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13764889#comment-13764889
 ] 

Shinichi Yamashita commented on HDFS-5180:
--

Thank you for your comment. As you said, checking in the RPC layer is a good 
approach, and it should log requests whose processing time is long.
I will consider implementing it in the RPC layer.
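As a rough illustration of the threshold idea (a standalone sketch; the class 
name, the threshold handling, and where it would hook into the RPC layer are 
assumptions, not the actual patch):

{code}
import java.util.concurrent.Callable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Standalone sketch: wrap a request handler and log it only when processing
// took longer than a configured threshold.
public class SlowRequestLogger {
  private static final Log LOG = LogFactory.getLog(SlowRequestLogger.class);

  private final long thresholdMs;

  public SlowRequestLogger(long thresholdMs) {
    this.thresholdMs = thresholdMs;
  }

  public <T> T time(String command, Callable<T> handler) throws Exception {
    long start = System.nanoTime();
    try {
      return handler.call();
    } finally {
      long elapsedMs = (System.nanoTime() - start) / 1000000L;
      if (elapsedMs > thresholdMs) {
        LOG.info("cmd=" + command + " took " + elapsedMs
            + " ms (threshold=" + thresholdMs + " ms)");
      }
    }
  }
}
{code}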

> Add time taken to process the command to audit log
> --
>
> Key: HDFS-5180
> URL: https://issues.apache.org/jira/browse/HDFS-5180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>
> Command and ugi are output now by audit log of NameNode. But it is not output 
> for the processing time of command to audit log.
> For example, we must check which command is a problem when a trouble such as 
> the slow down occurred in NameNode.
> It should add the processing time to audit log to know the abnormal sign.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2016-02-25 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15167074#comment-15167074
 ] 

Shinichi Yamashita commented on HDFS-5040:
--

[~kshukla] Thank you for your message. I will hand this ticket over to you.

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Raghu C Doppalapudi
>Assignee: Shinichi Yamashita
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch
>
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2016-02-25 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5040:
-
Assignee: (was: Shinichi Yamashita)

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Raghu C Doppalapudi
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch
>
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-07 Thread Shinichi Yamashita (JIRA)
Shinichi Yamashita created HDFS-6833:


 Summary: DirectoryScanner should not register a deleting block 
with memory of DataNode
 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita


When a block is deleted on a DataNode, the following messages are usually output:

{code}
2014-08-07 17:53:11,606 INFO 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
 Scheduling blk_1073741825_1001 file 
/hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 for deletion
2014-08-07 17:53:11,617 INFO 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
 Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
/hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
{code}

However, in the current implementation DirectoryScanner may run while the DataNode 
is still deleting the block, and the following messages are output:

{code}
2014-08-07 17:53:30,519 INFO 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
 Scheduling blk_1073741825_1001 file 
/hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 for deletion
2014-08-07 17:53:31,426 INFO 
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
files:0, missing block files:0, missing blocks in memory:1, mismatched blocks:0
2014-08-07 17:53:31,426 WARN 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
  getNumBytes() = 21230663
  getBytesOnDisk()  = 21230663
  getVisibleLength()= 21230663
  getVolume()   = /hadoop/data1/dfs/data/current
  getBlockFile()= 
/hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  unlinked  =false
2014-08-07 17:53:31,531 INFO 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
 Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
/hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
{code}

The information of the block being deleted is registered back into the DataNode's memory.
When the DataNode then sends a block report, the NameNode receives wrong block 
information.

For example, when we recommission a node or change the replication factor, the 
NameNode may delete a valid replica as "ExcessReplicate" because of this problem, 
and "Under-Replicated Blocks" and "Missing Blocks" can occur.

When the DataNode runs DirectoryScanner, it should not register a block that is 
being deleted.
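
One possible direction (a rough sketch only, not a patch, and all names below are 
hypothetical): track which blocks have been handed to the async deletion service, 
and let DirectoryScanner skip those blocks instead of adding them back to memory.

{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper: remember which block IDs are currently scheduled for
// asynchronous deletion so the directory scan can skip them.
public class DeletingBlockTracker {
  private final Set<Long> deletingBlocks =
      Collections.newSetFromMap(new ConcurrentHashMap<Long, Boolean>());

  // Called just before the block file is handed to the async deletion service.
  public void markDeleting(long blockId) {
    deletingBlocks.add(blockId);
  }

  // Called after the block file has actually been removed from disk.
  public void markDeleted(long blockId) {
    deletingBlocks.remove(blockId);
  }

  // DirectoryScanner would check this before re-registering a "missing" replica.
  public boolean isDeleting(long blockId) {
    return deletingBlocks.contains(blockId);
  }
}
{code}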




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-11 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-

Attachment: HDFS-6833.patch

I attach a patch file that adds handling for blocks being deleted.

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-11 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-

Status: Patch Available  (was: Open)

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-11 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-

Attachment: HDFS-6833.patch

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833.patch, HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-13 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-

Attachment: HDFS-6833.patch

I attach a patch file to which I added a test case.
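
As a rough illustration of what such a test checks (this is only a sketch against 
the hypothetical DeletingBlockTracker helper sketched earlier in this issue, not 
the attached test case):

{code}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Hypothetical test sketch: a block marked as "deleting" must be skipped by
// the scanner until its deletion has completed.
public class TestDeletingBlockTracker {
  @Test
  public void deletingBlockIsSkippedUntilDeleted() {
    DeletingBlockTracker tracker = new DeletingBlockTracker();
    long blockId = 1073741825L;

    tracker.markDeleting(blockId);
    assertTrue(tracker.isDeleting(blockId));   // scanner should skip this block

    tracker.markDeleted(blockId);
    assertFalse(tracker.isDeleting(blockId));  // normal handling resumes
  }
}
{code}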

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-13 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-

Attachment: HDFS-6833.patch

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
> HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-13 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-

Attachment: HDFS-6833.patch

I found that the previous patch caused a memory leak in the case of 
directoryScanner = null.
Therefore I attach a patch file that also works when directoryScanner = null.
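
Roughly, the fixed behaviour looks like the sketch below (hypothetical names again, 
reusing the DeletingBlockTracker sketch from the description above; the point is 
only that nothing is recorded when there is no scanner to consume it):

{code}
// Hypothetical sketch of the null-safe variant: the block is recorded as
// "deleting" only when a tracker/scanner actually exists, so no entries can
// accumulate (and leak) while directoryScanner is null.
public class DeletionScheduler {
  private final DeletingBlockTracker tracker;  // may be null when the scanner is disabled

  public DeletionScheduler(DeletingBlockTracker tracker) {
    this.tracker = tracker;
  }

  public void scheduleDeletion(long blockId) {
    if (tracker != null) {
      tracker.markDeleting(blockId);
    }
    // ... hand the block file to the async deletion service here ...
  }
}
{code}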

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
> HDFS-6833.patch, HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-14 Thread Shinichi Yamashita (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098207#comment-14098207
 ] 

Shinichi Yamashita commented on HDFS-6833:
--

Test org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover succeeded 
in my environment.

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, 
> HDFS-6833.patch, HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-08-15 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-

Attachment: HDFS-6833-6.patch

Hi [~yzhangal],

Thank you for your review and comments.
I attach a new patch which reflects your comments.

> DirectoryScanner should not register a deleting block with memory of DataNode
> -
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
> Attachments: HDFS-6833-6.patch, HDFS-6833.patch, HDFS-6833.patch, 
> HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch
>
>
> When a block is deleted in DataNode, the following messages are usually 
> output.
> {code}
> 2014-08-07 17:53:11,606 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:11,617 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, DirectoryScanner may be executed when DataNode deletes the block in 
> the current implementation. And the following messages are output.
> {code}
> 2014-08-07 17:53:30,519 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Scheduling blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>  for deletion
> 2014-08-07 17:53:31,426 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
> files:0, missing block files:0, missing blocks in memory:1, mismatched 
> blocks:0
> 2014-08-07 17:53:31,426 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
>   getNumBytes() = 21230663
>   getBytesOnDisk()  = 21230663
>   getVisibleLength()= 21230663
>   getVolume()   = /hadoop/data1/dfs/data/current
>   getBlockFile()= 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
>   unlinked  =false
> 2014-08-07 17:53:31,531 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
>  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> Deleting block information is registered in DataNode's memory.
> And when DataNode sends a block report, NameNode receives wrong block 
> information.
> For example, when we execute recommission or change the number of 
> replication, NameNode may delete the right block as "ExcessReplicate" by this 
> problem.
> And "Under-Replicated Blocks" and "Missing Blocks" occur.
> When DataNode runs DirectoryScanner, DataNode should not register a deleting 
> block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

