[jira] [Commented] (HDFS-979) FSImage should specify which dirs are missing when refusing to come up

2011-06-27 Thread Jim Plush (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13055699#comment-13055699
 ] 

Jim Plush commented on HDFS-979:


sounds good Steve, I'll refactor items 1, 2 and 3. I do think 3 would probably 
clean up some of the logic so I'll take a crack at it. 

Also, for #1 (code formatting) are you referring to the Sun standards for if 
spacing or is there another doc I should take a look at?

 FSImage should specify which dirs are missing when refusing to come up
 --

 Key: HDFS-979
 URL: https://issues.apache.org/jira/browse/HDFS-979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Assignee: Jim Plush
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-979-take1.txt, HDFS-979-take2.txt


 When {{FSImage}} can't come up because it has either no data dirs or no edit 
 dirs, it tells me this
 {code}
 java.io.IOException: All specified directories are not accessible or do not 
 exist.
 {code}
 What it doesn't do is say which of the two attributes is missing. This would 
 be beneficial to anyone trying to track down the problem. Also, I don't think 
 the message is correct. It's bailing out because {{dataDirs.size() == 0 || 
 editsDirs.size() == 0}}, i.e. because a list is empty - not because the dirs 
 aren't there, as there hasn't been any validation yet.
 More useful would be
 # Explicit mention of which attributes are null
 # Declare that this is because they are not in the config

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1723) quota errors messages should use the same scale

2011-06-26 Thread Jim Plush (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13055129#comment-13055129
 ] 

Jim Plush commented on HDFS-1723:
-

good catch, I'll update the patch and re-submit

 quota errors messages should use the same scale
 ---

 Key: HDFS-1723
 URL: https://issues.apache.org/jira/browse/HDFS-1723
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Allen Wittenauer
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1723-take1.txt, HDFS-1723-take2.txt


 A typical error message looks like this:
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
 of /dir is exceeded: quota=3298534883328 diskspace consumed=5246.0g
 Since the two values are in different scales and one is replicated vs. not 
 replicated (I think), this isn't very easy for the user to understand.





[jira] [Commented] (HDFS-929) DFSClient#getBlockSize is unused

2011-06-26 Thread Jim Plush (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13055130#comment-13055130
 ] 

Jim Plush commented on HDFS-929:


apologies, I read your comment "Maybe you can mark it as deprecated?" as 
meaning you wanted it marked deprecated. 

 DFSClient#getBlockSize is unused
 

 Key: HDFS-929
 URL: https://issues.apache.org/jira/browse/HDFS-929
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Eli Collins
Assignee: Jim Plush
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-929-take1.txt


 DFSClient#getBlockSize is unused. Since it's a public class internal to HDFS, 
 can we just remove it? If not, then we should add a unit test.





[jira] [Updated] (HDFS-1723) quota errors messages should use the same scale

2011-06-26 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1723:


Attachment: HDFS-1723-take3.txt

refactoring based on Aaron's comments regarding removing the 
NSQuotaExceededException from the patch as it's not required for the fix.

 quota errors messages should use the same scale
 ---

 Key: HDFS-1723
 URL: https://issues.apache.org/jira/browse/HDFS-1723
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Allen Wittenauer
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1723-take1.txt, HDFS-1723-take2.txt, 
 HDFS-1723-take3.txt


 A typical error message looks like this:
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
 of /dir is exceeded: quota=3298534883328 diskspace consumed=5246.0g
 Since the two values are in different scales and one is replicated vs. not 
 replicated (I think), this isn't very easy for the user to understand.





[jira] [Commented] (HDFS-979) FSImage should specify which dirs are missing when refusing to come up

2011-06-26 Thread Jim Plush (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13055154#comment-13055154
 ] 

Jim Plush commented on HDFS-979:


There seem to be several other places where a generic message like this is 
displayed...


➺ ack --java "All specified directories are not accessible"

hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
134:  "All specified directories are not accessible or do not exist.");

hdfs/src/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
188:  "All specified directories are not accessible or do not exist.");

hdfs/src/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
201:  "All specified directories are not accessible or do not exist.");


 FSImage should specify which dirs are missing when refusing to come up
 --

 Key: HDFS-979
 URL: https://issues.apache.org/jira/browse/HDFS-979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Priority: Minor

 When {{FSImage}} can't come up because it has either no data dirs or no edit 
 dirs, it tells me this
 {code}
 java.io.IOException: All specified directories are not accessible or do not 
 exist.
 {code}
 What it doesn't do is say which of the two attributes is missing. This would 
 be beneficial to anyone trying to track down the problem. Also, I don't think 
 the message is correct. It's bailing out because {{dataDirs.size() == 0 || 
 editsDirs.size() == 0}}, i.e. because a list is empty - not because the dirs 
 aren't there, as there hasn't been any validation yet.
 More useful would be
 # Explicit mention of which attributes are null
 # Declare that this is because they are not in the config





[jira] [Updated] (HDFS-979) FSImage should specify which dirs are missing when refusing to come up

2011-06-26 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-979:
---

Attachment: HDFS-979-take1.txt

Adding a verbose (the more verbose the more helpful?) exception message should 
either the dataDirs or the editDirs show a size of 0 in the 
recoverTransitionRead method. It will also loop through the directories and, 
if they use a file:// scheme, check to make sure those directories actually do 
exist on disk. 

For example, if the edits dirs list was empty and the directory did not exist 
locally, the message would be: 

All specified directories are not accessible or do not exist. Specifically:  
FSImage reports 0 NameNode edit dirs. The config value of 
'dfs.namenode.edits.dir' is: file:///opt/hadoop-doesnotexist/dfs/name.  
Directory: /opt/hadoop-doesnotexist/dfs/name DOES NOT EXIST.
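A minimal sketch of the shape of such a check (class and method names here are hypothetical, not the actual patch contents; see HDFS-979-take1.txt for the real change):

```java
import java.io.File;
import java.util.List;

// Hypothetical sketch of building a verbose missing-dirs message.
// Names and message wording are illustrative; the real change is in the patch.
public class DirCheck {
    static String describeMissingEditDirs(List<String> editDirs, String configValue) {
        StringBuilder msg = new StringBuilder(
            "All specified directories are not accessible or do not exist.");
        if (editDirs.isEmpty()) {
            // Say explicitly which attribute is empty and what the config holds.
            msg.append(" Specifically: FSImage reports 0 NameNode edit dirs.")
               .append(" The config value of 'dfs.namenode.edits.dir' is: ")
               .append(configValue).append('.');
        }
        for (String dir : editDirs) {
            // For local paths, also report directories missing on disk.
            if (!new File(dir).exists()) {
                msg.append(" Directory: ").append(dir).append(" DOES NOT EXIST.");
            }
        }
        return msg.toString();
    }
}
```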

 FSImage should specify which dirs are missing when refusing to come up
 --

 Key: HDFS-979
 URL: https://issues.apache.org/jira/browse/HDFS-979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HDFS-979-take1.txt


 When {{FSImage}} can't come up because it has either no data dirs or no edit 
 dirs, it tells me this
 {code}
 java.io.IOException: All specified directories are not accessible or do not 
 exist.
 {code}
 What it doesn't do is say which of the two attributes is missing. This would 
 be beneficial to anyone trying to track down the problem. Also, I don't think 
 the message is correct. It's bailing out because {{dataDirs.size() == 0 || 
 editsDirs.size() == 0}}, i.e. because a list is empty - not because the dirs 
 aren't there, as there hasn't been any validation yet.
 More useful would be
 # Explicit mention of which attributes are null
 # Declare that this is because they are not in the config





[jira] [Updated] (HDFS-979) FSImage should specify which dirs are missing when refusing to come up

2011-06-26 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-979:
---

Fix Version/s: 0.23.0
 Assignee: Jim Plush
 Release Note: Added more verbose error logging in FSImage should there be 
an issue getting the size of the NameNode data or edits directories.
   Status: Patch Available  (was: Open)

 FSImage should specify which dirs are missing when refusing to come up
 --

 Key: HDFS-979
 URL: https://issues.apache.org/jira/browse/HDFS-979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Assignee: Jim Plush
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-979-take1.txt


 When {{FSImage}} can't come up because it has either no data dirs or no edit 
 dirs, it tells me this
 {code}
 java.io.IOException: All specified directories are not accessible or do not 
 exist.
 {code}
 What it doesn't do is say which of the two attributes is missing. This would 
 be beneficial to anyone trying to track down the problem. Also, I don't think 
 the message is correct. It's bailing out because {{dataDirs.size() == 0 || 
 editsDirs.size() == 0}}, i.e. because a list is empty - not because the dirs 
 aren't there, as there hasn't been any validation yet.
 More useful would be
 # Explicit mention of which attributes are null
 # Declare that this is because they are not in the config





[jira] [Updated] (HDFS-979) FSImage should specify which dirs are missing when refusing to come up

2011-06-26 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-979:
---

Attachment: HDFS-979-take2.txt

Updated to use a StringBuilder in the loop after complaints from the Findbugs 
report.
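For context, the Findbugs warning in question flags string concatenation with += inside a loop; a sketch of the two forms (illustrative, not the patch itself):

```java
// Illustrative only: the pattern Findbugs flags versus the
// StringBuilder form the take2 patch switches to.
public class LoopConcat {
    // Flagged: each += copies the entire accumulated string, O(n^2) overall.
    static String concatInLoop(String[] dirs) {
        String msg = "";
        for (String d : dirs) {
            msg += " Directory: " + d + " DOES NOT EXIST.";
        }
        return msg;
    }

    // Clean: one StringBuilder, appended to in place.
    static String buildInLoop(String[] dirs) {
        StringBuilder msg = new StringBuilder();
        for (String d : dirs) {
            msg.append(" Directory: ").append(d).append(" DOES NOT EXIST.");
        }
        return msg.toString();
    }
}
```

Both produce the same string; only the allocation behavior differs.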

 FSImage should specify which dirs are missing when refusing to come up
 --

 Key: HDFS-979
 URL: https://issues.apache.org/jira/browse/HDFS-979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Assignee: Jim Plush
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-979-take1.txt, HDFS-979-take2.txt


 When {{FSImage}} can't come up as either it has no data or edit dirs, it 
 tells me this
 {code}
 java.io.IOException: All specified directories are not accessible or do not 
 exist.
 {code}
 What it doesn't do is say which of the two attributes are missing. This would 
 be beneficial to anyone trying to track down the problem. Also, I don't think 
 the message is correct. It's bailing out because dataDirs.size() == 0 || 
 editsDirs.size() == 0 , because a list is empty -not because the dirs aren't 
 there, as there hasn't been any validation yet.
 More useful would be
 # Explicit mention of which attributes are null
 # Declare that this is because they are not in the config





[jira] [Commented] (HDFS-1026) Quota checks fail for small files and quotas

2011-06-25 Thread Jim Plush (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13054958#comment-13054958
 ] 

Jim Plush commented on HDFS-1026:
-

Were you thinking of adding JavaDoc comments and updating the verifyQuota 
method in FSDirectory.java? If not, if you want to give a helpful pointer I'll 
be happy to make the fix.

 Quota checks fail for small files and quotas
 

 Key: HDFS-1026
 URL: https://issues.apache.org/jira/browse/HDFS-1026
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, name-node
Affects Versions: 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.22.0
Reporter: Eli Collins
Priority: Blocker

 If a directory has a quota less than blockSize * numReplicas then you can't 
 add a file to it, even if the file size is less than the quota. This is 
 because FSDirectory#addBlock updates the count assuming at least one block is 
 written in full. We don't know how much of the block will be written when 
 addBlock is called and supporting such small quotas is not important so 
 perhaps we should document this and log an error message instead of making 
 small (blockSize * numReplicas) quotas work.
 {code}
 // check quota limits and updated space consumed
 updateCount(inodes, inodes.length-1, 0, 
 fileINode.getPreferredBlockSize()*fileINode.getReplication(), true);
 {code}
 You can reproduce with the following commands:
 {code}
 $ dd if=/dev/zero of=temp bs=1000 count=64
 $ hadoop fs -mkdir /user/eli/dir
 $ hdfs dfsadmin -setSpaceQuota 191M /user/eli/dir
 $ hadoop fs -put temp /user/eli/dir  # Causes DSQuotaExceededException
 {code}
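To see why the put fails, here is the reservation arithmetic the description implies, assuming a 64 MB default block size and replication factor 3 (both assumptions, not stated in the report):

```java
// Worked example of the up-front reservation described above, assuming
// a 64 MB block size and replication 3 (hypothetical values).
public class QuotaMath {
    static long reservedBytes(long preferredBlockSize, short replication) {
        // FSDirectory#addBlock charges a full block per replica up front,
        // regardless of how much of the block will actually be written.
        return preferredBlockSize * replication;
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // 64 MB
        long quota = 191L * 1024 * 1024;      // 191M, as in the repro above
        long reserved = reservedBytes(blockSize, (short) 3); // 192 MB
        // 192 MB reserved > 191 MB quota, so even a 64 KB file is rejected.
        System.out.println(reserved > quota); // true
    }
}
```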





[jira] [Updated] (HDFS-929) DFSClient#getBlockSize is unused

2011-06-25 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-929:
---

Attachment: HDFS-929-take1.txt

Added the @Deprecated tag to the method as suggested.
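For anyone following along, the change amounts to the standard deprecation pattern below (illustrative, not the actual DFSClient source):

```java
// Illustrative only: the standard deprecation pattern applied in the
// patch, not the actual DFSClient source.
public class DeprecationExample {
    /**
     * @deprecated Unused; kept only for compatibility until it can be
     *             removed outright.
     */
    @Deprecated
    public long getBlockSize(String path) {
        return 64L * 1024 * 1024; // placeholder value for the sketch
    }
}
```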

 DFSClient#getBlockSize is unused
 

 Key: HDFS-929
 URL: https://issues.apache.org/jira/browse/HDFS-929
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Eli Collins
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-929-take1.txt


 DFSClient#getBlockSize is unused. Since it's a public class internal to HDFS, 
 can we just remove it? If not, then we should add a unit test.





[jira] [Updated] (HDFS-929) DFSClient#getBlockSize is unused

2011-06-25 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-929:
---

Fix Version/s: 0.23.0
 Assignee: Jim Plush
Affects Version/s: 0.21.0
 Release Note: labeled the method @Deprecated as suggested by dhruba
   Status: Patch Available  (was: Open)

 DFSClient#getBlockSize is unused
 

 Key: HDFS-929
 URL: https://issues.apache.org/jira/browse/HDFS-929
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Eli Collins
Assignee: Jim Plush
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-929-take1.txt


 DFSClient#getBlockSize is unused. Since it's a public class internal to HDFS, 
 can we just remove it? If not, then we should add a unit test.





[jira] [Updated] (HDFS-1321) If service port and main port are the same, there is no clear log message explaining the issue.

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1321:


Attachment: HDFS-1321-take3.txt

Adding in Aaron's comments regarding the formatting and coding guideline changes

 If service port and main port are the same, there is no clear log message 
 explaining the issue.
 ---

 Key: HDFS-1321
 URL: https://issues.apache.org/jira/browse/HDFS-1321
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: gary murry
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1321-take2.txt, HDFS-1321-take3.txt, HDFS-1321.patch


 With the introduction of a service port to the namenode, there is now a 
 chance for user error to set the two ports equal.  This will cause the 
 namenode to fail to start up.  It would be nice if there was a log message 
 explaining the port clash.  Or just treat things as if the service port was 
 not specified. 





[jira] [Commented] (HDFS-1321) If service port and main port are the same, there is no clear log message explaining the issue.

2011-06-24 Thread Jim Plush (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13054676#comment-13054676
 ] 

Jim Plush commented on HDFS-1321:
-

thanks for the info Aaron, those are all definitely reasonable requirements and 
I should have caught those on the first round. Let me know if this patch takes 
care of those issues. 


 If service port and main port are the same, there is no clear log message 
 explaining the issue.
 ---

 Key: HDFS-1321
 URL: https://issues.apache.org/jira/browse/HDFS-1321
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: gary murry
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1321-take2.txt, HDFS-1321-take3.txt, HDFS-1321.patch


 With the introduction of a service port to the namenode, there is now a 
 chance for user error to set the two ports equal.  This will cause the 
 namenode to fail to start up.  It would be nice if there was a log message 
 explaining the port clash.  Or just treat things as if the service port was 
 not specified. 





[jira] [Updated] (HDFS-1381) MiniDFSCluster documentation refers to out-of-date configuration parameters

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1381:


Attachment: HDFS-1381-take1.txt

Replaced the name.dir and data.dir javadoc mentions with their fully 
qualified ConfigKey names, 
DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY and
DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY

 MiniDFSCluster documentation refers to out-of-date configuration parameters
 ---

 Key: HDFS-1381
 URL: https://issues.apache.org/jira/browse/HDFS-1381
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Jakob Homan
Assignee: Jim Plush
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1381-take1.txt


 The javadoc for MiniDFSCluster makes repeated references to setting 
 dfs.name.dir and dfs.data.dir.  These should be replaced with references to 
 DFSConfigKeys' DFS_NAMENODE_NAME_DIR_KEY and DFS_DATANODE_DATA_DIR_KEY, 
 respectively.  The old values are deprecated in DFSConfigKeys, but we should 
 switch to the new values where ever we can.
 Also, a quick search of the code shows that TestDFSStorageStateRecovery.java 
 and UpgradeUtilities.java should be updated as well.





[jira] [Updated] (HDFS-1321) If service port and main port are the same, there is no clear log message explaining the issue.

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1321:


Attachment: HDFS-1321-take4.txt

Fourth time's the charm? :)
OK, hopefully this addresses your last two issues: the assert fail was removed 
from the test and the log line now reads as: 

dfs.namenode.rpc-address (localhost/127.0.0.1:9000) and 
dfs.namenode.http-address (/127.0.0.1:9000) configuration keys are bound to the 
same port, unable to start NameNode. Port: 9000
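The check behind that message can be sketched like this (hypothetical class and method names; the real change lives in the NameNode startup path in the take4 patch):

```java
import java.net.InetSocketAddress;

// Hypothetical sketch of the port-clash check; names and wiring are
// illustrative, not the actual NameNode startup code.
public class PortClashCheck {
    static String checkPorts(InetSocketAddress rpcAddr, InetSocketAddress httpAddr) {
        if (rpcAddr.getPort() == httpAddr.getPort()) {
            return "dfs.namenode.rpc-address (" + rpcAddr + ") and "
                + "dfs.namenode.http-address (" + httpAddr + ") configuration "
                + "keys are bound to the same port, unable to start NameNode. "
                + "Port: " + rpcAddr.getPort();
        }
        return null; // no clash, start normally
    }
}
```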

 If service port and main port are the same, there is no clear log message 
 explaining the issue.
 ---

 Key: HDFS-1321
 URL: https://issues.apache.org/jira/browse/HDFS-1321
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: gary murry
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1321-take2.txt, HDFS-1321-take3.txt, 
 HDFS-1321-take4.txt, HDFS-1321.patch


 With the introduction of a service port to the namenode, there is now a 
 chance for user error to set the two ports equal.  This will cause the 
 namenode to fail to start up.  It would be nice if there was a log message 
 explaining the port clash.  Or just treat things as if the service port was 
 not specified. 





[jira] [Updated] (HDFS-1381) MiniDFSCluster documentation refers to out-of-date configuration parameters

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1381:


Attachment: HDFS-1381-take2.txt

Updated with Aaron's recommended changes, using {@link} tags to reference the 
Class#Member
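The suggested form looks roughly like this (a standalone stand-in; the real patch points at DFSConfigKeys itself):

```java
// Standalone stand-in for the javadoc style used in take2: reference
// the constant via {@link Class#Member} instead of quoting the
// deprecated key name in prose. Not the actual MiniDFSCluster source.
public class ConfigKeysDoc {
    /** Stand-in for DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY. */
    public static final String DFS_NAMENODE_NAME_DIR_KEY = "dfs.namenode.name.dir";

    /**
     * Returns the key to configure instead of the deprecated dfs.name.dir;
     * see {@link ConfigKeysDoc#DFS_NAMENODE_NAME_DIR_KEY}.
     */
    public static String nameDirKey() {
        return DFS_NAMENODE_NAME_DIR_KEY;
    }
}
```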

 MiniDFSCluster documentation refers to out-of-date configuration parameters
 ---

 Key: HDFS-1381
 URL: https://issues.apache.org/jira/browse/HDFS-1381
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Jakob Homan
Assignee: Jim Plush
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1381-take1.txt, HDFS-1381-take2.txt


 The javadoc for MiniDFSCluster makes repeated references to setting 
 dfs.name.dir and dfs.data.dir.  These should be replaced with references to 
 DFSConfigKeys' DFS_NAMENODE_NAME_DIR_KEY and DFS_DATANODE_DATA_DIR_KEY, 
 respectively.  The old values are deprecated in DFSConfigKeys, but we should 
 switch to the new values where ever we can.
 Also, a quick search of the code shows that TestDFSStorageStateRecovery.java 
 and UpgradeUtilities.java should be updated as well.





[jira] [Commented] (HDFS-1676) DateFormat.getDateTimeInstance() is very expensive, we can cache it to improve performance

2011-06-24 Thread Jim Plush (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13054776#comment-13054776
 ] 

Jim Plush commented on HDFS-1676:
-

Wouldn't this defeat the purpose of having the following reporting line? 
The output looks like:

System.out.println("Time Stamp   Iteration#  Bytes Already Moved  " +
    "Bytes Left To Move  Bytes Being Moved");

where the time stamp is the current iteration's time stamp; to cache it would 
show an inaccurate time stamp in the report. Considering how heavy the other 
parts of this are, you would probably need 10,000 nodes to see a difference in 
performance times.
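For reference, what HDFS-1676 proposes is caching only the formatter, not the formatted string, so the reported time stamp can still be fresh each iteration (a sketch, not the actual Balancer code):

```java
import java.text.DateFormat;
import java.util.Date;

// Sketch of the caching HDFS-1676 suggests: the expensive
// DateFormat.getDateTimeInstance() call is hoisted out of the loop,
// while a fresh Date is still formatted on every iteration.
// Not the actual Balancer code; note DateFormat is also not thread-safe.
public class TimestampLoop {
    static String stamp(DateFormat df, Date when) {
        return df.format(when);
    }

    public static void main(String[] args) {
        DateFormat df = DateFormat.getDateTimeInstance(); // built once
        for (int iteration = 0; iteration < 3; iteration++) {
            // Current time each pass, so the report stays accurate.
            System.out.println(stamp(df, new Date()) + "  iteration " + iteration);
        }
    }
}
```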

 DateFormat.getDateTimeInstance() is very expensive, we can cache it to 
 improve performance
 --

 Key: HDFS-1676
 URL: https://issues.apache.org/jira/browse/HDFS-1676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer
Affects Versions: 0.21.0
Reporter: Xiaoming Shi
  Labels: newbie

 In the file:
 ./hadoop-0.21.0/hdfs/src/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
   line:1520
 In the while loop, DateFormat.getDateTimeInstance() is called in each 
 iteration. We can cache the result by moving it outside the loop or adding a 
 class member.
 This is similar to the Apache bug 
 https://issues.apache.org/bugzilla/show_bug.cgi?id=48778 





[jira] [Assigned] (HDFS-1723) quota errors messages should use the same scale

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush reassigned HDFS-1723:
---

Assignee: Jim Plush

 quota errors messages should use the same scale
 ---

 Key: HDFS-1723
 URL: https://issues.apache.org/jira/browse/HDFS-1723
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Jim Plush
Priority: Minor
  Labels: newbie

 A typical error message looks like this:
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
 of /dir is exceeded: quota=3298534883328 diskspace consumed=5246.0g
 Since the two values are in different scales and one is replicated vs. not 
 replicated (I think), this isn't very easy for the user to understand.





[jira] [Updated] (HDFS-1723) quota errors messages should use the same scale

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1723:


Fix Version/s: 0.23.0
Affects Version/s: 0.21.0
 Release Note: Updated the Quota exceptions to now use human readable 
output.
   Status: Patch Available  (was: Open)

 quota errors messages should use the same scale
 ---

 Key: HDFS-1723
 URL: https://issues.apache.org/jira/browse/HDFS-1723
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Allen Wittenauer
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1723-take1.txt


 A typical error message looks like this:
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
 of /dir is exceeded: quota=3298534883328 diskspace consumed=5246.0g
 Since the two values are in different scales and one is replicated vs. not 
 replicated (I think), this isn't very easy for the user to understand.





[jira] [Updated] (HDFS-1723) quota errors messages should use the same scale

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1723:


Attachment: HDFS-1723-take1.txt

Updated NSQuotaExceededException and DSQuotaExceededException to use human 
readable output values. Also added a new test to the TestQuota.java file to 
test for the new human readable values.
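A standalone sketch of the kind of conversion involved (Hadoop has its own helper in org.apache.hadoop.util.StringUtils; this version is illustrative only):

```java
import java.util.Locale;

// Illustrative stand-in for the human-readable formatting the patch
// applies; Hadoop's own helper lives in org.apache.hadoop.util.StringUtils.
public class HumanReadable {
    static String format(long bytes) {
        if (bytes < 1024) {
            return bytes + " B";
        }
        String[] units = {"KB", "MB", "GB", "TB", "PB"};
        double value = bytes;
        int unit = -1;
        while (value >= 1024 && unit < units.length - 1) {
            value /= 1024;
            unit++;
        }
        return String.format(Locale.ROOT, "%.1f %s", value, units[unit]);
    }

    public static void main(String[] args) {
        // The raw quota from the report: 3298534883328 bytes is 3 TB.
        System.out.println(format(3298534883328L)); // 3.0 TB
    }
}
```

With this, quota and consumed values print in the same scale instead of raw bytes vs. "5246.0g".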

 quota errors messages should use the same scale
 ---

 Key: HDFS-1723
 URL: https://issues.apache.org/jira/browse/HDFS-1723
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Allen Wittenauer
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1723-take1.txt


 A typical error message looks like this:
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
 of /dir is exceeded: quota=3298534883328 diskspace consumed=5246.0g
 Since the two values are in different scales and one is replicated vs. not 
 replicated (I think), this isn't very easy for the user to understand.





[jira] [Updated] (HDFS-1723) quota errors messages should use the same scale

2011-06-24 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1723:


Attachment: HDFS-1723-take2.txt

looks like there was a hard-coded check in the testHDFSConf.xml file that 
looked for the actual integer for the quota. I updated this xml file to look 
for the human readable numbers instead. The test was also updated to account 
for having a path in the error message.

 quota errors messages should use the same scale
 ---

 Key: HDFS-1723
 URL: https://issues.apache.org/jira/browse/HDFS-1723
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Allen Wittenauer
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1723-take1.txt, HDFS-1723-take2.txt


 A typical error message looks like this:
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
 org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
 of /dir is exceeded: quota=3298534883328 diskspace consumed=5246.0g
 Since the two values are in different scales and one is replicated vs. not 
 replicated (I think), this isn't very easy for the user to understand.





[jira] [Updated] (HDFS-1321) If service port and main port are the same, there is no clear log message explaining the issue.

2011-06-23 Thread Jim Plush (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Plush updated HDFS-1321:


Attachment: HDFS-1321-take2.txt

did a hard reset and re-applied the changes to try and get a cleaner patch file

 If service port and main port are the same, there is no clear log message 
 explaining the issue.
 ---

 Key: HDFS-1321
 URL: https://issues.apache.org/jira/browse/HDFS-1321
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: gary murry
Assignee: Jim Plush
Priority: Minor
  Labels: newbie
 Fix For: 0.23.0

 Attachments: HDFS-1321-take2.txt, HDFS-1321.patch


 With the introduction of a service port to the namenode, there is now a 
 chance for user error to set the two ports equal.  This will cause the 
 namenode to fail to start up.  It would be nice if there was a log message 
 explaining the port clash.  Or just treat things as if the service port was 
 not specified. 
