[jira] [Commented] (HDFS-941) Datanode xceiver protocol should allow reuse of a connection

2011-06-10 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13047039#comment-13047039
 ] 

Nigel Daley commented on HDFS-941:
--

+1 for 0.22.

 Datanode xceiver protocol should allow reuse of a connection
 

 Key: HDFS-941
 URL: https://issues.apache.org/jira/browse/HDFS-941
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node, hdfs client
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: bc Wong
 Attachments: 941.22.txt, 941.22.txt, HDFS-941-1.patch, 
 HDFS-941-2.patch, HDFS-941-3.patch, HDFS-941-3.patch, HDFS-941-4.patch, 
 HDFS-941-5.patch, HDFS-941-6.22.patch, HDFS-941-6.patch, HDFS-941-6.patch, 
 HDFS-941-6.patch, fix-close-delta.txt, hdfs-941.txt, hdfs-941.txt, 
 hdfs-941.txt, hdfs-941.txt, hdfs941-1.png


 Right now each connection into the datanode xceiver only processes one 
 operation.
 In the case that an operation leaves the stream in a well-defined state (eg a 
 client reads to the end of a block successfully) the same connection could be 
 reused for a second operation. This should improve random read performance 
 significantly.
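The reuse described above can be sketched as a read-dispatch loop over one connection; the op codes and names below are illustrative assumptions, not the actual DataXceiver protocol:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class XceiverReuseSketch {
    // One xceiver thread serving many operations over the same connection:
    // keep reading op codes until the client closes the stream. Reuse is only
    // safe when the previous op left the stream in a well-defined state.
    static int serveConnection(DataInputStream in) throws IOException {
        int opsHandled = 0;
        while (true) {
            byte op;
            try {
                op = in.readByte();  // next op code, e.g. a hypothetical OP_READ_BLOCK
            } catch (EOFException e) {
                break;               // client closed the connection cleanly
            }
            // ... dispatch(op) would go here ...
            opsHandled++;
        }
        return opsHandled;
    }
}
```

Reading to a clean end of block leaves the stream positioned at the next op code, which is what makes a second operation on the same connection possible.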

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-1825) Remove thriftfs contrib

2011-05-18 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035720#comment-13035720
 ] 

Nigel Daley commented on HDFS-1825:
---

Just removed contrib/thriftfs from Jira and added it to the Attic wiki: 
http://wiki.apache.org/hadoop/Attic

 Remove thriftfs contrib
 ---

 Key: HDFS-1825
 URL: https://issues.apache.org/jira/browse/HDFS-1825
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Nigel Daley
Assignee: Nigel Daley
 Fix For: 0.22.0

 Attachments: HDFS-1825.patch


 As per vote on general@ 
 (http://mail-archives.apache.org/mod_mbox/hadoop-general/201102.mbox/%3cef44cfe2-692f-4956-8b33-d125d05e2...@mac.com%3E)
  thriftfs can be removed: 
 svn remove hdfs/trunk/src/contrib/thriftfs
 and wiki updated:
 http://wiki.apache.org/hadoop/Attic



[jira] [Commented] (HDFS-1919) Upgrade to federated namespace fails

2011-05-11 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13032250#comment-13032250
 ] 

Nigel Daley commented on HDFS-1919:
---

Was this caused by the federation branch merge?  What testing was done on that 
before merge?

 Upgrade to federated namespace fails
 

 Key: HDFS-1919
 URL: https://issues.apache.org/jira/browse/HDFS-1919
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.23.0

 Attachments: hdfs-1919.txt


 I formatted a namenode running off 0.22 branch, and trying to start it on 
 trunk yields:
 org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
 /tmp/name1 is in an inconsistent state: file VERSION has clusterID missing.
 It looks like 0.22 has LAYOUT_VERSION -33, but trunk has 
 LAST_PRE_FEDERATION_LAYOUT_VERSION = -30, which is incorrect.
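A minimal sketch of the gating the report describes; the constant's value is from the report, while the comparison itself is an assumed simplification of the real check. Layout versions are negative and decrease as features are added:

```java
public class LayoutVersionSketch {
    // HDFS layout versions are negative and *decrease* over time, so anything
    // more negative than the pre-federation cutoff is treated as post-federation
    // and is expected to carry a clusterID in its VERSION file.
    static final int LAST_PRE_FEDERATION_LAYOUT_VERSION = -30; // value from the report

    static boolean requiresClusterId(int layoutVersion) {
        return layoutVersion < LAST_PRE_FEDERATION_LAYOUT_VERSION;
    }
}
```

With the cutoff at -30, a 0.22 image at layout version -33 looks post-federation and is expected to carry a clusterID it never wrote, which matches the InconsistentFSStateException above.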



[jira] [Commented] (HDFS-1741) Provide a minimal pom file to allow integration of HDFS into Sonar analysis

2011-04-30 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13027413#comment-13027413
 ] 

Nigel Daley commented on HDFS-1741:
---

Cos, once this is committed I'm happy to help with the infra work. 

 Provide a minimal pom file to allow integration of HDFS into Sonar analysis
 ---

 Key: HDFS-1741
 URL: https://issues.apache.org/jira/browse/HDFS-1741
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: HDFS-1741.patch


 In order to use the Sonar facility a project has to be either built by Maven or 
 have a special pom 'wrapper'. Let's provide a minimal one to allow just that. 



[jira] [Updated] (HDFS-1825) Remove thriftfs contrib

2011-04-23 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1825:
--

Attachment: HDFS-1825.patch

Patch that removes thriftfs and its build references.

 Remove thriftfs contrib
 ---

 Key: HDFS-1825
 URL: https://issues.apache.org/jira/browse/HDFS-1825
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Nigel Daley
 Fix For: 0.23.0

 Attachments: HDFS-1825.patch


 As per vote on general@ 
 (http://mail-archives.apache.org/mod_mbox/hadoop-general/201102.mbox/%3cef44cfe2-692f-4956-8b33-d125d05e2...@mac.com%3E)
  thriftfs can be removed: 
 svn remove hdfs/trunk/src/contrib/thriftfs
 and wiki updated:
 http://wiki.apache.org/hadoop/Attic



[jira] [Updated] (HDFS-1825) Remove thriftfs contrib

2011-04-23 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1825:
--

Assignee: Nigel Daley
Release Note: Removed thriftfs contrib component.
  Status: Patch Available  (was: Open)

 Remove thriftfs contrib
 ---

 Key: HDFS-1825
 URL: https://issues.apache.org/jira/browse/HDFS-1825
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Nigel Daley
Assignee: Nigel Daley
 Fix For: 0.23.0

 Attachments: HDFS-1825.patch


 As per vote on general@ 
 (http://mail-archives.apache.org/mod_mbox/hadoop-general/201102.mbox/%3cef44cfe2-692f-4956-8b33-d125d05e2...@mac.com%3E)
  thriftfs can be removed: 
 svn remove hdfs/trunk/src/contrib/thriftfs
 and wiki updated:
 http://wiki.apache.org/hadoop/Attic



[jira] [Commented] (HDFS-1825) Remove thriftfs contrib

2011-04-23 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13023620#comment-13023620
 ] 

Nigel Daley commented on HDFS-1825:
---

Thanks for looking at this patch Eli.  I don't see a problem running ant clean 
using the current patch.  Also, doesn't your suggestion then remove clean 
support for fuse-dfs?

 Remove thriftfs contrib
 ---

 Key: HDFS-1825
 URL: https://issues.apache.org/jira/browse/HDFS-1825
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Nigel Daley
Assignee: Nigel Daley
 Fix For: 0.23.0

 Attachments: HDFS-1825.patch


 As per vote on general@ 
 (http://mail-archives.apache.org/mod_mbox/hadoop-general/201102.mbox/%3cef44cfe2-692f-4956-8b33-d125d05e2...@mac.com%3E)
  thriftfs can be removed: 
 svn remove hdfs/trunk/src/contrib/thriftfs
 and wiki updated:
 http://wiki.apache.org/hadoop/Attic



[jira] [Commented] (HDFS-1822) Editlog opcodes overlap between 20 security and later releases

2011-04-19 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13021970#comment-13021970
 ] 

Nigel Daley commented on HDFS-1822:
---

{quote}
Add code in 22 and trunk to throw an error if upgrade is from 203 or older 2xx 
releases and editlog is not empty,...
{quote}
Doesn't seem like this keeps branch-specific hackery confined to the branch.

 Editlog opcodes overlap between 20 security and later releases
 --

 Key: HDFS-1822
 URL: https://issues.apache.org/jira/browse/HDFS-1822
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0, 0.22.0, 0.23.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker
 Fix For: 0.22.0, 0.23.0

 Attachments: HDFS-1822.patch


 The same opcodes are used for different operations between 0.20.security, 0.22 
 and 0.23. This results in failure to load editlogs on later releases, especially 
 during upgrades.



[jira] [Commented] (HDFS-988) saveNamespace can corrupt edits log

2011-04-09 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13017997#comment-13017997
 ] 

Nigel Daley commented on HDFS-988:
--

Todd, any update on this for 0.22?

 saveNamespace can corrupt edits log
 ---

 Key: HDFS-988
 URL: https://issues.apache.org/jira/browse/HDFS-988
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20-append, 0.21.0, 0.22.0
Reporter: dhruba borthakur
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.20-append, 0.22.0

 Attachments: hdfs-988.txt, saveNamespace.txt, 
 saveNamespace_20-append.patch


 The administrator puts the namenode in safemode and then issues the 
 savenamespace command. This can corrupt the edits log. The problem is that 
 when the NN enters safemode, there could still be pending logSyncs occurring 
 from other threads. Now, the saveNamespace command, when executed, would save 
 an edits log with partial writes. I have seen this happen on 0.20.
 https://issues.apache.org/jira/browse/HDFS-909?focusedCommentId=12828853&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12828853
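A deterministic toy model of the corruption described above, assuming nothing about the real FSEditLog internals: a snapshot taken while another thread's sync has flushed only part of a record ends mid-record.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class TornEditSketch {
    // Simplified model of the race: if saveNamespace copies the edits file
    // while a concurrent logSync has flushed only half of a record, the
    // saved copy ends mid-record, i.e. the log is corrupt.
    static byte[] snapshotDuringPartialSync() {
        ByteArrayOutputStream editsFile = new ByteArrayOutputStream();
        byte[] record = "OP_ADD /user/foo".getBytes(StandardCharsets.UTF_8);
        editsFile.write(record, 0, record.length / 2); // in-flight partial flush
        return editsFile.toByteArray();                // saveNamespace copies now
    }
}
```

The saved bytes end halfway through the record, which is exactly the partial-write corruption the comment describes.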



[jira] [Commented] (HDFS-1505) saveNamespace appears to succeed even if all directories fail to save

2011-04-09 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13017998#comment-13017998
 ] 

Nigel Daley commented on HDFS-1505:
---

Jakob, any update on this for 0.22?

 saveNamespace appears to succeed even if all directories fail to save
 -

 Key: HDFS-1505
 URL: https://issues.apache.org/jira/browse/HDFS-1505
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Todd Lipcon
Assignee: Jakob Homan
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hdfs-1505-test.txt


 After HDFS-1071, saveNamespace now appears to succeed even if all of the 
 individual directories failed to save.



[jira] [Updated] (HDFS-1823) start-dfs.sh script fails if HADOOP_HOME is not set

2011-04-09 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1823:
--

Priority: Blocker  (was: Major)

Marking blocker since its Hadoop and MapReduce counterparts are blockers.

 start-dfs.sh script fails if HADOOP_HOME is not set
 ---

 Key: HDFS-1823
 URL: https://issues.apache.org/jira/browse/HDFS-1823
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.21.0
Reporter: Tom White
Assignee: Tom White
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1823.patch


 HDFS portion of HADOOP-6953



[jira] [Created] (HDFS-1825) Remove thriftfs contrib

2011-04-09 Thread Nigel Daley (JIRA)
Remove thriftfs contrib
---

 Key: HDFS-1825
 URL: https://issues.apache.org/jira/browse/HDFS-1825
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Nigel Daley
 Fix For: 0.23.0


As per vote on general@ 
(http://mail-archives.apache.org/mod_mbox/hadoop-general/201102.mbox/%3cef44cfe2-692f-4956-8b33-d125d05e2...@mac.com%3E)
 thriftfs can be removed: 
svn remove hdfs/trunk/src/contrib/thriftfs
and wiki updated:
http://wiki.apache.org/hadoop/Attic



[jira] Commented: (HDFS-1602) Fix HADOOP-4885 as it doesn't work as expected.

2011-02-09 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12992376#comment-12992376
 ] 

Nigel Daley commented on HDFS-1602:
---

FWIW, TestBlockRecovery.testErrorReplicas failed (timed out). This is in the 
same class as the fixed test I think. Search console for failure: 
https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/537/console

Re-running build again.


 Fix HADOOP-4885 as it doesn't work as expected.
 ---

 Key: HDFS-1602
 URL: https://issues.apache.org/jira/browse/HDFS-1602
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0, 0.23.0
Reporter: Konstantin Boudnik
 Attachments: HDFS-1602-1.patch, HDFS-1602.patch


 NameNode storage restore functionality doesn't work (as HDFS-903 
 demonstrated). This needs to be either disabled, or removed, or fixed. This 
 feature also fails HDFS-1496





[jira] Commented: (HDFS-884) DataNode makeInstance should report the directory list when failing to start up

2011-01-12 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12981024#action_12981024
 ] 

Nigel Daley commented on HDFS-884:
--

Konstantin, if you're trying to kick a new patch build for this you no longer 
move it to Open and back to Patch Available.  Instead, you must upload a 
new patch.  Or, if you have permission, you can kick 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/ and enter the issue 
number.

 DataNode makeInstance should report the directory list when failing to start 
 up
 ---

 Key: HDFS-884
 URL: https://issues.apache.org/jira/browse/HDFS-884
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-884.patch, HDFS-884.patch, InvalidDirs.patch, 
 InvalidDirs.patch


 When {{Datanode.makeInstance()}} cannot work with one of the directories in 
 dfs.data.dir, it logs this at warn level (while losing the stack trace). 
 It should include the nested exception for better troubleshooting. Then, when 
 all dirs in the list fail, an exception is thrown, but this exception does 
 not include the list of directories. It should list the absolute path of 
 every missing/failing directory, so that whoever sees the exception can see 
 where to start looking for problems: either the filesystem or the 
 configuration. 
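One hedged sketch of the requested improvement; the method name and message wording are invented for illustration, not the committed patch. The idea is simply to gather every failing directory and put each absolute path into the exception:

```java
import java.io.File;
import java.io.IOException;
import java.util.List;

public class DataDirErrorSketch {
    // When every configured directory fails, the thrown exception should name
    // each failing path absolutely, so the reader knows where to look.
    static IOException allDirsFailed(List<File> badDirs) {
        StringBuilder msg = new StringBuilder("All directories in dfs.data.dir are invalid:");
        for (File dir : badDirs) {
            msg.append(' ').append('"').append(dir.getAbsolutePath()).append('"');
        }
        return new IOException(msg.toString());
    }
}
```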

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-835) TestDefaultNameNodePort.testGetAddressFromConf fails with an unsupported format error

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-835:
-

Priority: Minor  (was: Blocker)

Aaron, moving this to minor (the same priority as the issue it depends on).

 TestDefaultNameNodePort.testGetAddressFromConf fails with an unsupported 
 format error 
 ---

 Key: HDFS-835
 URL: https://issues.apache.org/jira/browse/HDFS-835
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: gary murry
Assignee: Aaron Kimball
Priority: Minor
 Attachments: HDFS-835.patch


 The current build fails on the TestDefaultNameNodePort.testGetAddressFromConf 
 unit test with the following error:
  FileSystem name 'foo' is provided in an unsupported format. (Try 
 'hdfs://foo' instead?)
 http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-Hdfs-trunk/171/




[jira] Updated: (HDFS-1529) Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1529:
--

Fix Version/s: 0.22.0

Blocker for 0.22

 Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock
 

 Key: HDFS-1529
 URL: https://issues.apache.org/jira/browse/HDFS-1529
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hdfs-1529.txt, hdfs-1529.txt, hdfs-1529.txt, Test.java


 In HDFS-895 the handling of interrupts during hflush/close was changed to 
 preserve interrupt status. This ends up creating an infinite loop in 
 waitForAckedSeqno if the waiting thread gets interrupted, since Object.wait() 
 has a strange semantic that it doesn't give up the lock even momentarily if 
 the thread is already in interrupted state at the beginning of the call.
 We should decide what the correct behavior is here - if a thread is 
 interrupted while it's calling hflush() or close() should we (a) throw an 
 exception, perhaps InterruptedIOException (b) ignore, or (c) wait for the 
 flush to finish but preserve interrupt status on exit?
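The spin described above is easy to reproduce in isolation. This sketch is a simplification of the waitForAckedSeqno loop, not the actual client code: once the interrupt status is set, every wait() throws immediately, and re-interrupting to preserve status guarantees the next one does too.

```java
public class InterruptedWaitSketch {
    // If the interrupt status is already set when Object.wait() is called,
    // wait() throws InterruptedException immediately. Catching it and
    // re-interrupting ("preserving status") makes every subsequent wait()
    // throw immediately as well, so the loop spins instead of blocking.
    static int immediateThrows(int iterations) {
        final Object lock = new Object();
        Thread.currentThread().interrupt();      // caller was interrupted mid-hflush
        int thrown = 0;
        synchronized (lock) {
            for (int i = 0; i < iterations; i++) {   // real code: while (!acked)
                try {
                    lock.wait(1000);             // would block 1s, but throws at once
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // option (c): preserve status
                    thrown++;
                }
            }
        }
        Thread.interrupted();                    // clear status before returning
        return thrown;
    }
}
```

Three one-second waits complete in well under a second because none of them ever blocks.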




[jira] Updated: (HDFS-1186) 0.20: DNs should interrupt writers at start of recovery

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1186:
--

Fix Version/s: 0.20-append

Likely only a blocker for 0.20 append branch.

 0.20: DNs should interrupt writers at start of recovery
 ---

 Key: HDFS-1186
 URL: https://issues.apache.org/jira/browse/HDFS-1186
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20-append
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.20-append

 Attachments: hdfs-1186.txt


 When block recovery starts (eg due to NN recovering lease) it needs to 
 interrupt any writers currently writing to those blocks. Otherwise, an old 
 writer (who hasn't realized he lost his lease) can continue to write+sync to 
 the blocks, and thus recovery ends up truncating data that has been sync()ed.




[jira] Updated: (HDFS-988) saveNamespace can corrupt edits log

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-988:
-

Fix Version/s: 0.22.0

This is committed to 0.20-append but needs a unit test for trunk.

 saveNamespace can corrupt edits log
 ---

 Key: HDFS-988
 URL: https://issues.apache.org/jira/browse/HDFS-988
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20-append, 0.21.0, 0.22.0
Reporter: dhruba borthakur
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.20-append, 0.22.0

 Attachments: hdfs-988.txt, saveNamespace.txt, 
 saveNamespace_20-append.patch


 The administrator puts the namenode in safemode and then issues the 
 savenamespace command. This can corrupt the edits log. The problem is that 
 when the NN enters safemode, there could still be pending logSyncs occurring 
 from other threads. Now, the saveNamespace command, when executed, would save 
 an edits log with partial writes. I have seen this happen on 0.20.
 https://issues.apache.org/jira/browse/HDFS-909?focusedCommentId=12828853&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12828853




[jira] Commented: (HDFS-1554) New semantics for recoverLease

2011-01-10 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12979725#action_12979725
 ] 

Nigel Daley commented on HDFS-1554:
---

Hairong, can you please set the Fix Version correctly?  Thx.

 New semantics for recoverLease
 --

 Key: HDFS-1554
 URL: https://issues.apache.org/jira/browse/HDFS-1554
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.20-append, 0.22.0, 0.23.0

 Attachments: appendRecoverLease.patch, appendRecoverLease1.patch


 The current recoverLease API implemented in append 0.20 aims to provide a lighter 
 weight way (compared to using create/append) to trigger a file's soft lease 
 expiration. Based on the use cases of both HBase and Scribe, it could have 
 stronger semantics: revoking the file's lease, thus starting lease recovery 
 immediately.
 Also I'd like to port this recoverLease API to HDFS 0.22 and trunk since 
 HBase is moving to HDFS 0.22.




[jira] Updated: (HDFS-1125) Removing a datanode (failed or decommissioned) should not require a namenode restart

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1125:
--

 Priority: Critical  (was: Blocker)
Fix Version/s: (was: 0.22.0)
   Issue Type: Improvement  (was: Bug)

At this point I don't see how this 6 month old unassigned issue is a blocker 
for 0.22.  I also think this is an improvement, not a bug.  Removing from 0.22 
blocker list.

 Removing a datanode (failed or decommissioned) should not require a namenode 
 restart
 

 Key: HDFS-1125
 URL: https://issues.apache.org/jira/browse/HDFS-1125
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.20.2
Reporter: Alex Loddengaard
Priority: Critical

 I've heard of several Hadoop users using dfsadmin -report to monitor the 
 number of dead nodes, and alert if that number is not 0.  This mechanism 
 tends to work pretty well, except when a node is decommissioned or fails, 
 because then the namenode requires a restart for said node to be entirely 
 removed from HDFS.  More details here:
 http://markmail.org/search/?q=decommissioned%20node%20showing%20up%20ad%20dead%20node%20in%20web%20based%09interface%20to%20namenode#query:decommissioned%20node%20showing%20up%20ad%20dead%20node%20in%20web%20based%09interface%20to%20namenode+page:1+mid:7gwqwdkobgfuszb4+state:results
 Removal from the exclude file and a refresh should get rid of the dead node.




[jira] Updated: (HDFS-1505) saveNamespace appears to succeed even if all directories fail to save

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1505:
--

Fix Version/s: 0.22.0

Hi Jakob, are you working on a patch for this for 0.22?  If so, many thanks!  
I'm going to mark this for 0.22.

 saveNamespace appears to succeed even if all directories fail to save
 -

 Key: HDFS-1505
 URL: https://issues.apache.org/jira/browse/HDFS-1505
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Todd Lipcon
Assignee: Jakob Homan
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hdfs-1505-test.txt


 After HDFS-1071, saveNamespace now appears to succeed even if all of the 
 individual directories failed to save.




[jira] Updated: (HDFS-671) Documentation change for updated configuration keys.

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-671:
-

Priority: Blocker  (was: Major)

Seems like a blocker for 0.22.

 Documentation change for updated configuration keys.
 

 Key: HDFS-671
 URL: https://issues.apache.org/jira/browse/HDFS-671
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Priority: Blocker
 Fix For: 0.22.0


  HDFS-531, HADOOP-6233 and HDFS-631 have resulted in changes in several 
 config keys. The hadoop documentation needs to be updated to reflect those 
 changes.




[jira] Commented: (HDFS-884) DataNode makeInstance should report the directory list when failing to start up

2011-01-10 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12979817#action_12979817
 ] 

Nigel Daley commented on HDFS-884:
--

From the patch:

{code}
+try {
+  dn = DataNode.createDataNode(new String[]{}, conf);
+} catch(IOException e) {
+  // expecting exception here
+}
+if(dn != null) dn.shutdown();
{code}

Shouldn't there be a fail() call after the dn assignment line?
If you're updating the patch, then dn.shutdown() should be on its own line.
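A sketch of what the suggested change would look like, with a stub standing in for DataNode.createDataNode and a plain AssertionError in place of JUnit's fail() so the snippet stands alone:

```java
import java.io.IOException;

public class ExpectedFailurePattern {
    // The shape being asked for: if the call unexpectedly succeeds, the test
    // must fail instead of silently passing. riskyStartup() is a stand-in for
    // DataNode.createDataNode(...) with invalid data directories.
    static void expectStartupFailure() {
        try {
            riskyStartup();
            throw new AssertionError("expected IOException was not thrown"); // the missing fail()
        } catch (IOException e) {
            // expected: startup must reject the bad configuration
        }
    }

    static void riskyStartup() throws IOException {
        throw new IOException("simulated invalid dfs.data.dir");
    }
}
```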

 DataNode makeInstance should report the directory list when failing to start 
 up
 ---

 Key: HDFS-884
 URL: https://issues.apache.org/jira/browse/HDFS-884
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-884.patch, HDFS-884.patch, InvalidDirs.patch, 
 InvalidDirs.patch


 When {{Datanode.makeInstance()}} cannot work with one of the directories in 
 dfs.data.dir, it logs this at warn level (while losing the stack trace). 
 It should include the nested exception for better troubleshooting. Then, when 
 all dirs in the list fail, an exception is thrown, but this exception does 
 not include the list of directories. It should list the absolute path of 
 every missing/failing directory, so that whoever sees the exception can see 
 where to start looking for problems: either the filesystem or the 
 configuration. 




[jira] Updated: (HDFS-1331) dfs -test should work like /bin/test

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1331:
--

Fix Version/s: (was: 0.22.0)
   Issue Type: Improvement  (was: Bug)

Changing to improvement and removing 0.22 fix version.

 dfs -test should work like /bin/test
 

 Key: HDFS-1331
 URL: https://issues.apache.org/jira/browse/HDFS-1331
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.20.2
Reporter: Allen Wittenauer
Priority: Minor

 hadoop dfs -test doesn't act like its shell equivalent, making it difficult 
 to actually use if you are used to the real test command:
 hadoop:
 $ hadoop dfs -test -d /nonexist; echo $?
 test: File does not exist: /nonexist
 255
 shell:
 $ test -d /nonexist; echo $?
 1
 a) Why is it spitting out a message? Even so, why is it saying file instead 
 of directory when I used -d?
 b) Why is the return code 255? I realize this is documented as '0' if true.  
 But docs basically say the value is undefined if it isn't.
 c) where is -f?
 d) Why is empty -z instead of -s ?  Was it a misunderstanding of the man page?




[jira] Updated: (HDFS-1333) S3 File Permissions

2011-01-10 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1333:
--

Priority: Critical  (was: Blocker)

Doesn't seem a blocker for any release.  Downgrading to Critical.

 S3 File Permissions
 ---

 Key: HDFS-1333
 URL: https://issues.apache.org/jira/browse/HDFS-1333
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0
 Environment: Hadoop cluster using 3 small Amazon EC2 machines and the 
 S3FileSystem.
 Hadoop compiled from latest trunk: 0.22.0-SNAPSHOT
 core-site:
 fs.default.name=s3://my-s3-bucket
 fs.s3.awsAccessKeyId=[key id omitted]
 fs.s3.awsSecretAccessKey=[secret key omitted]
 hadoop.tmp.dir=/mnt/hadoop.tmp.dir
 hdfs-site: empty
 mapred-site:
 mapred.job.tracker=[domU-XX-XX-XX-XX-XX-XX.compute-1.internal:9001]
 mapred.map.tasks=6
 mapred.reduce.tasks=6
Reporter: Danny Leshem
Priority: Critical

 Until recently I was using 0.20.2 and everything was OK. Now I'm using the 
 latest trunk 0.22.0-SNAPSHOT and getting the following thrown:
 Exception in thread "main" java.io.IOException: The ownership/permissions on 
 the staging directory 
 s3://my-s3-bucket/mnt/hadoop.tmp.dir/mapred/staging/root/.staging is not as 
 expected. It is owned by  and permissions are rwxrwxrwx. The directory must 
 be owned by the submitter root or by root and permissions must be rwx--
 at
 org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:107)
 at
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:312)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:961)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:977)
 at com.mycompany.MyJob.runJob(MyJob.java:153)
 at com.mycompany.MyJob.run(MyJob.java:177)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at com.mycompany.MyOtherJob.runJob(MyOtherJob.java:62)
 at com.mycompany.MyOtherJob.run(MyOtherJob.java:112)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at com.mycompany.MyOtherJob.main(MyOtherJob.java:117)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:187)
 (The empty owner in "It is owned by  and permissions are rwxrwxrwx" is not a 
 mistake; seems like the empty string is printed there)




[jira] Commented: (HDFS-405) Several unit tests failing on Windows frequently

2011-01-07 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12979039#action_12979039
 ] 

Nigel Daley commented on HDFS-405:
--

Looks like no one cares about test failures on Windows.  Can we close this as 
"won't fix"?

 Several unit tests failing on Windows frequently
 

 Key: HDFS-405
 URL: https://issues.apache.org/jira/browse/HDFS-405
 Project: Hadoop HDFS
  Issue Type: Test
 Environment: Windows
Reporter: Ramya R
Priority: Minor

 This issue is similar to HADOOP-5114. A huge number of unit tests are failing 
 on Windows on branch > 18 consistently. 0.21 is showing the maximum number of 
 failures. Failures on other branches are a subset of failures observed in 
 0.21. Below is the list of failures observed on 0.21.
 * java.io.IOException: Job failed!
 ** TestJobName - testComplexNameWithRegex
 ** TestJobStatusPersistency - testNonPersistency, testPersistency
 ** TestJobSysDirWithDFS - testWithDFS
 ** TestKillCompletedJob - testKillCompJob
 ** TestMiniMRClasspath - testClassPath, testExternalWritable
 ** TestMiniMRDFSCaching - testWithDFS
 ** TestMiniMRDFSSort - testMapReduceSort, testMapReduceSortWithJvmReuse
 ** TestMiniMRLocalFS - testWithLocal
 ** TestMiniMRWithDFS - testWithDFS, testWithDFSWithDefaultPort
 ** TestMiniMRWithDFSWithDistinctUsers - testDistinctUsers
 ** TestMultipleLevelCaching - testMultiLevelCaching
 ** TestQueueManager - testAllEnabledACLForJobSubmission, 
 testEnabledACLForNonDefaultQueue,  testUserEnabledACLForJobSubmission,  
 testGroupsEnabledACLForJobSubmission
 ** TestRackAwareTaskPlacement - testTaskPlacement
 ** TestReduceFetch - testReduceFromDisk, testReduceFromPartialMem, 
 testReduceFromMem
 ** TestSpecialCharactersInOutputPath - testJobWithDFS
 ** TestTTMemoryReporting - testDefaultMemoryValues, testConfiguredMemoryValues
 ** TestTrackerBlacklistAcrossJobs - testBlacklistAcrossJobs
 ** TestUserDefinedCounters - testMapReduceJob
 ** TestDBJob - testRun
 ** TestServiceLevelAuthorization - testServiceLevelAuthorization
 ** TestNoDefaultsJobConf - testNoDefaults
 ** TestBadRecords - testBadMapRed
 ** TestClusterMRNotification - testMR
 ** TestClusterMapReduceTestCase - testMapReduce, testMapReduceRestarting
 ** TestCommandLineJobSubmission - testJobShell
 ** TestCompressedEmptyMapOutputs - 
 testMapReduceSortWithCompressedEmptyMapOutputs
 ** TestCustomOutputCommitter - testCommitter
 ** TestJavaSerialization - testMapReduceJob, testWriteToSequencefile
 ** TestJobClient - testGetCounter, testJobList, testChangingJobPriority
 ** TestJobName - testComplexName
 * java.lang.IllegalArgumentException: Pathname /path from Cpath is not a 
 valid DFS filename.
 ** TestJobQueueInformation - testJobQueues
 ** TestJobInProgress - testRunningTaskCount
 ** TestJobTrackerRestart - testJobTrackerRestart
 * Timeout
 ** TestKillSubProcesses - testJobKill
 ** TestMiniMRMapRedDebugScript - testMapDebugScript
 ** TestControlledMapReduceJob - testControlledMapReduceJob
 ** TestJobInProgressListener - testJobQueueChanges
 ** TestJobKillAndFail - testJobFailAndKill
 * junit.framework.AssertionFailedError
 ** TestMRServerPorts - testJobTrackerPorts, testTaskTrackerPorts
 ** TestMiniMRTaskTempDir - testTaskTempDir
 ** TestTaskFail - testWithDFS
 ** TestTaskLimits - testTaskLimits
 ** TestMapReduceLocal - testWithLocal
 ** TestCLI - testAll
 ** TestHarFileSystem - testArchives
 ** TestTrash - testTrash, testNonDefaultFS
 ** TestHDFSServerPorts - testNameNodePorts, testDataNodePorts, 
 testSecondaryNodePorts
 ** TestHDFSTrash - testNonDefaultFS
 ** TestFileOutputFormat - testCustomFile
 * org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.security.authorize.AuthorizationException: 
 java.security.AccessControlException: access denied 
 ConnectionPermission(org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol)
 ** TestServiceLevelAuthorization - testRefresh
 * junit.framework.ComparisonFailure
 ** TestDistCh - testDistCh
 * java.io.FileNotFoundException
 ** TestCopyFiles - testMapCount




[jira] Commented: (HDFS-1552) Remove java5 dependencies from build

2010-12-27 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975439#action_12975439
 ] 

Nigel Daley commented on HDFS-1552:
---

+1.  Looks good.

 Remove java5 dependencies from build
 

 Key: HDFS-1552
 URL: https://issues.apache.org/jira/browse/HDFS-1552
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.21.1
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: HDFS-1552.patch


 As the first short-term step let's remove JDK5 dependency from build(s)




[jira] Updated: (HDFS-1511) 98 Release Audit warnings on trunk and branch-0.22

2010-12-21 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1511:
--

Attachment: HDFS-1511.patch

I just committed the extra 3 lines to get the Hudson patch process working.  
Attaching the new complete patch.

 98 Release Audit warnings on trunk and branch-0.22
 --

 Key: HDFS-1511
 URL: https://issues.apache.org/jira/browse/HDFS-1511
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Nigel Daley
Assignee: Jakob Homan
Priority: Blocker
 Fix For: 0.22.0, 0.23.0

 Attachments: HDFS-1511.patch, HDFS-1511.patch, HDFS-1511.patch, 
 releaseauditWarnings.txt


 There are 98 release audit warnings on trunk. See attached txt file. These 
 must be fixed or filtered out to get back to a reasonably small number of 
 warnings. The OK_RELEASEAUDIT_WARNINGS property in 
 src/test/test-patch.properties should also be set appropriately in the patch 
 that fixes this issue.




[jira] Updated: (HDFS-1551) fix the pom template's version

2010-12-21 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1551:
--

Hadoop Flags: [Reviewed]

+1.  Looks good.

 fix the pom template's version
 --

 Key: HDFS-1551
 URL: https://issues.apache.org/jira/browse/HDFS-1551
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: hdfs-1551.patch


 pom templates in the ivy folder should be updated to the latest version of 
 the hadoop-common dependencies.




[jira] Updated: (HDFS-1496) TestStorageRestore is failing after HDFS-903 fix

2010-12-20 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1496:
--

 Priority: Blocker  (was: Major)
Affects Version/s: 0.23.0
   0.22.0

Causing test failure.  Marking as a blocker for 0.22.

 TestStorageRestore is failing after HDFS-903 fix
 

 Key: HDFS-1496
 URL: https://issues.apache.org/jira/browse/HDFS-1496
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.22.0, 0.23.0
Reporter: Konstantin Boudnik
Assignee: Hairong Kuang
Priority: Blocker

 TestStorageRestore seems to be failing after HDFS-903 commit. Running git 
 bisect confirms it.




[jira] Commented: (HDFS-1511) 98 Release Audit warnings on trunk and branch-0.22

2010-12-20 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12973442#action_12973442
 ] 

Nigel Daley commented on HDFS-1511:
---

Ran a test run of HDFS precommit testing. Hudson is still seeing these 4 issues 
after this patch:

[rat:report]  !? 
trunk/build/hadoop-hdfs-1051334_HDFS-1534_PATCH-12465968/src/docs/src/documentation/resources/images/FI-framework.odg

[rat:report]  !? 
trunk/build/hadoop-hdfs-1051334_HDFS-1534_PATCH-12465968/src/docs/src/documentation/resources/images/hdfsarchitecture.odg

[rat:report]  !? 
trunk/build/hadoop-hdfs-1051334_HDFS-1534_PATCH-12465968/src/docs/src/documentation/resources/images/hdfsdatanodes.odg

[rat:report]  !? 
trunk/build/hadoop-hdfs-1051334_HDFS-1534_PATCH-12465968/src/test/hdfs/org/apache/hadoop/hdfs/hadoop-14-dfs-dir.tgz


 98 Release Audit warnings on trunk and branch-0.22
 --

 Key: HDFS-1511
 URL: https://issues.apache.org/jira/browse/HDFS-1511
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Nigel Daley
Assignee: Jakob Homan
Priority: Blocker
 Fix For: 0.22.0, 0.23.0

 Attachments: HDFS-1511.patch, HDFS-1511.patch, 
 releaseauditWarnings.txt


 There are 98 release audit warnings on trunk. See attached txt file. These 
 must be fixed or filtered out to get back to a reasonably small number of 
 warnings. The OK_RELEASEAUDIT_WARNINGS property in 
 src/test/test-patch.properties should also be set appropriately in the patch 
 that fixes this issue.




[jira] Commented: (HDFS-1511) 98 Release Audit warnings on trunk and branch-0.22

2010-12-20 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12973465#action_12973465
 ] 

Nigel Daley commented on HDFS-1511:
---

I suspect it's because releaseaudit is run in a workspace that has already had 
tar and doc (forrest) run in it. Like Cos said, I think we just need to add 
*.odg and *.tgz to the exclude list and we should be golden.
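
A rough sketch of what the adjusted fileset might look like (the surrounding target and exact patterns are assumptions, not the committed patch):

```xml
<!-- Hypothetical sketch of the releaseaudit exclude list with the two
     extra patterns suggested above; exact names may differ in the patch. -->
<fileset dir="${dist.dir}">
  <exclude name="CHANGES.txt"/>
  <exclude name="docs/"/>
  <exclude name="lib/jdiff/"/>
  <exclude name="**/*.odg"/>
  <exclude name="**/*.tgz"/>
</fileset>
```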

 98 Release Audit warnings on trunk and branch-0.22
 --

 Key: HDFS-1511
 URL: https://issues.apache.org/jira/browse/HDFS-1511
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Nigel Daley
Assignee: Jakob Homan
Priority: Blocker
 Fix For: 0.22.0, 0.23.0

 Attachments: HDFS-1511.patch, HDFS-1511.patch, 
 releaseauditWarnings.txt


 There are 98 release audit warnings on trunk. See attached txt file. These 
 must be fixed or filtered out to get back to a reasonably small number of 
 warnings. The OK_RELEASEAUDIT_WARNINGS property in 
 src/test/test-patch.properties should also be set appropriately in the patch 
 that fixes this issue.




[jira] Commented: (HDFS-1536) Improve HDFS WebUI

2010-12-13 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12971180#action_12971180
 ] 

Nigel Daley commented on HDFS-1536:
---

Hairong, can you add a tooltip that describes the new meaning of this value in 
context?  Something like:

{code}
private String colTxt(String title) {
  return "<td id=\"col" + ++colNum + "\" title=\"" + title + "\">";
}
...
colTxt("Excludes missing blocks.")
{code}

We should probably do this for other fields too, but that's a separate jira.

 Improve HDFS WebUI
 --

 Key: HDFS-1536
 URL: https://issues.apache.org/jira/browse/HDFS-1536
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.23.0

 Attachments: missingBlocksWebUI.patch


 1. Make the missing blocks count accurate;
 2. Make the under replicated blocks count excluding missing blocks.




[jira] Updated: (HDFS-1510) Add test-patch.properties required by test-patch.sh

2010-11-18 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1510:
--

Attachment: HDFS-1510.patch

 Add test-patch.properties required by test-patch.sh
 ---

 Key: HDFS-1510
 URL: https://issues.apache.org/jira/browse/HDFS-1510
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Nigel Daley
Assignee: Nigel Daley
Priority: Minor
 Attachments: HDFS-1510.patch


 Related to HADOOP-7042.




[jira] Created: (HDFS-1510) Add test-patch.properties required by test-patch.sh

2010-11-18 Thread Nigel Daley (JIRA)
Add test-patch.properties required by test-patch.sh
---

 Key: HDFS-1510
 URL: https://issues.apache.org/jira/browse/HDFS-1510
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Nigel Daley
Assignee: Nigel Daley
Priority: Minor
 Attachments: HDFS-1510.patch

Related to HADOOP-7042.




[jira] Resolved: (HDFS-1510) Add test-patch.properties required by test-patch.sh

2010-11-18 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley resolved HDFS-1510.
---

   Resolution: Fixed
Fix Version/s: 0.23.0

I just committed this to get HDFS patch testing back on track.

 Add test-patch.properties required by test-patch.sh
 ---

 Key: HDFS-1510
 URL: https://issues.apache.org/jira/browse/HDFS-1510
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Nigel Daley
Assignee: Nigel Daley
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-1510.patch


 Related to HADOOP-7042.




[jira] Commented: (HDFS-1511) 98 Release Audit warnings on trunk and branch-0.22

2010-11-18 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933705#action_12933705
 ] 

Nigel Daley commented on HDFS-1511:
---

Looks like the HDFS releaseaudit target doesn't filter out much:

  <fileset dir="${dist.dir}">
    <exclude name="CHANGES.txt"/>
    <exclude name="docs/"/>
    <exclude name="lib/jdiff/"/>
  </fileset>

The thriftfs gen-* directories seem like obvious candidates to filter out.  Any 
others?


 98 Release Audit warnings on trunk and branch-0.22
 --

 Key: HDFS-1511
 URL: https://issues.apache.org/jira/browse/HDFS-1511
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Nigel Daley
Priority: Blocker
 Fix For: 0.22.0, 0.23.0

 Attachments: releaseauditWarnings.txt


 There are 98 release audit warnings on trunk. See attached txt file. These 
 must be fixed or filtered out to get back to a reasonably small number of 
 warnings. The OK_RELEASEAUDIT_WARNINGS property in 
 src/test/test-patch.properties should also be set appropriately in the patch 
 that fixes this issue.




[jira] Updated: (HDFS-1035) Generate Eclipse's .classpath file from Ivy config

2010-11-08 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1035:
--

   Resolution: Fixed
Fix Version/s: 0.22.0
   Status: Resolved  (was: Patch Available)

I just committed this and updated the wiki doc for Eclipse.

 Generate Eclipse's .classpath file from Ivy config
 --

 Key: HDFS-1035
 URL: https://issues.apache.org/jira/browse/HDFS-1035
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Nigel Daley
 Fix For: 0.22.0

 Attachments: HDFS-1035.patch, HDFS-1035.patch


 HDFS companion issue for HADOOP-6407.




[jira] Updated: (HDFS-1035) Generate Eclipse's .classpath file from Ivy config

2010-10-25 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1035:
--

Attachment: HDFS-1035.patch

Cos, here's a new patch.  It doesn't include contrib components in the 
classpath, as there doesn't seem to be a good way to resolve the contrib 
component classpaths via ivy from this top-level build.xml file.  Not sure if 
excluding the contribs (hdfsproxy and thriftfs) from the Eclipse classpath 
matters to anyone.  Comments?

 Generate Eclipse's .classpath file from Ivy config
 --

 Key: HDFS-1035
 URL: https://issues.apache.org/jira/browse/HDFS-1035
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Nigel Daley
 Attachments: HDFS-1035.patch, HDFS-1035.patch


 HDFS companion issue for HADOOP-6407.




[jira] Commented: (HDFS-1035) Generate Eclipse's .classpath file from Ivy config

2010-10-25 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12924651#action_12924651
 ] 

Nigel Daley commented on HDFS-1035:
---

Ah, you need to pass -E to the patch command when applying the patch so that 
the emptied file is removed.  This is how all Hadoop patches should be applied. 
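
For anyone unfamiliar with that flag, here is a small self-contained demonstration (hypothetical file names, GNU patch assumed) of what -E does when a hunk empties a file:

```shell
# Demonstrate patch -E: when a hunk leaves a file empty, -E deletes the
# file instead of leaving a zero-length one behind. Names are hypothetical.
cd "$(mktemp -d)"
printf 'stale template content\n' > .classpath    # file the patch empties
: > empty.ref                                     # empty reference copy
diff -u .classpath empty.ref > fix.patch || true  # diff exits 1 on difference
patch -E .classpath < fix.patch                   # apply; -E removes the emptied file
```

Without -E the same patch would leave an empty .classpath sitting in the workspace, which is exactly the symptom described above.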

 Generate Eclipse's .classpath file from Ivy config
 --

 Key: HDFS-1035
 URL: https://issues.apache.org/jira/browse/HDFS-1035
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Nigel Daley
 Attachments: HDFS-1035.patch, HDFS-1035.patch


 HDFS companion issue for HADOOP-6407.




[jira] Commented: (HDFS-1035) Generate Eclipse's .classpath file from Ivy config

2010-10-22 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12923790#action_12923790
 ] 

Nigel Daley commented on HDFS-1035:
---

Good call on needing to update the document.  The reason you didn't notice any 
change is that it produces the exact same .classpath file from ivy deps vs the 
one you previously had from the template file.  If you blow away your 
.classpath file and run this, it should create the exact same file.

I agree that the rest of the comments are nits.  Feel free to fix them and 
upload a new patch.  Otherwise I'll commit this and MAPREDUCE-1592 this weekend.

 Generate Eclipse's .classpath file from Ivy config
 --

 Key: HDFS-1035
 URL: https://issues.apache.org/jira/browse/HDFS-1035
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Nigel Daley
 Attachments: HDFS-1035.patch


 HDFS companion issue for HADOOP-6407.




[jira] Updated: (HDFS-1035) Generate Eclipse's .classpath file from Ivy config

2010-10-19 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1035:
--

Attachment: HDFS-1035.patch

Here's a patch for HDFS.

 Generate Eclipse's .classpath file from Ivy config
 --

 Key: HDFS-1035
 URL: https://issues.apache.org/jira/browse/HDFS-1035
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Tom White
 Attachments: HDFS-1035.patch


 HDFS companion issue for HADOOP-6407.




[jira] Assigned: (HDFS-1035) Generate Eclipse's .classpath file from Ivy config

2010-10-19 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley reassigned HDFS-1035:
-

Assignee: Nigel Daley  (was: Tom White)

 Generate Eclipse's .classpath file from Ivy config
 --

 Key: HDFS-1035
 URL: https://issues.apache.org/jira/browse/HDFS-1035
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Nigel Daley
 Attachments: HDFS-1035.patch


 HDFS companion issue for HADOOP-6407.




[jira] Updated: (HDFS-1035) Generate Eclipse's .classpath file from Ivy config

2010-10-19 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-1035:
--

Release Note: Added support to auto-generate the Eclipse .classpath file 
from ivy.
  Status: Patch Available  (was: Open)

Tom, can you review?

 Generate Eclipse's .classpath file from Ivy config
 --

 Key: HDFS-1035
 URL: https://issues.apache.org/jira/browse/HDFS-1035
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tom White
Assignee: Nigel Daley
 Attachments: HDFS-1035.patch


 HDFS companion issue for HADOOP-6407.




[jira] Commented: (HDFS-783) libhdfs tests brakes code coverage runs with Clover

2009-11-30 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12783777#action_12783777
 ] 

Nigel Daley commented on HDFS-783:
--

+1 code review

 libhdfs tests brakes code coverage runs with Clover
 ---

 Key: HDFS-783
 URL: https://issues.apache.org/jira/browse/HDFS-783
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.22.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.22.0

 Attachments: HDFS-783-2.patch, HDFS-783-2.patch, HDFS-783.patch, 
 HDFS-783.patch


 libhdfs test is executed by a script which sets a certain environment for 
 every run. While standalone execution works well, code coverage runs are 
 broken by this test because it tries to execute instrumented Hadoop code 
 without having clover.jar in its classpath.




[jira] Commented: (HDFS-728) Creat a comprehensive functional test for append

2009-10-23 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12769443#action_12769443
 ] 

Nigel Daley commented on HDFS-728:
--

Hairong, please write this as a JUnit test.  You can name the test 
appropriately so that it doesn't get picked up by the test target (simply don't 
start the test class name with the word Test).  We should then file a Jira to 
add a functional test target to Ant that can pick up this and other similar 
tests.

 Creat a comprehensive functional test for append
 

 Key: HDFS-728
 URL: https://issues.apache.org/jira/browse/HDFS-728
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.21.0

 Attachments: appendTest.patch, HDFS-728.patch


 This test aims to do
 1. create a file of len1;
 2. reopen the file for append;
 3. write len2 bytes to the file and hflush;
 4. write len3 bytes to the file and close the file;
 5. validate the content of the file.
 Len1 ranges from [0, 2*BLOCK_SIZE+1], len2 ranges from [0, BLOCK_SIZE+1], and 
 len3 ranges from [0, BLOCK_SIZE+1]. The test tries all combination of len1, 
 len2, and len3. To minimize the running time, bytes per checksum is set to be 
 4 bytes, each packet size is set to be bytes per checksum, and each block 
 contains 2 packets.
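
 The sweep described above can be sketched as a plain enumeration (shell, with
 an arbitrarily small stand-in for the block size; the real test performs the
 create/append/hflush/validate cycle against HDFS for each triple):

```shell
# Enumerate the (len1, len2, len3) combinations the append test covers.
# BLOCK_SIZE here is a small stand-in, not the value the real test uses.
BLOCK_SIZE=4
cases=0
for len1 in $(seq 0 $((2 * BLOCK_SIZE + 1))); do   # initial file length
  for len2 in $(seq 0 $((BLOCK_SIZE + 1))); do     # appended, then hflush
    for len3 in $(seq 0 $((BLOCK_SIZE + 1))); do   # appended, then close
      cases=$((cases + 1))                         # one validation per triple
    done
  done
done
echo "$cases combinations"
```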




[jira] Commented: (HDFS-245) Create symbolic links in HDFS

2009-09-23 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12758995#action_12758995
 ] 

Nigel Daley commented on HDFS-245:
--

Thanks Eli.  The design states:
{quote}
Loops should be avoided by having the client limit the number of links it will 
traverse
{quote}
What about loops within a filesystem?  Does the NN also limit the number of 
links it will traverse? 

You give examples of commands that operate on the link, examples that operate 
on the link target, and examples that depend on a trailing slash.  Given this 
is a design doc, can you be explicit and enumerate the commands for each of 
these?  For instance, setReplication, setTimes, du, etc.

What is the new option for fsck to report dangling links?  What does the 
output look like?

What is the new option for distcp to follow symlinks?  

If distcp doesn't follow symlinks, I assume it just copies the symlink.  In 
this case, is the symlink adjusted to point to the source location on the 
source FS?

What does the ls output look like for a symlink?

Do symlinks contribute bytes toward a quota?




 Create symbolic links in HDFS
 -

 Key: HDFS-245
 URL: https://issues.apache.org/jira/browse/HDFS-245
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: 4044_20081030spi.java, designdocv1.txt, 
 HADOOP-4044-strawman.patch, symlink-0.20.0.patch, symLink1.patch, 
 symLink1.patch, symLink11.patch, symLink12.patch, symLink13.patch, 
 symLink14.patch, symLink15.txt, symLink15.txt, symlink16-common.patch, 
 symlink16-hdfs.patch, symlink16-mr.patch, symLink4.patch, symLink5.patch, 
 symLink6.patch, symLink8.patch, symLink9.patch


 HDFS should support symbolic links. A symbolic link is a special type of file 
 that contains a reference to another file or directory in the form of an 
 absolute or relative path and that affects pathname resolution. Programs 
 which read or write to files named by a symbolic link will behave as if 
 operating directly on the target file. However, archiving utilities can 
 handle symbolic links specially and manipulate them directly.
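
 The pathname-resolution behaviour described here mirrors POSIX symlinks; a
 local-filesystem analogy (not HDFS, and the file names are made up):

```shell
# Local-fs analogy: reads and writes through a symlink act on its target,
# while link-aware tools (ls -l, rm) can manipulate the link itself.
cd "$(mktemp -d)"
printf 'target data\n' > target.txt
ln -s target.txt link.txt
cat link.txt                     # resolves to target.txt's contents
printf 'appended\n' >> link.txt  # the write lands in target.txt
rm link.txt                      # removes only the link, not the target
```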




[jira] Updated: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-646:
-

Hadoop Flags: [Reviewed]

+1 code review.

 missing test-contrib ant target would break hudson patch test process
 -

 Key: HDFS-646
 URL: https://issues.apache.org/jira/browse/HDFS-646
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
Priority: Blocker
 Attachments: hdfs-646.patch







[jira] Updated: (HDFS-526) TestBackupNode is currently flaky and shouldn't be in commit test

2009-08-18 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-526:
-

Tags: ygridqa

 TestBackupNode is currently flaky and shouldn't be in commit test
 -

 Key: HDFS-526
 URL: https://issues.apache.org/jira/browse/HDFS-526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Jakob Homan

 As documented in HDFS-192 TestBackupNode is currently failing regularly and 
 is impacting our continuous integration tests.  Although it has good code 
 coverage value, perhaps it should be removed from the suite until its 
 reliability can be improved?




[jira] Updated: (HDFS-540) TestNameNodeMetrics fails intermittently

2009-08-12 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-540:
-

Tags: ygridqa

 TestNameNodeMetrics fails intermittently
 

 Key: HDFS-540
 URL: https://issues.apache.org/jira/browse/HDFS-540
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.21.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.21.0


 TestNameNodeMetrics has a strict timing constraint that relies on block 
 management functionality and can fail intermittently.




[jira] Commented: (HDFS-435) Add orthogonal fault injection mechanism/framework

2009-07-24 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12735118#action_12735118
 ] 

Nigel Daley commented on HDFS-435:
--

Yes, the doc should be converted to forrest and put in 
src/docs/src/documentation/content/xdocs

 Add orthogonal fault injection mechanism/framework
 --

 Key: HDFS-435
 URL: https://issues.apache.org/jira/browse/HDFS-435
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: Fault injection development guide and Framework 
 HowTo.pdf, Fault injection development guide and Framework HowTo.pdf


 It'd be great to have a fault injection mechanism for Hadoop.
 Having such a solution in place will allow us to increase test coverage of 
 error handling and recovery mechanisms, reduce reproduction time, and 
 increase the reproduction rate of problems.
 Ideally, the system has to be orthogonal to the current code and test base, 
 e.g. faults have to be injected at build time and have to be configurable: 
 all faults could be turned off, or only some of them allowed to happen. 
 Also, fault injection has to be separated from the production build. 
