[jira] [Commented] (HDFS-2136) 1073: Fault injection for StorageDirectory failures during read/write of FSImage/Edits files

2011-07-08 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13061790#comment-13061790
 ] 

Konstantin Boudnik commented on HDFS-2136:
--

Todd, I find this 

{{They're also easy to forget about since they're in a different tree, they're 
slow to run, etc.}} to be a very interesting observation indeed ;) Especially 
the point about the performance of the tests. Do you have any hard data to back 
this up? I would love to see how much slower an FI-based test is compared to a 
non-FI one.

 1073: Fault injection for StorageDirectory failures during read/write of 
 FSImage/Edits files
 

 Key: HDFS-2136
 URL: https://issues.apache.org/jira/browse/HDFS-2136
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Matt Foley

 Both HDFS-1955 and HDFS-2135 have observed that it is difficult to unit test 
 such failures.  As a result, regression of HDFS-1955 was only found by 
 careful manual review (thanks, atm!).  Since 1073 is making broad changes to 
 the way these files are read and written, and appropriately putting effort 
 into correct error handling, I propose we also make it possible to 
 auto-test that error handling.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2138) fix aop.xml to refer to the right hadoop-common.version variable

2011-07-08 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13061798#comment-13061798
 ] 

Konstantin Boudnik commented on HDFS-2138:
--

Giri, the patch doesn't apply to trunk. It looks like it was made against 0.22, 
judging from this line:
{noformat}
-  <property name="project.version" value="0.22.0-SNAPSHOT"/>
{noformat}

 fix aop.xml to refer to the right hadoop-common.version variable
 

 Key: HDFS-2138
 URL: https://issues.apache.org/jira/browse/HDFS-2138
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HDFS-2138.PATCH


 aop.xml refers to hadoop-common version through project.version variable; 
 Instead hadoop-common version should be referred through 
 hadoop-common.version set in ivy/libraries.properties file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1073) Simpler model for Namenode's fs Image and edit Logs

2011-07-08 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13061833#comment-13061833
 ] 

Konstantin Shvachko commented on HDFS-1073:
---

I would also like to ask for some benchmarks to make sure we do not lose 
performance for NN operations.
NNThroughput is applicable in this case, but other tests are welcome as well.

 Simpler model for Namenode's fs Image and edit Logs 
 

 Key: HDFS-1073
 URL: https://issues.apache.org/jira/browse/HDFS-1073
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sanjay Radia
Assignee: Todd Lipcon
 Attachments: hdfs-1073-editloading-algos.txt, hdfs-1073.txt, 
 hdfs1073.pdf, hdfs1073.pdf, hdfs1073.pdf, hdfs1073.tex


 The naming and handling of NN's fsImage and edit logs can be significantly 
 improved, resulting in simpler and more robust code.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2018) Move all journal stream management code into one place

2011-07-08 Thread Ivan Kelly (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13061844#comment-13061844
 ] 

Ivan Kelly commented on HDFS-2018:
--

Good comments; I have a 3-hour flight later, so I'll try to address them then. 
A few points. Firstly, I'd really like to get this into 0.23. Once this patch 
is in, BookKeeper support can be added as a third-party module with very little 
effort for 0.23, and it can be shipped as a built-in in later releases.

The remoteEditLogManifest will only be used by file-based journals. For 
BookKeeper, the checkpointer will simply be configured to use BookKeeper as the 
edit log. GetImageServlet will be taken out of the equation for edits.

Regarding the startup behaviour, I would need to add getFirstTxId() and 
getLastTxId() members to EditLogInputStream to be able to create a plan 
beforehand. I'll have a go at this later and try to submit another patch once I 
get back near an internet connection.
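The accessors described above can be sketched roughly as follows. This is a 
self-contained stand-in, not the actual HDFS EditLogInputStream class: the 
field names, constructor, and the contiguity helper are all illustrative 
assumptions about how a loading plan might use getFirstTxId()/getLastTxId().

```java
// Sketch of the proposed txid accessors for planning edit-log loading.
// EditLogInputStream's real shape is assumed; only the two accessor
// names come from the comment above.
abstract class EditLogInputStream {
    /** Transaction id of the first edit in this stream. */
    abstract long getFirstTxId();

    /** Transaction id of the last edit in this stream. */
    abstract long getLastTxId();
}

class FileEditLogInputStream extends EditLogInputStream {
    private final long firstTxId;
    private final long lastTxId;

    FileEditLogInputStream(long firstTxId, long lastTxId) {
        this.firstTxId = firstTxId;
        this.lastTxId = lastTxId;
    }

    @Override long getFirstTxId() { return firstTxId; }
    @Override long getLastTxId()  { return lastTxId; }

    /** A loading plan can check txid contiguity before reading any edits. */
    static boolean contiguous(EditLogInputStream a, EditLogInputStream b) {
        return b.getFirstTxId() == a.getLastTxId() + 1;
    }
}
```

With such accessors, the startup code could order the available streams and 
verify there are no txid gaps before replaying anything, which is the "plan 
beforehand" idea in the comment.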

 Move all journal stream management code into one place
 --

 Key: HDFS-2018
 URL: https://issues.apache.org/jira/browse/HDFS-2018
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: Edit log branch (HDFS-1073)

 Attachments: HDFS-2018.diff, HDFS-2018.diff, HDFS-2018.diff, 
 HDFS-2018.diff, HDFS-2018.diff, HDFS-2018.diff, HDFS-2018.diff, 
 HDFS-2018.diff, HDFS-2018.diff


 Currently in the HDFS-1073 branch, the code for creating output streams is in 
 FileJournalManager and the code for input streams is in the inspectors. This 
 change does a number of things.
   - Input and Output streams are now created by the JournalManager.
   - FSImageStorageInspectors now deals with URIs when referring to edit logs
   - Recovery of inprogress logs is performed by counting the number of 
 transactions instead of looking at the length of the file.
 The patch for this applies on top of the HDFS-1073 branch + HDFS-2003 patch.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2111) Add tests for ensuring that the DN will start with a few bad data directories (Part 1 of testing DiskChecker)

2011-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13061933#comment-13061933
 ] 

Hudson commented on HDFS-2111:
--

Integrated in Hadoop-Hdfs-trunk #719 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/719/])
HDFS-2111. Add tests for ensuring that the DN will start with a few bad 
data directories. Contributed by Harsh J Chouraria.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1144100
Files : 
* /hadoop/common/trunk/hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java


 Add tests for ensuring that the DN will start with a few bad data directories 
 (Part 1 of testing DiskChecker)
 -

 Key: HDFS-2111
 URL: https://issues.apache.org/jira/browse/HDFS-2111
 Project: Hadoop HDFS
  Issue Type: Test
  Components: data-node, test
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
  Labels: test
 Fix For: 0.23.0

 Attachments: HDFS-2111.r1.diff, HDFS-2111.r1.diff


 Add tests to ensure that given multiple data dirs, if a single is bad, the DN 
 should still start up.
 This is to check DiskChecker's functionality used in instantiating DataNodes
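The decision the test exercises can be illustrated with a small stand-in. This 
is not DataNode or DiskChecker code; the health check and the tolerance 
parameter are assumptions sketching the semantics described above (start as 
long as the number of bad dirs does not exceed a configured tolerance).

```java
// Illustrative stand-in for the startup decision tested above: given
// multiple data dirs, the DN may start if the number of unusable dirs
// does not exceed a configured tolerance. Assumed semantics, not DN code.
import java.io.File;
import java.util.List;

class VolumeCheck {
    static boolean canStart(List<File> dataDirs, int failedVolumesTolerated) {
        int failed = 0;
        for (File dir : dataDirs) {
            // DiskChecker-style health check: the dir must exist, be a
            // directory, and be readable and writable.
            boolean healthy = dir.isDirectory() && dir.canRead() && dir.canWrite();
            if (!healthy) {
                failed++;
            }
        }
        return failed <= failedVolumesTolerated;
    }
}
```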

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2131) Tests for HADOOP-7361

2011-07-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13061990#comment-13061990
 ] 

Daryn Sharp commented on HDFS-2131:
---

+1 good job!

 Tests for HADOOP-7361
 -

 Key: HDFS-2131
 URL: https://issues.apache.org/jira/browse/HDFS-2131
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HADOOP-7361-test.patch, HADOOP-7361-test.patch, 
 HADOOP-7361-test.patch




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1981) When namenode goes down while checkpointing and if is started again subsequent Checkpointing is always failing

2011-07-08 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HDFS-1981:
-

Status: Open  (was: Patch Available)

 When namenode goes down while checkpointing and if is started again 
 subsequent Checkpointing is always failing
 --

 Key: HDFS-1981
 URL: https://issues.apache.org/jira/browse/HDFS-1981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
 Environment: Linux
Reporter: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1981-1.patch, HDFS-1981-2.patch, HDFS-1981.patch


 This scenario is applicable in NN and BNN case.
 When the namenode goes down after creating the edits.new, on subsequent 
 restart the divertFileStreams will not happen to edits.new as the edits.new 
 file is already present and the size is zero.
 so on trying to saveCheckPoint an exception occurs 
 2011-05-23 16:38:57,476 WARN org.mortbay.log: /getimage: java.io.IOException: 
 GetImage failed. java.io.IOException: Namenode has an edit log with timestamp 
 of 2011-05-23 16:38:56 but new checkpoint was created using editlog  with 
 timestamp 2011-05-23 16:37:30. Checkpoint Aborted.
 Is this a bug, or is it the expected behaviour?
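The failure mode in the description can be illustrated with a self-contained 
sketch. The guard shown (discard a zero-length leftover edits.new before 
diverting) is one possible fix under the description's assumptions; the method 
and class names are hypothetical, not the actual HDFS-1981 patch.

```java
// Illustration of the scenario above: a zero-length edits.new left over
// from a crash mid-checkpoint blocks the next divertFileStreams. The
// guard here is a hypothetical fix, not the actual HDFS code.
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class EditsNewGuard {
    /** Returns true if streams may be diverted to a fresh edits.new. */
    static boolean prepareDivert(File editsNew) {
        if (editsNew.exists()) {
            if (editsNew.length() == 0) {
                // stale artifact of a crash mid-checkpoint: safe to discard
                return editsNew.delete();
            }
            return false; // non-empty edits.new: a checkpoint is in flight
        }
        return true;
    }

    /** Test helper: create a temp "edits.new" of the given size. */
    static File tempEdits(int bytes) {
        try {
            File f = File.createTempFile("edits", ".new");
            f.deleteOnExit();
            try (FileOutputStream out = new FileOutputStream(f)) {
                out.write(new byte[bytes]);
            }
            return f;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```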

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1981) When namenode goes down while checkpointing and if is started again subsequent Checkpointing is always failing

2011-07-08 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HDFS-1981:
-

Attachment: HDFS-1981-2.patch

 When namenode goes down while checkpointing and if is started again 
 subsequent Checkpointing is always failing
 --

 Key: HDFS-1981
 URL: https://issues.apache.org/jira/browse/HDFS-1981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
 Environment: Linux
Reporter: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1981-1.patch, HDFS-1981-2.patch, HDFS-1981.patch


 This scenario is applicable in NN and BNN case.
 When the namenode goes down after creating the edits.new, on subsequent 
 restart the divertFileStreams will not happen to edits.new as the edits.new 
 file is already present and the size is zero.
 so on trying to saveCheckPoint an exception occurs 
 2011-05-23 16:38:57,476 WARN org.mortbay.log: /getimage: java.io.IOException: 
 GetImage failed. java.io.IOException: Namenode has an edit log with timestamp 
 of 2011-05-23 16:38:56 but new checkpoint was created using editlog  with 
 timestamp 2011-05-23 16:37:30. Checkpoint Aborted.
 Is this a bug, or is it the expected behaviour?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1981) When namenode goes down while checkpointing and if is started again subsequent Checkpointing is always failing

2011-07-08 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HDFS-1981:
-

Status: Patch Available  (was: Open)

 When namenode goes down while checkpointing and if is started again 
 subsequent Checkpointing is always failing
 --

 Key: HDFS-1981
 URL: https://issues.apache.org/jira/browse/HDFS-1981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
 Environment: Linux
Reporter: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1981-1.patch, HDFS-1981-2.patch, HDFS-1981.patch


 This scenario is applicable in NN and BNN case.
 When the namenode goes down after creating the edits.new, on subsequent 
 restart the divertFileStreams will not happen to edits.new as the edits.new 
 file is already present and the size is zero.
 so on trying to saveCheckPoint an exception occurs 
 2011-05-23 16:38:57,476 WARN org.mortbay.log: /getimage: java.io.IOException: 
 GetImage failed. java.io.IOException: Namenode has an edit log with timestamp 
 of 2011-05-23 16:38:56 but new checkpoint was created using editlog  with 
 timestamp 2011-05-23 16:37:30. Checkpoint Aborted.
 Is this a bug, or is it the expected behaviour?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1981) When namenode goes down while checkpointing and if is started again subsequent Checkpointing is always failing

2011-07-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062011#comment-13062011
 ] 

ramkrishna.s.vasudevan commented on HDFS-1981:
--

Hi Todd,

Thanks for your comments.
I have reworked some of them.

I think you reviewed the old patch, not the patch named HDFS-1981-1.patch.

Anyway, I have also addressed some of the comments in the latest patch:

*  As Konstantin said, please use JUnit 4 (annotations API) instead of 
JUnit 3, and use the MiniDFSCluster builder
Already addressed in the previous patch.
* typo: NEW_EIDTS_STREAM
Have changed this to NEW_EDITS_STREAM.
* don't use the string constant dfs.name.dir - there are constants in 
DFSConfigKeys for this
Updated.
* false == editsNew.exists() ?? !editsNew.exists()
Updated.
* TODOs in the test case. don't swallow exceptions
Updated.
* you can use IOUtils.cleanup or IOUtils.closeStream in the finally block 
inside of the block
Updated.
* no need to clear editsStreams in teardown method - it's an instance var 
so it will be recreated for each case anyway
Updated.
* what's the purpose of the setup which creates bImg? It's not used in any 
of the test cases.
    Instead of using the variable bImg, have now created an instance at the 
local level.
* assertion text is wrong: image should be deleted - but it's checking 
that edits.new should be deleted.
Fixed in the previous patch, as per the latest fix suggested by Konstantin.
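The cleanup idiom suggested in the review above can be sketched in plain Java. 
Hadoop's IOUtils.cleanup does essentially this with logging; the stand-in below 
is self-contained and illustrative, not the actual Hadoop implementation.

```java
// Plain-Java sketch of the "cleanup in finally" review suggestion: close
// each stream, tolerating nulls and logging (not propagating) close
// failures, so one failed close cannot mask the test's real outcome.
import java.io.Closeable;
import java.io.IOException;

class CleanupUtil {
    static void cleanup(Closeable... closeables) {
        for (Closeable c : closeables) {
            if (c == null) continue; // tolerate streams that were never opened
            try {
                c.close();
            } catch (IOException e) {
                // swallow-and-log only here, in cleanup, not in test bodies
                System.err.println("exception closing stream: " + e);
            }
        }
    }
}
```

Calling this from a test's finally block keeps the test body free of nested 
try/catch around every close().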

 When namenode goes down while checkpointing and if is started again 
 subsequent Checkpointing is always failing
 --

 Key: HDFS-1981
 URL: https://issues.apache.org/jira/browse/HDFS-1981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
 Environment: Linux
Reporter: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1981-1.patch, HDFS-1981-2.patch, HDFS-1981.patch


 This scenario is applicable in NN and BNN case.
 When the namenode goes down after creating the edits.new, on subsequent 
 restart the divertFileStreams will not happen to edits.new as the edits.new 
 file is already present and the size is zero.
 so on trying to saveCheckPoint an exception occurs 
 2011-05-23 16:38:57,476 WARN org.mortbay.log: /getimage: java.io.IOException: 
 GetImage failed. java.io.IOException: Namenode has an edit log with timestamp 
 of 2011-05-23 16:38:56 but new checkpoint was created using editlog  with 
 timestamp 2011-05-23 16:37:30. Checkpoint Aborted.
 This is a bug or is that the behaviour.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2034) length in getBlockRange becomes -ve when reading only from currently being written blk

2011-07-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062023#comment-13062023
 ] 

Daryn Sharp commented on HDFS-2034:
---

+1 Great!  Now we're protected whether or not asserts are enabled.

 length in getBlockRange becomes -ve when reading only from currently being 
 written blk
 --

 Key: HDFS-2034
 URL: https://issues.apache.org/jira/browse/HDFS-2034
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: John George
Assignee: John George
Priority: Minor
 Attachments: HDFS-2034-1.patch, HDFS-2034-1.patch, HDFS-2034-2.patch, 
 HDFS-2034-3.patch, HDFS-2034-4.patch, HDFS-2034-5.patch, HDFS-2034.patch


 This came up during HDFS-1907. Below is an example that Todd posted in 
 HDFS-1907 that brought out this issue.
 {quote}
 Here's an example sequence to describe what I mean:
 1. open file, write one and a half blocks
 2. call hflush
 3. another reader asks for the first byte of the second block
 {quote}
 In this case, since the offset is greater than the completed block length, 
 the math in getBlockRange() of DFSInputStream.java will set the length to a 
 negative value.
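The arithmetic behind the bug can be shown in a few lines. This is a 
simplification, not the actual getBlockRange() code: it models only the step 
where a range length is computed over the completed portion of the file, which 
goes negative once the read offset passes the completed length.

```java
// Self-contained illustration of the failure described above. With one
// completed block and a second block under construction, a reader whose
// offset lies past the completed length gets a negative range length if
// the math only considers completed blocks. Names are illustrative.
class BlockRangeMath {
    /**
     * Length of the range served from completed blocks; negative when
     * the read starts beyond the completed portion of the file.
     */
    static long completedRange(long offset, long completedBlocksLength) {
        return completedBlocksLength - offset;
    }
}
```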

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2034) length in getBlockRange becomes -ve when reading only from currently being written blk

2011-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062024#comment-13062024
 ] 

Hadoop QA commented on HDFS-2034:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12485740/HDFS-2034-5.patch
  against trunk revision 1144100.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestDFSShell
  org.apache.hadoop.hdfs.TestHDFSTrash

+1 contrib tests.  The patch passed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/899//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/899//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/899//console

This message is automatically generated.

 length in getBlockRange becomes -ve when reading only from currently being 
 written blk
 --

 Key: HDFS-2034
 URL: https://issues.apache.org/jira/browse/HDFS-2034
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: John George
Assignee: John George
Priority: Minor
 Attachments: HDFS-2034-1.patch, HDFS-2034-1.patch, HDFS-2034-2.patch, 
 HDFS-2034-3.patch, HDFS-2034-4.patch, HDFS-2034-5.patch, HDFS-2034.patch


 This came up during HDFS-1907. Below is an example that Todd posted in 
 HDFS-1907 that brought out this issue.
 {quote}
 Here's an example sequence to describe what I mean:
 1. open file, write one and a half blocks
 2. call hflush
 3. another reader asks for the first byte of the second block
 {quote}
 In this case, since the offset is greater than the completed block length, 
 the math in getBlockRange() of DFSInputStream.java will set the length to a 
 negative value.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1977) Stop using StringUtils.stringifyException()

2011-07-08 Thread Bharath Mundlapudi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062068#comment-13062068
 ] 

Bharath Mundlapudi commented on HDFS-1977:
--

Todd, if you don't have any further comments on this patch, can you please 
commit it?



 Stop using StringUtils.stringifyException()
 ---

 Key: HDFS-1977
 URL: https://issues.apache.org/jira/browse/HDFS-1977
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Joey Echeverria
Assignee: Bharath Mundlapudi
Priority: Minor
 Attachments: HDFS-1977-1.patch, HDFS-1977-2.patch, HDFS-1977-3.patch


 The old version of the logging APIs didn't support logging stack traces by 
 passing exceptions to the logging methods (e.g. Log.error()). A number of log 
 statements make use of StringUtils.stringifyException() to get around the old 
 behavior. It would be nice if this could get cleaned up to make use of the 
 logger's stack-trace printing. This also gives users more control, since you 
 can configure how the stack traces are written to the logs.
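The pattern the issue asks for can be sketched with java.util.logging so the 
example stays self-contained (Hadoop uses commons-logging, whose Log.error has 
an equivalent two-argument form). The logger name and exception message are 
illustrative.

```java
// Sketch of the cleanup described above: instead of stringifying the
// exception into the message, hand the Throwable to the logger so the
// configured handler/formatter controls how the stack trace is printed.
import java.util.logging.Level;
import java.util.logging.Logger;

class LogDemo {
    private static final Logger LOG = Logger.getLogger("LogDemo");

    static Exception failAndLog() {
        Exception e = new IllegalStateException("edit log corrupt");
        // old pattern: LOG.severe("op failed: " + StringUtils.stringifyException(e));
        // preferred: pass the Throwable as a separate argument
        LOG.log(Level.SEVERE, "op failed", e);
        return e;
    }
}
```

Because the Throwable travels with the log record, formatters and handlers can 
decide whether and how to render the stack trace, which is the extra control 
the description mentions.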

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2138) fix aop.xml to refer to the right hadoop-common.version variable

2011-07-08 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HDFS-2138:
-

Attachment: HDFS-2138-trunk.patch

Thanks, Cos.

I was fixing this on the MR-279 branch and uploaded the patch without realizing 
that it would not apply to trunk. 

HDFS-2138-trunk.patch will work with trunk. Could you please take a look?

 fix aop.xml to refer to the right hadoop-common.version variable
 

 Key: HDFS-2138
 URL: https://issues.apache.org/jira/browse/HDFS-2138
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HDFS-2138-trunk.patch, HDFS-2138.PATCH


 aop.xml refers to hadoop-common version through project.version variable; 
 Instead hadoop-common version should be referred through 
 hadoop-common.version set in ivy/libraries.properties file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2138) fix aop.xml to refer to the right hadoop-common.version variable

2011-07-08 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HDFS-2138:
-

Status: Patch Available  (was: Open)

 fix aop.xml to refer to the right hadoop-common.version variable
 

 Key: HDFS-2138
 URL: https://issues.apache.org/jira/browse/HDFS-2138
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HDFS-2138-trunk.patch, HDFS-2138.PATCH


 aop.xml refers to hadoop-common version through project.version variable; 
 Instead hadoop-common version should be referred through 
 hadoop-common.version set in ivy/libraries.properties file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2138) fix aop.xml to refer to the right hadoop-common.version variable

2011-07-08 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HDFS-2138:
-

Status: Open  (was: Patch Available)

 fix aop.xml to refer to the right hadoop-common.version variable
 

 Key: HDFS-2138
 URL: https://issues.apache.org/jira/browse/HDFS-2138
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HDFS-2138-trunk.patch, HDFS-2138.PATCH


 aop.xml refers to hadoop-common version through project.version variable; 
 Instead hadoop-common version should be referred through 
 hadoop-common.version set in ivy/libraries.properties file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2134) Move DecommissionManager to block management

2011-07-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062110#comment-13062110
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2134:
--

Todd, thanks for taking a look.  The ordering of thread interrupt should not 
cause any problem.

 Move DecommissionManager to block management
 

 Key: HDFS-2134
 URL: https://issues.apache.org/jira/browse/HDFS-2134
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2134_20110706.patch


 Datanode management including {{DecommissionManager}} should belong to block 
 management.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2134) Move DecommissionManager to block management

2011-07-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062109#comment-13062109
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2134:
--

Todd, thanks for taking a look.  The ordering of thread interrupt should not 
cause any problem.

 Move DecommissionManager to block management
 

 Key: HDFS-2134
 URL: https://issues.apache.org/jira/browse/HDFS-2134
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2134_20110706.patch


 Datanode management including {{DecommissionManager}} should belong to block 
 management.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2138) fix aop.xml to refer to the right hadoop-common.version variable

2011-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062124#comment-13062124
 ] 

Hadoop QA commented on HDFS-2138:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12485768/HDFS-2138-trunk.patch
  against trunk revision 1144100.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestHDFSTrash

+1 contrib tests.  The patch passed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/901//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/901//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/901//console

This message is automatically generated.

 fix aop.xml to refer to the right hadoop-common.version variable
 

 Key: HDFS-2138
 URL: https://issues.apache.org/jira/browse/HDFS-2138
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HDFS-2138-trunk.patch, HDFS-2138.PATCH


 aop.xml refers to hadoop-common version through project.version variable; 
 Instead hadoop-common version should be referred through 
 hadoop-common.version set in ivy/libraries.properties file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2134) Move DecommissionManager to block management

2011-07-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2134:
-

Attachment: h2134_20110708.patch

h2134_20110708.patch: forgot to call {{DatanodeManager.activate(..)}}.

 Move DecommissionManager to block management
 

 Key: HDFS-2134
 URL: https://issues.apache.org/jira/browse/HDFS-2134
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2134_20110706.patch, h2134_20110708.patch


 Datanode management including {{DecommissionManager}} should belong to block 
 management.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2134) Move DecommissionManager to block management

2011-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062129#comment-13062129
 ] 

Hadoop QA commented on HDFS-2134:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12485523/h2134_20110706.patch
  against trunk revision 1143147.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/888//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/888//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/888//console

This message is automatically generated.

 Move DecommissionManager to block management
 

 Key: HDFS-2134
 URL: https://issues.apache.org/jira/browse/HDFS-2134
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2134_20110706.patch, h2134_20110708.patch


 Datanode management including {{DecommissionManager}} should belong to block 
 management.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2134) Move DecommissionManager to block management

2011-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062130#comment-13062130
 ] 

Hadoop QA commented on HDFS-2134:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12485775/h2134_20110708.patch
  against trunk revision 1144100.

-1 @author.  The patch appears to contain  @author tags which the Hadoop 
community has agreed to not allow in code contributions.

+1 tests included.  The patch appears to include  new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/902//console

This message is automatically generated.

 Move DecommissionManager to block management
 

 Key: HDFS-2134
 URL: https://issues.apache.org/jira/browse/HDFS-2134
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2134_20110706.patch, h2134_20110708.patch


 Datanode management including {{DecommissionManager}} should belong to block 
 management.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2034) length in getBlockRange becomes -ve when reading only from currently being written blk

2011-07-08 Thread John George (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062133#comment-13062133
 ] 

John George commented on HDFS-2034:
---

I don't think the above tests fail because of this patch, but I don't see the 
tests failing in the patches before this one... When I run these tests 
manually, they seem to pass as well.

 length in getBlockRange becomes -ve when reading only from currently being 
 written blk
 --

 Key: HDFS-2034
 URL: https://issues.apache.org/jira/browse/HDFS-2034
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: John George
Assignee: John George
Priority: Minor
 Attachments: HDFS-2034-1.patch, HDFS-2034-1.patch, HDFS-2034-2.patch, 
 HDFS-2034-3.patch, HDFS-2034-4.patch, HDFS-2034-5.patch, HDFS-2034.patch


 This came up during HDFS-1907. Posting an example that Todd posted in 
 HDFS-1907 that brought out this issue.
 {quote}
 Here's an example sequence to describe what I mean:
 1. open file, write one and a half blocks
 2. call hflush
 3. another reader asks for the first byte of the second block
 {quote}
 In this case since offset is greater than the completed block length, the 
 math in getBlockRange() of DFSInputStream.java will set length to a 
 negative value.
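 The failing arithmetic can be sketched in isolation. This is a toy model, not 
 the actual DFSInputStream code; the method name {{completedLength}} and the 
 128-byte block size are invented for illustration:

```java
public class BlockRangeSketch {
    // The buggy pattern: available length from the completed blocks is
    // computed as (completed length - read offset), with no guard for an
    // offset that lands inside the block still under construction.
    static long completedLength(long completedBlocksLength, long offset) {
        return completedBlocksLength - offset;
    }

    public static void main(String[] args) {
        long completed = 128;  // one completed 128-byte block, toy scale
        // After hflush, a reader may legitimately ask for data past the
        // completed blocks; the subtraction then goes negative:
        System.out.println(completedLength(completed, completed + 1)); // -1
    }
}
```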

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2134) Move DecommissionManager to block management

2011-07-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062136#comment-13062136
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2134:
--

bq. -1 @author. The patch appears to contain @author tags which the Hadoop 
community has agreed to not allow in code contributions.

There are no author tags in the patch.  Something wrong with Hudson?!

 Move DecommissionManager to block management
 

 Key: HDFS-2134
 URL: https://issues.apache.org/jira/browse/HDFS-2134
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2134_20110706.patch, h2134_20110708.patch


 Datanode management including {{DecommissionManager}} should belong to block 
 management.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2034) length in getBlockRange becomes -ve when reading only from currently being written blk

2011-07-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062145#comment-13062145
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2034:
--

I agree that the failed tests are not related.  Some tests failed with
{noformat}
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:597)
...
{noformat}


 length in getBlockRange becomes -ve when reading only from currently being 
 written blk
 --

 Key: HDFS-2034
 URL: https://issues.apache.org/jira/browse/HDFS-2034
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: John George
Assignee: John George
Priority: Minor
 Attachments: HDFS-2034-1.patch, HDFS-2034-1.patch, HDFS-2034-2.patch, 
 HDFS-2034-3.patch, HDFS-2034-4.patch, HDFS-2034-5.patch, HDFS-2034.patch


 This came up during HDFS-1907. Posting an example that Todd posted in 
 HDFS-1907 that brought out this issue.
 {quote}
 Here's an example sequence to describe what I mean:
 1. open file, write one and a half blocks
 2. call hflush
 3. another reader asks for the first byte of the second block
 {quote}
 In this case since offset is greater than the completed block length, the 
 math in getBlockRange() of DFSInputStream.java will set length to a 
 negative value.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2034) length in getBlockRange becomes -ve when reading only from currently being written blk

2011-07-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2034:
-

   Resolution: Fixed
Fix Version/s: 0.23.0
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, John.

Also thanks Daryn for reviewing it.

 length in getBlockRange becomes -ve when reading only from currently being 
 written blk
 --

 Key: HDFS-2034
 URL: https://issues.apache.org/jira/browse/HDFS-2034
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: John George
Assignee: John George
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-2034-1.patch, HDFS-2034-1.patch, HDFS-2034-2.patch, 
 HDFS-2034-3.patch, HDFS-2034-4.patch, HDFS-2034-5.patch, HDFS-2034.patch


 This came up during HDFS-1907. Posting an example that Todd posted in 
 HDFS-1907 that brought out this issue.
 {quote}
 Here's an example sequence to describe what I mean:
 1. open file, write one and a half blocks
 2. call hflush
 3. another reader asks for the first byte of the second block
 {quote}
 In this case since offset is greater than the completed block length, the 
 math in getBlockRange() of DFSInputStream.java will set length to a 
 negative value.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2034) length in getBlockRange becomes -ve when reading only from currently being written blk

2011-07-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2034:
-

Component/s: hdfs client

 length in getBlockRange becomes -ve when reading only from currently being 
 written blk
 --

 Key: HDFS-2034
 URL: https://issues.apache.org/jira/browse/HDFS-2034
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: John George
Assignee: John George
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-2034-1.patch, HDFS-2034-1.patch, HDFS-2034-2.patch, 
 HDFS-2034-3.patch, HDFS-2034-4.patch, HDFS-2034-5.patch, HDFS-2034.patch


 This came up during HDFS-1907. Posting an example that Todd posted in 
 HDFS-1907 that brought out this issue.
 {quote}
 Here's an example sequence to describe what I mean:
 1. open file, write one and a half blocks
 2. call hflush
 3. another reader asks for the first byte of the second block
 {quote}
 In this case since offset is greater than the completed block length, the 
 math in getBlockRange() of DFSInputStream.java will set length to a 
 negative value.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2034) length in getBlockRange becomes -ve when reading only from currently being written blk

2011-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062163#comment-13062163
 ] 

Hudson commented on HDFS-2034:
--

Integrated in Hadoop-Hdfs-trunk-Commit #776 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/776/])
HDFS-2034. Length in DFSInputStream.getBlockRange(..) becomes -ve when 
reading only from a currently being written block. Contributed by John George

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1144480
Files : 
* /hadoop/common/trunk/hdfs/CHANGES.txt
* /hadoop/common/trunk/hdfs/src/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hdfs/src/test/hdfs/org/apache/hadoop/hdfs/TestWriteRead.java


 length in getBlockRange becomes -ve when reading only from currently being 
 written blk
 --

 Key: HDFS-2034
 URL: https://issues.apache.org/jira/browse/HDFS-2034
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: John George
Assignee: John George
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-2034-1.patch, HDFS-2034-1.patch, HDFS-2034-2.patch, 
 HDFS-2034-3.patch, HDFS-2034-4.patch, HDFS-2034-5.patch, HDFS-2034.patch


 This came up during HDFS-1907. Posting an example that Todd posted in 
 HDFS-1907 that brought out this issue.
 {quote}
 Here's an example sequence to describe what I mean:
 1. open file, write one and a half blocks
 2. call hflush
 3. another reader asks for the first byte of the second block
 {quote}
 In this case since offset is greater than the completed block length, the 
 math in getBlockRange() of DFSInputStream.java will set length to a 
 negative value.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2134) Move DecommissionManager to block management

2011-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062198#comment-13062198
 ] 

Hadoop QA commented on HDFS-2134:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12485775/h2134_20110708.patch
  against trunk revision 1144480.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestHDFSTrash

+1 contrib tests.  The patch passed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/903//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/903//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/903//console

This message is automatically generated.

 Move DecommissionManager to block management
 

 Key: HDFS-2134
 URL: https://issues.apache.org/jira/browse/HDFS-2134
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.23.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2134_20110706.patch, h2134_20110708.patch


 Datanode management including {{DecommissionManager}} should belong to block 
 management.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2138) fix aop.xml to refer to the right hadoop-common.version variable

2011-07-08 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062236#comment-13062236
 ] 

Matt Foley commented on HDFS-2138:
--

+1. lgtm.  The failure of unit test TestTrash.testTrashEmptier does not seem to 
be related to this change.  It is probably related to HDFS-7326, although the 
symptoms reported are slightly different.

 fix aop.xml to refer to the right hadoop-common.version variable
 

 Key: HDFS-2138
 URL: https://issues.apache.org/jira/browse/HDFS-2138
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HDFS-2138-trunk.patch, HDFS-2138.PATCH


 aop.xml refers to hadoop-common version through project.version variable; 
 Instead hadoop-common version should be referred through 
 hadoop-common.version set in ivy/libraries.properties file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2131) Tests for HADOOP-7361

2011-07-08 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-2131:
--

Component/s: test

 Tests for HADOOP-7361
 -

 Key: HDFS-2131
 URL: https://issues.apache.org/jira/browse/HDFS-2131
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HADOOP-7361-test.patch, HADOOP-7361-test.patch, 
 HADOOP-7361-test.patch




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2131) Tests for HADOOP-7361

2011-07-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062246#comment-13062246
 ] 

Uma Maheswara Rao G commented on HDFS-2131:
---

Thanks, Daryn, for taking a look.

 Tests for HADOOP-7361
 -

 Key: HDFS-2131
 URL: https://issues.apache.org/jira/browse/HDFS-2131
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HADOOP-7361-test.patch, HADOOP-7361-test.patch, 
 HADOOP-7361-test.patch




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1971) HA: Send block report from datanode to both active and standby namenodes

2011-07-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062250#comment-13062250
 ] 

Uma Maheswara Rao G commented on HDFS-1971:
---

Hi Sanjay,
Have you started working on this? If not, can I take this issue? In our 
cluster we have already implemented this mechanism, 
so we will be happy to contribute our efforts :-)

 HA: Send block report from datanode to both active and standby namenodes
 

 Key: HDFS-1971
 URL: https://issues.apache.org/jira/browse/HDFS-1971
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Reporter: Suresh Srinivas
Assignee: Sanjay Radia

 To enable hot standby namenode, the standby node must have current 
 information for - namenode state (image + edits) and block location 
 information. This jira addresses keeping the block location information 
 current in the standby node. To do this, the proposed solution is to send 
 block reports from the datanodes to both the active and the standby namenode.
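 The proposed fan-out can be sketched as follows; the {{NamenodeProxy}} 
 interface, its {{blockReport}} method, and {{sendBlockReports}} are 
 hypothetical stand-ins for the real datanode protocol, shown only to 
 illustrate reporting to both namenodes:

```java
import java.util.ArrayList;
import java.util.List;

public class DualBlockReportSketch {
    interface NamenodeProxy { void blockReport(long[] blockIds); }

    // Send the same report to every configured namenode (active + standby).
    // A real datanode would also handle per-namenode failures and retries.
    static void sendBlockReports(List<NamenodeProxy> namenodes, long[] blocks) {
        for (NamenodeProxy nn : namenodes) {
            nn.blockReport(blocks);
        }
    }

    public static void main(String[] args) {
        List<Long> delivered = new ArrayList<>();
        List<NamenodeProxy> nns = List.of(
            ids -> delivered.add((long) ids.length),   // stands in for the active NN
            ids -> delivered.add((long) ids.length));  // stands in for the standby NN
        sendBlockReports(nns, new long[] {1L, 2L, 3L});
        System.out.println(delivered.size()); // 2: both namenodes got the report
    }
}
```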

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2071) Use of isConnected() in DataXceiver is invalid

2011-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062251#comment-13062251
 ] 

Hudson commented on HDFS-2071:
--

Integrated in Hadoop-Hdfs-22-branch #70 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-22-branch/70/])


 Use of isConnected() in DataXceiver is invalid
 --

 Key: HDFS-2071
 URL: https://issues.apache.org/jira/browse/HDFS-2071
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-2071.patch


 The use of Socket.isConnected() in DataXceiver.run() is not valid. It returns 
 false until the connection is made and then always returns true after that. 
 It will never return false after the initial connection is successfully made. 
 Socket.isClosed() or SocketChannel.isOpen() should be used instead, assuming 
 someone is handling SocketException and does Socket.close() or 
 SocketChannel.close(). It seems the op handlers in DataXceiver are diligently 
 using IOUtils.closeStream(), which will invoke SocketChannel.close().
 {code}
 - } while (s.isConnected() && socketKeepaliveTimeout > 0);
 + } while (!s.isClosed() && socketKeepaliveTimeout > 0);
 {code}
 The effect of this bug is very minor, as the socket is read again right 
 after. If the connection was closed, the readOp() will throw an EOFException, 
 which is caught and dealt with properly.  The system still functions normally 
 with probably only few microseconds of extra overhead in the premature 
 connection closure cases.
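 The Socket semantics described here are easy to demonstrate with a loopback 
 connection. This is a standalone sketch, not DataXceiver code:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class SocketStateDemo {
    public static void main(String[] args) throws Exception {
        // Loopback server just so the client connect succeeds.
        try (ServerSocket server = new ServerSocket(0)) {
            Socket s = new Socket("127.0.0.1", server.getLocalPort());
            server.accept().close();             // server side is done
            System.out.println(s.isConnected()); // true: connect succeeded
            s.close();
            // isConnected() never reverts to false after a successful
            // connect, even once the socket is closed; that is why the
            // original loop condition could not detect the closure.
            System.out.println(s.isConnected()); // still true
            System.out.println(s.isClosed());    // true: the valid check
        }
    }
}
```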

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-941) Datanode xceiver protocol should allow reuse of a connection

2011-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062253#comment-13062253
 ] 

Hudson commented on HDFS-941:
-

Integrated in Hadoop-Hdfs-22-branch #70 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-22-branch/70/])


 Datanode xceiver protocol should allow reuse of a connection
 

 Key: HDFS-941
 URL: https://issues.apache.org/jira/browse/HDFS-941
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node, hdfs client
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: bc Wong
 Fix For: 0.22.0

 Attachments: 941.22.txt, 941.22.txt, 941.22.v2.txt, 941.22.v3.txt, 
 HDFS-941-1.patch, HDFS-941-2.patch, HDFS-941-3.patch, HDFS-941-3.patch, 
 HDFS-941-4.patch, HDFS-941-5.patch, HDFS-941-6.22.patch, HDFS-941-6.patch, 
 HDFS-941-6.patch, HDFS-941-6.patch, fix-close-delta.txt, hdfs-941.txt, 
 hdfs-941.txt, hdfs-941.txt, hdfs-941.txt, hdfs941-1.png


 Right now each connection into the datanode xceiver only processes one 
 operation.
 In the case that an operation leaves the stream in a well-defined state (eg a 
 client reads to the end of a block successfully) the same connection could be 
 reused for a second operation. This should improve random read performance 
 significantly.
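 A minimal sketch of the reuse idea, assuming a hypothetical one-byte-opcode 
 framing ({{serveConnection}} and the opcode values are invented, not the real 
 xceiver protocol): the server keeps dispatching operations from the same 
 stream until the peer closes it, instead of tearing the connection down after 
 the first op.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ReuseLoopSketch {
    // Serve every operation arriving on one connection until the peer
    // closes it; returns the number of operations handled.
    static int serveConnection(DataInputStream in) throws IOException {
        int ops = 0;
        while (true) {
            int op = in.read();   // next opcode; -1 once the peer closes
            if (op == -1) {
                break;            // clean end of the reused connection
            }
            ops++;                // a real server would dispatch on 'op'
        }
        return ops;
    }

    public static void main(String[] args) throws IOException {
        // Three operations multiplexed over a single "connection":
        byte[] threeOps = {81, 82, 81};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(threeOps));
        System.out.println(serveConnection(in)); // 3
    }
}
```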

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1952) FSEditLog.open() appears to succeed even if all EDITS directories fail

2011-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062252#comment-13062252
 ] 

Hudson commented on HDFS-1952:
--

Integrated in Hadoop-Hdfs-22-branch #70 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-22-branch/70/])


 FSEditLog.open() appears to succeed even if all EDITS directories fail
 --

 Key: HDFS-1952
 URL: https://issues.apache.org/jira/browse/HDFS-1952
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Matt Foley
Assignee: Andrew Wang
  Labels: newbie
 Fix For: 0.22.0, 0.23.0

 Attachments: hdfs-1952-0.22.patch, hdfs-1952.patch, hdfs-1952.patch, 
 hdfs-1952.patch


 FSEditLog.open() appears to succeed even if all of the individual 
 directories failed to allow creation of an EditLogOutputStream.  The problem 
 and solution are essentially similar to that of HDFS-1505.
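 The missing check amounts to failing fast when zero directories yield a 
 usable stream. A minimal sketch, with {{File}} objects standing in for 
 EditLogOutputStream and all names hypothetical (this is not the real 
 FSEditLog code):

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class EditLogOpenSketch {
    // Open one "stream" per storage directory, skipping failed ones, and
    // refuse to proceed only when every directory failed.
    static List<File> openStreams(List<File> dirs) throws IOException {
        List<File> opened = new ArrayList<>();
        for (File dir : dirs) {
            if (dir.isDirectory() && dir.canWrite()) {
                opened.add(new File(dir, "edits"));  // would open a stream here
            }
            // else: record the directory as failed and keep going
        }
        if (opened.isEmpty()) {
            // The missing check: succeeding with zero streams would
            // silently discard every subsequent edit.
            throw new IOException("all edits directories failed to open");
        }
        return opened;
    }

    public static void main(String[] args) {
        try {
            openStreams(List.of(new File("/no/such/dir")));
        } catch (IOException e) {
            System.out.println("open failed fast: " + e.getMessage());
        }
    }
}
```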

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira