[jira] Created: (HDFS-1512) BlockSender calls deprecated method getReplica

2010-11-19 Thread Eli Collins (JIRA)
BlockSender calls deprecated method getReplica
--

 Key: HDFS-1512
 URL: https://issues.apache.org/jira/browse/HDFS-1512
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins


HDFS-680 deprecated FSDatasetInterface#getReplica; however, it is still used by 
BlockSender, which still maintains a Replica member.
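
For context, a minimal Java sketch of the pattern described above: a deprecated 
interface accessor that BlockSender still calls and whose result it still caches. 
The Replica shape and the getReplicaVisibleLength replacement shown here are 
illustrative assumptions, not the actual HDFS code.

{noformat}
interface Replica {
  long getVisibleLength();
}

interface FSDatasetInterface {
  /** @deprecated per HDFS-680; callers should use narrower queries instead. */
  @Deprecated
  Replica getReplica(long blockId);

  // Hypothetical narrower replacement query (not necessarily the real API).
  long getReplicaVisibleLength(long blockId);
}

class BlockSender {
  private final Replica replica; // the member HDFS-1512 wants removed

  BlockSender(FSDatasetInterface dataset, long blockId) {
    this.replica = dataset.getReplica(blockId); // deprecated call flagged here
  }

  long visibleLength() {
    return replica.getVisibleLength();
  }
}
{noformat}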

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1513) Fix a number of warnings

2010-11-19 Thread Eli Collins (JIRA)
Fix a number of warnings


 Key: HDFS-1513
 URL: https://issues.apache.org/jira/browse/HDFS-1513
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.22.0, 0.23.0


There are a number of warnings, besides DeprecatedUTF8, HDFS-1512, and two 
warnings caused by a Java bug (http://bugs.sun.com/view_bug.do?bug_id=646014), 
that we can fix.
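
Purely for illustration (these are not necessarily the warnings fixed by the 
patch), the usual javac warning cleanups look like this: parameterize raw types, 
and where a deprecated call has to remain, add a narrowly scoped 
@SuppressWarnings.

{noformat}
import java.util.ArrayList;
import java.util.List;

class WarningExamples {
  // Before: a raw type triggers an "unchecked" javac warning, e.g.
  //   List names = new ArrayList();
  // After: parameterizing the type removes the warning.
  List<String> names = new ArrayList<String>();

  // Where a deprecated API must still be called, a narrowly scoped
  // suppression keeps the build output clean without hiding other warnings.
  @SuppressWarnings("deprecation")
  void callLegacyApi() {
    // ... call to the deprecated method would go here ...
  }
}
{noformat}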

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1001) DataXceiver and BlockReader disagree on when to send/recv CHECKSUM_OK

2010-11-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-1001:
--

Attachment: HDFS-1001-6.patch

Attached a patch that fixes a warning in the last patch.

 DataXceiver and BlockReader disagree on when to send/recv CHECKSUM_OK
 -

 Key: HDFS-1001
 URL: https://issues.apache.org/jira/browse/HDFS-1001
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Reporter: bc Wong
Assignee: bc Wong
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-1001-2.patch, HDFS-1001-3.patch, HDFS-1001-3.patch, 
 HDFS-1001-4.patch, HDFS-1001-5.patch, HDFS-1001-6.patch, 
 HDFS-1001-rebased.patch, HDFS-1001.patch, HDFS-1001.patch.1


 Running TestPread with additional debug statements reveals that the 
 BlockReader sends CHECKSUM_OK when the DataXceiver doesn't expect it. 
 Currently it doesn't matter, since DataXceiver closes the connection after 
 each op and CHECKSUM_OK is the last thing on the wire. But if we want to 
 cache connections, they need to agree on the exchange of CHECKSUM_OK.
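
A hedged sketch of what agreeing on the exchange could look like once 
connections are cached: the reader commits to sending CHECKSUM_OK exactly when 
it has read the whole block, and the server consumes that status before reusing 
the socket. The class, method names, and status encoding below are assumptions, 
not the actual DataTransferProtocol.

{noformat}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class ChecksumOkSketch {
  static final int CHECKSUM_OK = 5; // placeholder status code, not the real constant

  // Reader side: acknowledge only after the whole block was read successfully,
  // so the other end knows exactly when a status is coming.
  static void finishRead(DataOutputStream out, boolean readWholeBlock)
      throws IOException {
    if (readWholeBlock) {
      out.writeShort(CHECKSUM_OK);
      out.flush();
    }
  }

  // Serving side: before returning the connection to a cache, consume the
  // acknowledgement the reader promised to send (instead of just closing).
  static void finishWrite(DataInputStream in, boolean sentWholeBlock)
      throws IOException {
    if (sentWholeBlock) {
      int status = in.readShort();
      if (status != CHECKSUM_OK) {
        throw new IOException("Expected CHECKSUM_OK, got " + status);
      }
    }
  }
}
{noformat}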

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1513) Fix a number of warnings

2010-11-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-1513:
--

Status: Patch Available  (was: Open)

 Fix a number of warnings
 

 Key: HDFS-1513
 URL: https://issues.apache.org/jira/browse/HDFS-1513
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.22.0, 0.23.0

 Attachments: hdfs-1513-1.patch


 There are a number of warnings, besides DeprecatedUTF8, HDFS-1512, and two 
 warnings caused by a Java bug (http://bugs.sun.com/view_bug.do?bug_id=646014), 
 that we can fix.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1513) Fix a number of warnings

2010-11-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-1513:
--

Attachment: hdfs-1513-1.patch

Patch attached. Fixes almost all the warnings.

 Fix a number of warnings
 

 Key: HDFS-1513
 URL: https://issues.apache.org/jira/browse/HDFS-1513
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.22.0, 0.23.0

 Attachments: hdfs-1513-1.patch


 There are a number of warnings, besides DeprecatedUTF8, HDFS-1512, and two 
 warnings caused by a Java bug (http://bugs.sun.com/view_bug.do?bug_id=646014), 
 that we can fix.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1001) DataXceiver and BlockReader disagree on when to send/recv CHECKSUM_OK

2010-11-19 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933735#action_12933735
 ] 

Eli Collins commented on HDFS-1001:
---

Unit test results look the same as trunk before the latest patch.
 
{noformat}
 [exec] 
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 9 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
(version 1.3.9) warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] +1 system test framework.  The patch passed system test 
framework compile.
 [exec] 
{noformat}

 DataXceiver and BlockReader disagree on when to send/recv CHECKSUM_OK
 -

 Key: HDFS-1001
 URL: https://issues.apache.org/jira/browse/HDFS-1001
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Reporter: bc Wong
Assignee: bc Wong
Priority: Minor
 Fix For: 0.23.0

 Attachments: HDFS-1001-2.patch, HDFS-1001-3.patch, HDFS-1001-3.patch, 
 HDFS-1001-4.patch, HDFS-1001-5.patch, HDFS-1001-6.patch, 
 HDFS-1001-rebased.patch, HDFS-1001.patch, HDFS-1001.patch.1


 Running TestPread with additional debug statements reveals that the 
 BlockReader sends CHECKSUM_OK when the DataXceiver doesn't expect it. 
 Currently it doesn't matter, since DataXceiver closes the connection after 
 each op and CHECKSUM_OK is the last thing on the wire. But if we want to 
 cache connections, they need to agree on the exchange of CHECKSUM_OK.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1001) DataXceiver and BlockReader disagree on when to send/recv CHECKSUM_OK

2010-11-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-1001:
--

   Resolution: Fixed
Fix Version/s: 0.22.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch 22.  Thanks bc!

 DataXceiver and BlockReader disagree on when to send/recv CHECKSUM_OK
 -

 Key: HDFS-1001
 URL: https://issues.apache.org/jira/browse/HDFS-1001
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Reporter: bc Wong
Assignee: bc Wong
Priority: Minor
 Fix For: 0.22.0, 0.23.0

 Attachments: HDFS-1001-2.patch, HDFS-1001-3.patch, HDFS-1001-3.patch, 
 HDFS-1001-4.patch, HDFS-1001-5.patch, HDFS-1001-6.patch, 
 HDFS-1001-rebased.patch, HDFS-1001.patch, HDFS-1001.patch.1


 Running TestPread with additional debug statements reveals that the 
 BlockReader sends CHECKSUM_OK when the DataXceiver doesn't expect it. 
 Currently it doesn't matter, since DataXceiver closes the connection after 
 each op and CHECKSUM_OK is the last thing on the wire. But if we want to 
 cache connections, they need to agree on the exchange of CHECKSUM_OK.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1473) Refactor storage management into separate classes than fsimage file reading/writing

2010-11-19 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933864#action_12933864
 ] 

Todd Lipcon commented on HDFS-1473:
---

Unit test results:
[junit] Test org.apache.hadoop.hdfs.TestHDFSTrash FAILED (timeout) [ 
HDFS-1471]
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestStorageRestore 
FAILED [HDFS-1496]
[junit] Test org.apache.hadoop.hdfs.server.balancer.TestBalancer FAILED 
[HDFS-613]
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestBlockTokenWithDFS 
FAILED [HDFS-613]
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace 
FAILED [HDFS-1503]
[junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery 
FAILED [HDFS-1502]

test-patch had one new findbugs issue; I'm running a slightly updated patch 
through now.

 Refactor storage management into separate classes than fsimage file 
 reading/writing
 ---

 Key: HDFS-1473
 URL: https://issues.apache.org/jira/browse/HDFS-1473
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-1473-prelim.txt, hdfs-1473.txt


 Currently the FSImage class is responsible both for storage management (e.g., 
 moving files around, tracking file names, the VERSION file, etc.) and for the 
 actual serialization and deserialization of the fsimage file within the 
 storage directory.
 I'd like to refactor the loading and saving code into new classes. This will 
 make testing easier and also make the major changes in HDFS-1073 easier to 
 understand.
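
To make the intended split concrete, a rough sketch with made-up class names 
(not the classes the patch actually introduces): one class owns directories, 
file naming, and VERSION handling, while another owns only loading and saving 
the fsimage format.

{noformat}
import java.io.File;
import java.io.IOException;
import java.util.List;

// Storage management only: directories, file names, VERSION handling.
class NNStorageSketch {
  private final List<File> imageDirs;

  NNStorageSketch(List<File> imageDirs) {
    this.imageDirs = imageDirs;
  }

  File imageFile(File dir) {
    return new File(dir, "fsimage");
  }

  List<File> imageDirs() {
    return imageDirs;
  }
}

interface NamespaceState { /* in-memory namesystem, elided */ }

// Serialization only: knows the fsimage on-disk format, knows nothing about
// which directories exist or how they are managed.
class FSImageLoaderSaverSketch {
  NamespaceState load(File imageFile) throws IOException {
    throw new UnsupportedOperationException("sketch only");
  }

  void save(NamespaceState ns, File imageFile) throws IOException {
    throw new UnsupportedOperationException("sketch only");
  }
}
{noformat}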

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1509) Resync discarded directories in fs.name.dir during saveNamespace command

2010-11-19 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933895#action_12933895
 ] 

dhruba borthakur commented on HDFS-1509:


Absolutely agree that constantly trying to write to a failed directory will 
slow things down; I am not suggesting we do this. Instead, bin/hadoop dfsadmin 
-savenamespace is a command line utility and is likely to be run manually by 
an administrator. When this command is run, the namenode saves its entire image 
from memory to fsimage (and truncates fsedits). I would like this operation to 
try writing the fsimage to all configured directories.
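
A hedged sketch of the behaviour being proposed, with hypothetical names: on an 
explicit saveNamespace the namenode attempts every configured fs.name.dir, and 
any directory that accepts the new image is put back into rotation for edits.

{noformat}
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class SaveNamespaceSketch {
  private final List<File> configuredDirs;                     // all fs.name.dir entries
  private final List<File> activeDirs = new ArrayList<File>(); // dirs currently receiving edits

  SaveNamespaceSketch(List<File> configuredDirs) {
    this.configuredDirs = configuredDirs;
    this.activeDirs.addAll(configuredDirs);
  }

  void saveNamespace() {
    activeDirs.clear();
    for (File dir : configuredDirs) {     // retry every configured dir, not just healthy ones
      try {
        writeFsImage(dir);                // dump the full in-memory image
        truncateEdits(dir);               // and start a fresh edits file
        activeDirs.add(dir);              // directory is back in rotation
      } catch (IOException e) {
        // leave the directory out of activeDirs; edits won't be written there
      }
    }
  }

  private void writeFsImage(File dir) throws IOException { /* elided */ }
  private void truncateEdits(File dir) throws IOException { /* elided */ }
}
{noformat}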

 Resync discarded directories in fs.name.dir during saveNamespace command
 

 Key: HDFS-1509
 URL: https://issues.apache.org/jira/browse/HDFS-1509
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 In the current implementation, if the namenode encounters an error while 
 writing to an fs.name.dir directory, it stops writing new edits to that 
 directory. My proposal is to make the namenode write the fsimage to all 
 configured directories in fs.name.dir and, from then on, continue writing 
 fsedits to all configured directories.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1509) Resync discarded directories in fs.name.dir during saveNamespace command

2010-11-19 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933929#action_12933929
 ] 

Eli Collins commented on HDFS-1509:
---

Ah, thanks. In the description I think you mean fsimage rather than edits then.

 Resync discarded directories in fs.name.dir during saveNamespace command
 

 Key: HDFS-1509
 URL: https://issues.apache.org/jira/browse/HDFS-1509
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 In the current implementation, if the namenode encounters an error while 
 writing to an fs.name.dir directory, it stops writing new edits to that 
 directory. My proposal is to make the namenode write the fsimage to all 
 configured directories in fs.name.dir and, from then on, continue writing 
 fsedits to all configured directories.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1167) [Herriot] New property for local conf directory in system-test-hdfs.xml file.

2010-11-19 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933932#action_12933932
 ] 

Konstantin Boudnik commented on HDFS-1167:
--

I am trying to understand the logic behind this patch. Why can't we use a 
system temp directory for these purposes? Why do we need a special variable 
just for the configs? Perhaps I am missing something?

 [Herriot] New property for local conf directory in system-test-hdfs.xml file.
 -

 Key: HDFS-1167
 URL: https://issues.apache.org/jira/browse/HDFS-1167
 Project: Hadoop HDFS
  Issue Type: Task
  Components: test
Affects Versions: 0.21.0
Reporter: Vinay Kumar Thota
Assignee: Vinay Kumar Thota
 Attachments: HDFS-1167.patch, HDFS-1167.patch


 Adding new property in system-test.xml file for local configuration directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1167) [Herriot] New property for local conf directory in system-test-hdfs.xml file.

2010-11-19 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933975#action_12933975
 ] 

Konstantin Boudnik commented on HDFS-1167:
--

Ok, it looks like the code for this was committed to Common a long time ago, 
but the change for the config hasn't come through somehow. There should be a 
similar change for MR.

As for putting this config thing into Common: as much as it might seem suitable, 
we can't do it, because Common's part of Herriot doesn't have any config 
processing in it ;( In fact, I think we did the wrong thing when 
{{getHadoopLocalConfDir}} was added to the Common code. This has created an 
indirect dependency on the config processing logic in the upstream components 
(HDFS, MR).

Therefore, I think this patch (and its counterpart) can be committed to make 
the change complete for now. However, a new JIRA needs to be created to 
refactor this method out of the Common module.
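
For illustration, this is roughly how a Herriot test could resolve such a 
directory through the standard Configuration API; the property key below is a 
placeholder, not necessarily the one this patch adds.

{noformat}
import org.apache.hadoop.conf.Configuration;

class LocalConfDirSketch {
  static String localConfDir(Configuration conf) {
    // Fall back to the JVM temp dir if the property is not set, which is
    // roughly the alternative raised in the discussion above.
    return conf.get("test.system.hdrc.hadoop.local.confdir",   // placeholder key
                    System.getProperty("java.io.tmpdir"));
  }
}
{noformat}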

 [Herriot] New property for local conf directory in system-test-hdfs.xml file.
 -

 Key: HDFS-1167
 URL: https://issues.apache.org/jira/browse/HDFS-1167
 Project: Hadoop HDFS
  Issue Type: Task
  Components: test
Affects Versions: 0.21.0
Reporter: Vinay Kumar Thota
Assignee: Vinay Kumar Thota
 Attachments: HDFS-1167.patch, HDFS-1167.patch


 Adding new property in system-test.xml file for local configuration directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1476) listCorruptFileBlocks should be functional while the name node is still in safe mode

2010-11-19 Thread Patrick Kling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Kling updated HDFS-1476:


Attachment: HDFS-1476.4.patch

Updated test case to play nice with HDFS-1482.

 listCorruptFileBlocks should be functional while the name node is still in 
 safe mode
 

 Key: HDFS-1476
 URL: https://issues.apache.org/jira/browse/HDFS-1476
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Patrick Kling
Assignee: Patrick Kling
 Attachments: HDFS-1476.2.patch, HDFS-1476.3.patch, HDFS-1476.4.patch, 
 HDFS-1476.patch


 This would allow us to detect whether missing blocks can be fixed using Raid 
 and, if that is the case, exit safe mode earlier.
 One way to make listCorruptFileBlocks available before the name node has 
 exited safe mode would be to perform a scan of the blocks map on each call to 
 listCorruptFileBlocks to determine if there are any blocks with no replicas. 
 This scan could be parallelized by dividing the space of block IDs into 
 multiple intervals that can be scanned independently.
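
A hedged sketch of the parallel scan idea, with hypothetical names and a plain 
Map standing in for the blocks map; it partitions the block-ID space by modulus 
rather than contiguous intervals purely to keep the example short.

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class CorruptScanSketch {
  static List<Long> findBlocksWithNoReplicas(final Map<Long, Integer> replicaCounts,
                                             int partitions) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(partitions);
    List<Future<List<Long>>> futures = new ArrayList<Future<List<Long>>>();
    for (int p = 0; p < partitions; p++) {
      final int part = p;
      final int total = partitions;
      futures.add(pool.submit(new Callable<List<Long>>() {
        public List<Long> call() {
          List<Long> hits = new ArrayList<Long>();
          for (Map.Entry<Long, Integer> e : replicaCounts.entrySet()) {
            // Each worker owns a disjoint slice of the block-ID space.
            if (Math.floorMod(e.getKey().longValue(), (long) total) == part
                && e.getValue() == 0) {
              hits.add(e.getKey());
            }
          }
          return hits;
        }
      }));
    }
    List<Long> blocksWithNoReplicas = new ArrayList<Long>();
    for (Future<List<Long>> f : futures) {
      blocksWithNoReplicas.addAll(f.get());
    }
    pool.shutdown();
    return blocksWithNoReplicas;
  }
}
{noformat}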

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1511) 98 Release Audit warnings on trunk and branch-0.22

2010-11-19 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12933995#action_12933995
 ] 

Konstantin Boudnik commented on HDFS-1511:
--

OpenOffice files (src/docs/src/documentation/resources/images/FI-framework.odg) 
should be going, along with at least the following:
cli/clitest_data
src/test/smoke|commit|all-tests
src/test/*xml
webapps/**/*xml
src/docs/releasenotes.html

 98 Release Audit warnings on trunk and branch-0.22
 --

 Key: HDFS-1511
 URL: https://issues.apache.org/jira/browse/HDFS-1511
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Nigel Daley
Priority: Blocker
 Fix For: 0.22.0, 0.23.0

 Attachments: releaseauditWarnings.txt


 There are 98 release audit warnings on trunk. See attached txt file. These 
 must be fixed or filtered out to get back to a reasonably small number of 
 warnings. The OK_RELEASEAUDIT_WARNINGS property in 
 src/test/test-patch.properties should also be set appropriately in the patch 
 that fixes this issue.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1167) [Herriot] New property for local conf directory in system-test-hdfs.xml file.

2010-11-19 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-1167:
-

Attachment: hdfs-1167.patch

Slightly polished description. 

 [Herriot] New property for local conf directory in system-test-hdfs.xml file.
 -

 Key: HDFS-1167
 URL: https://issues.apache.org/jira/browse/HDFS-1167
 Project: Hadoop HDFS
  Issue Type: Task
  Components: test
Affects Versions: 0.21.0
Reporter: Vinay Kumar Thota
Assignee: Vinay Kumar Thota
 Attachments: hdfs-1167.patch, HDFS-1167.patch, HDFS-1167.patch


 Adding new property in system-test.xml file for local configuration directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1167) New property for local conf directory in system-test-hdfs.xml file.

2010-11-19 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-1167:
-

  Environment: herriot
Affects Version/s: (was: 0.21.0)
   0.22.0
Fix Version/s: 0.22.0
  Summary: New property for local conf directory in 
system-test-hdfs.xml file.  (was: [Herriot] New property for local conf 
directory in system-test-hdfs.xml file.)

 New property for local conf directory in system-test-hdfs.xml file.
 ---

 Key: HDFS-1167
 URL: https://issues.apache.org/jira/browse/HDFS-1167
 Project: Hadoop HDFS
  Issue Type: Task
  Components: test
Affects Versions: 0.22.0
 Environment: herriot
Reporter: Vinay Kumar Thota
Assignee: Vinay Kumar Thota
 Fix For: 0.22.0

 Attachments: hdfs-1167.patch, HDFS-1167.patch, HDFS-1167.patch


 Adding new property in system-test.xml file for local configuration directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1167) New property for local conf directory in system-test-hdfs.xml file.

2010-11-19 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-1167:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk and branch-0.22. Thank you, Vinay.

 New property for local conf directory in system-test-hdfs.xml file.
 ---

 Key: HDFS-1167
 URL: https://issues.apache.org/jira/browse/HDFS-1167
 Project: Hadoop HDFS
  Issue Type: Task
  Components: test
Affects Versions: 0.22.0
 Environment: herriot
Reporter: Vinay Kumar Thota
Assignee: Vinay Kumar Thota
 Fix For: 0.22.0

 Attachments: hdfs-1167.patch, HDFS-1167.patch, HDFS-1167.patch


 Adding new property in system-test.xml file for local configuration directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1467) Append pipeline never succeeds with more than one replica

2010-11-19 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12934044#action_12934044
 ] 

Todd Lipcon commented on HDFS-1467:
---

Test results:
[junit] Test org.apache.hadoop.hdfs.TestHDFSTrash FAILED (timeout)
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestStorageRestore 
FAILED
[junit] Test org.apache.hadoop.hdfs.server.balancer.TestBalancer FAILED
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestBlockTokenWithDFS 
FAILED
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace FAILED
[junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED

These are all failing on trunk as well, so I think this can be re-committed 
after a review.

 Append pipeline never succeeds with more than one replica
 -

 Key: HDFS-1467
 URL: https://issues.apache.org/jira/browse/HDFS-1467
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Attachments: failed-TestPipelines.txt, hdfs-1467-fixed.txt, 
 hdfs-1467.txt


 TestPipelines appears to be failing on trunk:
 Should be RBW replica after sequence of calls append()/write()/hflush() 
 expected:RBW but was:FINALIZED
 junit.framework.AssertionFailedError: Should be RBW replica after sequence of 
 calls append()/write()/hflush() expected:RBW but was:FINALIZED
 at 
 org.apache.hadoop.hdfs.TestPipelines.pipeline_01(TestPipelines.java:109)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-1467) Append pipeline never succeeds with more than one replica

2010-11-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-1467.
---

   Resolution: Fixed
Fix Version/s: 0.23.0
   0.22.0

I've committed this to trunk and branch 22. Thanks Todd.

 Append pipeline never succeeds with more than one replica
 -

 Key: HDFS-1467
 URL: https://issues.apache.org/jira/browse/HDFS-1467
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.22.0, 0.23.0

 Attachments: failed-TestPipelines.txt, hdfs-1467-fixed.txt, 
 hdfs-1467.txt


 TestPipelines appears to be failing on trunk:
 Should be RBW replica after sequence of calls append()/write()/hflush() 
 expected:RBW but was:FINALIZED
 junit.framework.AssertionFailedError: Should be RBW replica after sequence of 
 calls append()/write()/hflush() expected:RBW but was:FINALIZED
 at 
 org.apache.hadoop.hdfs.TestPipelines.pipeline_01(TestPipelines.java:109)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1448) Create multi-format parser for edits logs file, support binary and XML formats initially

2010-11-19 Thread Erik Steffl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12934099#action_12934099
 ] 

Erik Steffl commented on HDFS-1448:
---

HDFS-1448-0.22-3.patch implements the changes suggested in the review at 
https://issues.apache.org/jira/browse/HDFS-1448?focusedCommentId=12931182page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12931182

A few notes:

XmlTokenizer.java, public Token read(Token t): the rationale is documented in 
Tokenizer.java.

TestOfflineEditsViewer.java, "Nit: Line 174, save a few characters with a while 
loop": the function has changed a bit, so I don't think this applies now.

 Create multi-format parser for edits logs file, support binary and XML 
 formats initially
 

 Key: HDFS-1448
 URL: https://issues.apache.org/jira/browse/HDFS-1448
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Affects Versions: 0.22.0
Reporter: Erik Steffl
Assignee: Erik Steffl
 Fix For: 0.22.0

 Attachments: editsStored, HDFS-1448-0.22-1.patch, 
 HDFS-1448-0.22-2.patch, HDFS-1448-0.22-3.patch, HDFS-1448-0.22.patch, Viewer 
 hierarchy.pdf


 Create a multi-format parser for the edits logs file, supporting binary and 
 XML formats initially.
 Parsing should work from any supported format to any other supported format 
 (e.g. from binary to XML and from XML to binary).
 The binary format is the format used by the FSEditLog class to read/write the 
 edits file.
 The primary reason to develop this tool is to help with troubleshooting; the 
 binary format is hard to read and edit (for human troubleshooters).
 Longer term, it could be used to clean up and minimize the parsers for the 
 fsimage and edits files. The edits parser, OfflineEditsViewer, is written in a 
 very similar fashion to OfflineImageViewer. The next step would be to merge 
 OfflineImageViewer and OfflineEditsViewer and use the result in both FSImage 
 and FSEditLog. This is subject to change, specifically depending on the 
 adoption of Avro (which would completely change how objects are serialized, as 
 well as provide ways to convert files to different formats).
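
A hedged sketch of the any-to-any structure the description outlines: a 
per-format reader produces tokens, a per-format writer consumes them, and 
conversion is just pairing a reader with a writer. The interface names are 
illustrative, not the ones in the patch.

{noformat}
import java.io.IOException;

interface EditsToken { /* op code, fields, ... elided */ }

interface EditsReader {
  /** Returns the next token, or null at end of input. */
  EditsToken next() throws IOException;
}

interface EditsWriter {
  void write(EditsToken token) throws IOException;
  void close() throws IOException;
}

class EditsConverter {
  // Binary->XML and XML->binary are just different reader/writer pairings.
  static void convert(EditsReader in, EditsWriter out) throws IOException {
    EditsToken t;
    while ((t = in.next()) != null) {
      out.write(t);
    }
    out.close();
  }
}
{noformat}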

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.