[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-03-04 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12841429#action_12841429 ]

Hudson commented on MAPREDUCE-1510:
---

Integrated in Hadoop-Mapreduce-trunk #248 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Mapreduce-trunk/248/])


 RAID should regenerate parity files if they get deleted
 ---

 Key: MAPREDUCE-1510
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1510
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/raid
Reporter: Rodrigo Schmidt
Assignee: Rodrigo Schmidt
 Attachments: MAPREDUCE-1510.1.patch, MAPREDUCE-1510.2.patch, 
 MAPREDUCE-1510.patch


 Currently, if a source file has a replication factor lower than or equal to 
 the one expected by RAID, the file is skipped and no parity file is 
 generated. I don't think this is good behavior, since parity files can be 
 wrongly deleted, leaving the source file with a low replication factor. In 
 that case, RAID should be able to recreate the parity file.
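The policy change described above can be sketched as a before/after predicate. This is a hypothetical simplification for illustration only; the method and parameter names (`shouldRaidOld`, `shouldRaidNew`, `parityExists`) are invented here and are not the actual contrib/raid API.

```java
// Illustrative sketch of the selection-policy change discussed in this issue.
// All names are invented; the real logic lives in the contrib/raid RaidNode code.
public class RaidPolicySketch {

    // Old behavior: skip any file whose replication factor is already at or
    // below the level RAID expects, even if its parity file is missing.
    public static boolean shouldRaidOld(int replication, int targetReplication) {
        return replication > targetReplication;
    }

    // New behavior: also (re)generate the parity file when it does not exist,
    // so a wrongly deleted parity file gets recreated.
    public static boolean shouldRaidNew(int replication, int targetReplication,
                                        boolean parityExists) {
        return replication > targetReplication || !parityExists;
    }

    public static void main(String[] args) {
        // A file already at the target replication whose parity file is gone:
        // skipped before the patch, re-raided after it.
        System.out.println(shouldRaidOld(2, 2));        // false: skipped
        System.out.println(shouldRaidNew(2, 2, false)); // true: re-raided
    }
}
```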

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-03-02 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12840300#action_12840300 ]

Hadoop QA commented on MAPREDUCE-1510:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12437287/MAPREDUCE-1510.2.patch
  against trunk revision 918037.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 12 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/11/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/11/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/11/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h4.grid.sp2.yahoo.net/11/console

This message is automatically generated.


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-03-02 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12840317#action_12840317 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


As in my local test execution, the only test that failed on Hudson was 
org.apache.hadoop.mapred.TestMiniMRLocalFS.testWithLocal, which is not related 
to this patch and is already broken in trunk.


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-03-02 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12840353#action_12840353 ]

Hudson commented on MAPREDUCE-1510:
---

Integrated in Hadoop-Mapreduce-trunk-Commit #254 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/254/])
RAID should regenerate parity files if they get deleted. (Rodrigo Schmidt via dhruba)



[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-03-01 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839921#action_12839921 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


Hudson is taking too long to generate a report on this one, so I'm doing the 
testing myself.

ant test-patch returned the following:

 [exec] There appear to be 0 release audit warnings before the patch and 0 
release audit warnings after applying the patch.
 [exec] 
 [exec] 
 [exec] 
 [exec] 
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 12 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] 
 [exec] 
 [exec] ======================================================================
 [exec] ======================================================================
 [exec] Finished build.
 [exec] ======================================================================
 [exec] ======================================================================

Now I'm running the unit tests.



[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-03-01 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839952#action_12839952 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


Passed all unit tests except

[junit] Test org.apache.hadoop.mapred.TestMiniMRLocalFS FAILED

But that test is already broken in trunk and this patch doesn't modify anything 
related to it, so it doesn't count.

Now I'm running the contrib tests.


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-03-01 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839968#action_12839968 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


Passed all contrib unit tests.

I also verified the logs and confirmed that the RaidNode was binding to 
random free ports, different from the default one.

This patch should be fine to commit, provided it passes human review.
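Binding to a random free port, as verified above, is the standard way a test daemon avoids colliding with a service already listening on the default port: request port 0 and let the OS assign a free ephemeral port. A minimal sketch using plain `java.net` follows; the actual RaidNode wiring goes through Hadoop configuration and is not shown here, so the class and method names are illustrative.

```java
import java.net.ServerSocket;

// Requesting port 0 asks the OS for any free ephemeral port -- the usual
// trick for unit-test daemons (like a RaidNode under test) to avoid
// "address already in use" failures on a fixed default port.
public class EphemeralPortExample {

    // Bind to port 0 and report which port the OS actually assigned.
    public static int bindToFreePort() throws Exception {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws Exception {
        int port = bindToFreePort();
        System.out.println("bound to free port " + port);
    }
}
```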


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12837212#action_12837212 ]

Hadoop QA commented on MAPREDUCE-1510:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12436685/MAPREDUCE-1510.1.patch
  against trunk revision 915223.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/476/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/476/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/476/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/476/console

This message is automatically generated.


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-23 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12837611#action_12837611 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


I really don't know what is going on with Hudson.

It is failing a MiniMR test that is completely unrelated to this patch, but it 
doesn't fail any unit test.

The logs say it is failing contrib unit tests because some services are trying 
to bind to used ports, but it looks like a problem with Hudson more than a 
problem with my patch.

I did a full ant test-patch, followed by an ant test, followed by an ant 
test-contrib, and the only thing that failed was the unrelated MiniMR test.

I'm quite convinced this patch is harmless to trunk. Here is the final part of 
the output from my ant test-contrib execution:

...
PASSED ALL PREVIOUS CONTRIB TESTS
...
test-junit:
[junit] Running org.apache.hadoop.hdfs.TestRaidDfs
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 5.377 sec
[junit] Running org.apache.hadoop.raid.TestRaidHar
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 45.107 sec
[junit] Running org.apache.hadoop.raid.TestRaidNode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 61.432 sec
[junit] Running org.apache.hadoop.raid.TestRaidPurge
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 23.123 sec

test:

BUILD SUCCESSFUL



[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-23 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12837613#action_12837613 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


When I said

It is failing a MiniMR test that is completely unrelated to this patch, but it 
doesn't fail any unit test.

Please read

TRUNK is failing a MiniMR test that is completely unrelated to this patch, but 
MY PATCH doesn't fail any unit test ON MY LOCAL EXECUTIONS.


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-23 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12837619#action_12837619 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


Sorry again...

My patch doesn't fail any contrib unit test on my local executions. The MiniMR 
test still fails because it's broken in trunk.


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-21 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12836495#action_12836495 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


Dhruba,

I actually found a bug when I added the new unit test. I'll create a new JIRA 
for that since other people might want to search for it directly.

Thanks!


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-19 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12835652#action_12835652 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


Hi, Dhruba!

I just looked at the unit tests, and it doesn't seem like we have the test 
you mentioned.
But you are right! In this scenario the file also gets re-raided. I didn't 
change that behavior.

I just allowed files to be re-raided in case they lose their parity files for 
some reason.



[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-19 Thread Rodrigo Schmidt (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12835661#action_12835661 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:


Sure! I will do that and resubmit the patch tomorrow.


[jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted

2010-02-19 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12835687#action_12835687 ]

Hadoop QA commented on MAPREDUCE-1510:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12436294/MAPREDUCE-1510.patch
  against trunk revision 911519.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/465/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/465/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/465/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-h6.grid.sp2.yahoo.net/465/console

This message is automatically generated.
