[jira] [Updated] (HDFS-2815) Namenode is not coming out of safemode when we perform ( NN crash + restart ) . Also FSCK report shows blocks missed.

2012-07-22 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HDFS-2815:
-

Target Version/s: 2.0.0-alpha, 1.2.0, 3.0.0  (was: 1.1.0, 2.0.0-alpha, 
3.0.0)
   Fix Version/s: (was: 0.23.2)
  (was: 0.24.0)
  2.0.0-alpha
  3.0.0

Updated Fix Versions to match @Robert's changes to Target Versions.
Changed Target Version 1.1.0 to 1.2.0, since the branch-1 patch was not 
reviewed and committed in time for 1.1.0.
Please do proceed with the port to branch-1.  Thanks.

> Namenode is not coming out of safemode when we perform ( NN crash + restart ) 
> .  Also FSCK report shows blocks missed.
> --
>
> Key: HDFS-2815
> URL: https://issues.apache.org/jira/browse/HDFS-2815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 0.23.1, 1.0.0, 1.1.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Fix For: 2.0.0-alpha, 3.0.0
>
> Attachments: HDFS-2815-22-branch.patch, HDFS-2815-Branch-1.patch, 
> HDFS-2815.patch, HDFS-2815.patch
>
>
> When testing HA (internal) with continuous switchovers at roughly 5-minute 
> intervals, I found some *blocks missing*, and the namenode went into safemode 
> after the next switch.
>After analysis, I found that the affected files had already been deleted by 
> clients, but I don't see any delete command logs in the namenode log files. 
> Nevertheless the namenode had added those blocks to invalidateSets and the 
> DNs had deleted the blocks.
>On restart, the namenode went into safemode, still expecting more blocks 
> before it could leave safemode.
>The likely reason: the file is deleted in memory and its blocks are added 
> to invalidates, and only after that does the namenode try to sync the edits 
> to the editlog file. In that window the NN asks the DNs to delete the blocks, 
> and the namenode then shuts down before persisting the delete to the editlog 
> (log behind).
>For this reason we may not get the INFO logs about the delete, and when we 
> restart the namenode (in my scenario it is again a switch), it also expects 
> the already-deleted blocks, because the delete request was never persisted 
> to the editlog.
>I reproduced this scenario with debug points. *I feel we should not add the 
> blocks to invalidates before persisting to the editlog*. 
> Note: for the switch, we used kill -9 (force kill)
>   I am currently on version 0.20.2. The same was verified on 0.23 as well, in 
> a normal crash + restart scenario.
>  
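The ordering bug described above can be illustrated with a toy model of the delete path. This is a minimal sketch, not NameNode code; the class, lists, and method names are all illustrative stand-ins for the editlog, the invalidateSets, and the delete operation.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the proposed fix: persist the delete to the editlog BEFORE
// scheduling the blocks for deletion, so a crash can never leave blocks
// deleted on DNs without a persisted delete record.
public class DeleteOrderingSketch {
    static List<String> editLog = new ArrayList<>();        // persisted ops
    static List<String> invalidateSet = new ArrayList<>();  // blocks DNs will delete

    static void deleteFile(String path, String block) {
        editLog.add("DELETE " + path);   // 1. persist intent (logSync) first
        invalidateSet.add(block);        // 2. only then schedule DN deletion
    }

    public static void main(String[] args) {
        deleteFile("/user/a/f1", "blk_1001");
        System.out.println("editlog=" + editLog);
        System.out.println("invalidates=" + invalidateSet);
    }
}
```

With the original ordering (invalidates first, logSync later), a kill -9 between the two steps reproduces exactly the state described: blocks gone from DNs, no delete in the editlog, safemode on restart.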

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3568) fuse_dfs: add support for security

2012-07-22 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HDFS-3568:
-

Target Version/s: 2.1.0-alpha  (was: 1.1.0, 2.1.0-alpha)

Opened HDFS-3700 for port to 1.2.0, so this jira can be properly closed.

> fuse_dfs: add support for security
> --
>
> Key: HDFS-3568
> URL: https://issues.apache.org/jira/browse/HDFS-3568
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.1.0-alpha
>
> Attachments: HDFS-3568.001.patch, HDFS-3568.002.patch, 
> HDFS-3568.003.patch, HDFS-3568.004.patch, HDFS-3568.005.patch
>
>
> fuse_dfs should have support for Kerberos authentication.  This would allow 
> FUSE to be used in a secure cluster.





[jira] [Created] (HDFS-3700) Backport HDFS-3568 to branch-1 (fuse_dfs: add support for security)

2012-07-22 Thread Matt Foley (JIRA)
Matt Foley created HDFS-3700:


 Summary: Backport HDFS-3568 to branch-1 (fuse_dfs: add support for 
security)
 Key: HDFS-3700
 URL: https://issues.apache.org/jira/browse/HDFS-3700
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Matt Foley


fuse_dfs should have support for Kerberos authentication. This would allow FUSE 
to be used in a secure cluster.  Fixed for branch-2 in HDFS-3568.





[jira] [Commented] (HDFS-3698) TestHftpFileSystem is failing in branch-1 due to changed default secure port

2012-07-22 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420439#comment-13420439
 ] 

Eli Collins commented on HDFS-3698:
---

+1  lgtm as well

> TestHftpFileSystem is failing in branch-1 due to changed default secure port
> 
>
> Key: HDFS-3698
> URL: https://issues.apache.org/jira/browse/HDFS-3698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3698.patch
>
>
> This test is failing since the default secure port changed to the HTTP port 
> upon the commit of HDFS-2617.





[jira] [Commented] (HDFS-3667) Add retry support to WebHdfsFileSystem

2012-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420413#comment-13420413
 ] 

Hadoop QA commented on HDFS-3667:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12537538/h3667_20120722.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 5 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2886//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2886//console

This message is automatically generated.

> Add retry support to WebHdfsFileSystem
> --
>
> Key: HDFS-3667
> URL: https://issues.apache.org/jira/browse/HDFS-3667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3667_20120718.patch, h3667_20120721.patch, 
> h3667_20120722.patch
>
>
> DFSClient (i.e. DistributedFileSystem) has a configurable retry policy and it 
> retries on exceptions such as connection failure, safemode.  
> WebHdfsFileSystem should have similar retry support.
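The kind of retry support requested here can be sketched as a generic bounded retry loop. This is a hypothetical illustration of the idea, not Hadoop's actual retry-policy API; the class and method names are invented for the example.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch of a bounded retry loop with fixed backoff, retrying on IOException
// (the stand-in here for connection failures, safemode, etc.).
public class RetrySketch {
    static <T> T runWithRetries(Callable<T> op, int maxRetries, long sleepMs)
            throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (IOException e) {      // retriable failure
                if (attempt >= maxRetries) {
                    throw e;               // retries exhausted, propagate
                }
                Thread.sleep(sleepMs);     // simple fixed backoff for the sketch
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice, then succeeds -- simulates a transient connection error.
        String result = runWithRetries(() -> {
            if (++calls[0] < 3) throw new IOException("connection refused");
            return "ok after " + calls[0] + " attempts";
        }, 5, 10);
        System.out.println(result);
    }
}
```

A real policy would also distinguish idempotent from non-idempotent HTTP operations and use exponential backoff, but the control flow is the same.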





[jira] [Commented] (HDFS-3698) TestHftpFileSystem is failing in branch-1 due to changed default secure port

2012-07-22 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420412#comment-13420412
 ] 

Todd Lipcon commented on HDFS-3698:
---

Looks good to me. +1

> TestHftpFileSystem is failing in branch-1 due to changed default secure port
> 
>
> Key: HDFS-3698
> URL: https://issues.apache.org/jira/browse/HDFS-3698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3698.patch
>
>
> This test is failing since the default secure port changed to the HTTP port 
> upon the commit of HDFS-2617.





[jira] [Commented] (HDFS-3696) FsShell put using WebHdfsFileSystem goes OOM when file size is big

2012-07-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420402#comment-13420402
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3696:
--

I also noticed this problem when I tested WebHdfsFileSystem with 3GB files in 
HDFS-3671.  After adding HttpURLConnection.setChunkedStreamingMode(..), the 
test ran well.  Since it is only a one-line change, I will add it with the 
retry patch (HDFS-3667).
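The fix mentioned above hinges on one HttpURLConnection setting. The sketch below shows where that call goes; the URL is a fabricated example and the code only configures the connection (openConnection() does not touch the network), so it is not a working WebHDFS client.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Without setChunkedStreamingMode, HttpURLConnection buffers the ENTIRE
// request body in memory so it can compute Content-Length -- the source of
// the OOM on large puts. Chunked transfer encoding streams the body instead.
public class ChunkedPutSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical WebHDFS endpoint, for illustration only.
        URL url = new URL(
            "http://namenode.example:50070/webhdfs/v1/tmp/big.dat?op=CREATE");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(4096);  // stream in 4 KB chunks
        System.out.println("chunked streaming configured");
    }
}
```

With this set, memory use stays flat regardless of file size, matching the behavior Kihwal observed on the read path.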

> FsShell put using WebHdfsFileSystem goes OOM when file size is big
> --
>
> Key: HDFS-3696
> URL: https://issues.apache.org/jira/browse/HDFS-3696
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Kihwal Lee
>Priority: Critical
> Fix For: 0.23.3, 3.0.0, 2.2.0-alpha
>
>
> When doing "fs -put" to a WebHdfsFileSystem (webhdfs://), the FsShell goes 
> OOM if the file size is large. When I tested, 20MB files were fine, but 200MB 
> didn't work.  
> I also tried reading a large file by issuing "-cat" and piping to a slow sink 
> in order to force buffering. The read path didn't have this problem. The 
> memory consumption stayed the same regardless of progress.





[jira] [Updated] (HDFS-3667) Add retry support to WebHdfsFileSystem

2012-07-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3667:
-

Attachment: h3667_20120722.patch

> Add retry support to WebHdfsFileSystem
> --
>
> Key: HDFS-3667
> URL: https://issues.apache.org/jira/browse/HDFS-3667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3667_20120718.patch, h3667_20120721.patch, 
> h3667_20120722.patch
>
>
> DFSClient (i.e. DistributedFileSystem) has a configurable retry policy and it 
> retries on exceptions such as connection failure, safemode.  
> WebHdfsFileSystem should have similar retry support.





[jira] [Updated] (HDFS-3667) Add retry support to WebHdfsFileSystem

2012-07-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3667:
-

Attachment: (was: h3667_20120722.patch)

> Add retry support to WebHdfsFileSystem
> --
>
> Key: HDFS-3667
> URL: https://issues.apache.org/jira/browse/HDFS-3667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3667_20120718.patch, h3667_20120721.patch
>
>
> DFSClient (i.e. DistributedFileSystem) has a configurable retry policy and it 
> retries on exceptions such as connection failure, safemode.  
> WebHdfsFileSystem should have similar retry support.





[jira] [Updated] (HDFS-3667) Add retry support to WebHdfsFileSystem

2012-07-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3667:
-

Attachment: h3667_20120722.patch

h3667_20120722.patch: fixes some bugs.

> Add retry support to WebHdfsFileSystem
> --
>
> Key: HDFS-3667
> URL: https://issues.apache.org/jira/browse/HDFS-3667
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3667_20120718.patch, h3667_20120721.patch, 
> h3667_20120722.patch
>
>
> DFSClient (i.e. DistributedFileSystem) has a configurable retry policy and it 
> retries on exceptions such as connection failure, safemode.  
> WebHdfsFileSystem should have similar retry support.





[jira] [Commented] (HDFS-3530) TestFileAppend2.testComplexAppend occasionally fails

2012-07-22 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420345#comment-13420345
 ] 

Todd Lipcon commented on HDFS-3530:
---

I agree it's not a great _unit_ test case for the reason you mentioned, but 
deleting it without understanding why it fails sometimes doesn't seem right. It 
acts as a decent stress-test/functional test, and it's still worrisome that it 
fails.

> TestFileAppend2.testComplexAppend occasionally fails
> 
>
> Key: HDFS-3530
> URL: https://issues.apache.org/jira/browse/HDFS-3530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Eli Collins
>Assignee: Tomohiko Kinebuchi
> Attachments: HDFS-3530-for-debug.txt, HDFS-3530.txt, 
> PreCommit-HADOOP-Build #1116 test - testComplexAppend.html.gz
>
>
> TestFileAppend2.testComplexAppend occasionally fails with the following:
> junit.framework.AssertionFailedError: testComplexAppend Worker encountered 
> exceptions.
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:385)





[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-07-22 Thread eric baldeschwieler (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420307#comment-13420307
 ] 

eric baldeschwieler commented on HDFS-2617:
---

Yupp.  We leave 2.0 as is, without KSSL.  The 0.23 guys can choose to patch or 
not.


I've created HDFS-3699 - HftpFileSystem should try both KSSL and SPNEGO when 
authentication is required.  I'll muster up some help with the implementation.

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 1.2.0, 2.1.0-alpha
>
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
> HDFS-2617-branch-1.patch, HDFS-2617-branch-1.patch, HDFS-2617-branch-1.patch, 
> HDFS-2617-config.patch, HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, 
> HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, hdfs-2617-1.1.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).





[jira] [Updated] (HDFS-3699) HftpFileSystem should try both KSSL and SPNEGO when authentication is required

2012-07-22 Thread eric baldeschwieler (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eric baldeschwieler updated HDFS-3699:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-2617

> HftpFileSystem should try both KSSL and SPNEGO when authentication is required
> --
>
> Key: HDFS-3699
> URL: https://issues.apache.org/jira/browse/HDFS-3699
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: eric baldeschwieler
>
> See discussion in HDFS-2617 (Replaced Kerberized SSL for image transfer and 
> fsck with SPNEGO-based solution).
> To handle the transition from Hadoop 1.0 systems running KSSL authentication 
> to Hadoop systems running SPNEGO, it would be good to fix the client in both 
> 1 and 2 to try SPNEGO and then fall back to KSSL.  
> This will allow organizations that are running a lot of Hadoop 1.0 to 
> gradually transition over, without needing to convert all clusters at the 
> same time.  They would first need to update their 1.0 HFTP clients (and 
> 2.0/0.23 if they are already running those) and then they could copy data 
> between clusters without needing to move all clusters to SPNEGO in a big bang.





[jira] [Created] (HDFS-3699) HftpFileSystem should try both KSSL and SPNEGO when authentication is required

2012-07-22 Thread eric baldeschwieler (JIRA)
eric baldeschwieler created HDFS-3699:
-

 Summary: HftpFileSystem should try both KSSL and SPNEGO when 
authentication is required
 Key: HDFS-3699
 URL: https://issues.apache.org/jira/browse/HDFS-3699
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: eric baldeschwieler


See discussion in HDFS-2617 (Replaced Kerberized SSL for image transfer and 
fsck with SPNEGO-based solution).

To handle the transition from Hadoop 1.0 systems running KSSL authentication to 
Hadoop systems running SPNEGO, it would be good to fix the client in both 1 and 
2 to try SPNEGO and then fall back to KSSL.  

This will allow organizations that are running a lot of Hadoop 1.0 to gradually 
transition over, without needing to convert all clusters at the same time.  
They would first need to update their 1.0 HFTP clients (and 2.0/0.23 if they 
are already running those) and then they could copy data between clusters 
without needing to move all clusters to SPNEGO in a big bang.
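The try-SPNEGO-then-fall-back-to-KSSL behavior amounts to a simple fallback chain. The sketch below illustrates the control flow only; authenticateSpnego/authenticateKssl are hypothetical stand-ins, not real Hadoop methods, and the flag simulates talking to a legacy KSSL-only cluster.

```java
// Illustrative SPNEGO-first, KSSL-fallback authentication sketch.
public class AuthFallbackSketch {
    // Pretend the server is a Hadoop 1.0 cluster that only speaks KSSL.
    static boolean spnegoSupported = false;

    static String authenticateSpnego() throws Exception {
        if (!spnegoSupported) throw new Exception("SPNEGO rejected");
        return "spnego-token";
    }

    static String authenticateKssl() {
        return "kssl-token";
    }

    static String authenticate() {
        try {
            return authenticateSpnego();   // prefer the standard mechanism
        } catch (Exception e) {
            return authenticateKssl();     // fall back for legacy KSSL clusters
        }
    }

    public static void main(String[] args) {
        System.out.println("using " + authenticate());
    }
}
```

The same client then works against both old and new clusters, which is exactly what lets organizations migrate one cluster at a time.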







[jira] [Updated] (HDFS-2554) Add separate metrics for missing blocks with desired replication level 1

2012-07-22 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HDFS-2554:


 Target Version/s: 2.1.0-alpha
Affects Version/s: 2.0.0-alpha

> Add separate metrics for missing blocks with desired replication level 1
> 
>
> Key: HDFS-2554
> URL: https://issues.apache.org/jira/browse/HDFS-2554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Andy Isaacson
>Priority: Minor
>
> Some users use replication level set to 1 for datasets which are unimportant 
> and can be lost with no worry (eg the output of terasort tests). But other 
> data on the cluster is important and should not be lost. It would be useful 
> to separate the metric for missing blocks by the desired replication level of 
> those blocks, so that one could ignore missing blocks at repl 1 while still 
> alerting on missing blocks with higher desired replication.
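Partitioning the missing-blocks metric by desired replication is a simple counting split. The sketch below uses fabricated (blockId, desiredReplication) data rather than real NameNode state, and the metric names are invented for illustration.

```java
// Sketch of splitting the missing-blocks count by desired replication level,
// so that repl=1 blocks (intentionally unreplicated data) can be ignored
// while repl>1 blocks still trigger alerts.
public class MissingBlockMetricsSketch {
    public static void main(String[] args) {
        // (blockId, desiredReplication) pairs for blocks with no live replicas
        int[][] missing = { {101, 1}, {102, 3}, {103, 1}, {104, 2} };

        long missingReplOne = 0, missingOther = 0;
        for (int[] b : missing) {
            if (b[1] == 1) missingReplOne++;  // ignorable: user chose repl=1
            else missingOther++;              // alert-worthy
        }
        System.out.println("MissingBlocks(repl=1)=" + missingReplOne);
        System.out.println("MissingBlocks(repl>1)=" + missingOther);
    }
}
```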





[jira] [Updated] (HDFS-3530) TestFileAppend2.testComplexAppend occasionally fails

2012-07-22 Thread Tomohiko Kinebuchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomohiko Kinebuchi updated HDFS-3530:
-

Attachment: HDFS-3530.txt

a patch file for HDFS-3530

> TestFileAppend2.testComplexAppend occasionally fails
> 
>
> Key: HDFS-3530
> URL: https://issues.apache.org/jira/browse/HDFS-3530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Eli Collins
>Assignee: Tomohiko Kinebuchi
> Attachments: HDFS-3530-for-debug.txt, HDFS-3530.txt, 
> PreCommit-HADOOP-Build #1116 test - testComplexAppend.html.gz
>
>
> TestFileAppend2.testComplexAppend occasionally fails with the following:
> junit.framework.AssertionFailedError: testComplexAppend Worker encountered 
> exceptions.
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:385)





[jira] [Commented] (HDFS-3530) TestFileAppend2.testComplexAppend occasionally fails

2012-07-22 Thread Tomohiko Kinebuchi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420181#comment-13420181
 ] 

Tomohiko Kinebuchi commented on HDFS-3530:
--

Attached a patch; its intent is described below.


The first error message is "Workload exception 4 testfile /15.dat 
java.io.EOFException: Premature EOF: no length prefix available", and after 
that point all the other Workload threads failed.

The essential problem behind this test failure is that the objective of the 
test case is unclear, and it is difficult to understand what actually happened.
At a minimum we should clarify what error case we want to capture; only then 
are we prepared to implement a test case.

So, I propose to delete this test case.

> TestFileAppend2.testComplexAppend occasionally fails
> 
>
> Key: HDFS-3530
> URL: https://issues.apache.org/jira/browse/HDFS-3530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Eli Collins
>Assignee: Tomohiko Kinebuchi
> Attachments: HDFS-3530-for-debug.txt, PreCommit-HADOOP-Build #1116 
> test - testComplexAppend.html.gz
>
>
> TestFileAppend2.testComplexAppend occasionally fails with the following:
> junit.framework.AssertionFailedError: testComplexAppend Worker encountered 
> exceptions.
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at 
> org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:385)
