[ 
https://issues.apache.org/jira/browse/HDFS-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933818#comment-13933818
 ] 

Chris Nauroth commented on HDFS-6097:
-------------------------------------

bq. This is actually an optimization I made.

I see.  Thanks for explaining.  Would you mind putting a comment in there?

bq. I guess I've started to skip doing this on unit tests.

I got into the try-finally habit during the Windows work.  On Windows, one 
test would fail and leave the cluster running, because it never reached 
shutdown.  Subsequent tests would then also fail during initialization due to 
the more pessimistic file locking behavior on Windows: the prior cluster still 
held locks on the test data directory, so the later tests couldn't reformat 
it.  Those tests would otherwise have passed, so the effect was to clutter 
full test run reports with false failures and make it harder to determine 
which test was really failing.
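
Roughly, the shape I mean is something like this (the class name, test body, 
and single-datanode setup are just placeholders):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestTryFinallyExample {
  @Test
  public void testSomething() throws IOException {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      DistributedFileSystem fs = cluster.getFileSystem();
      // ... exercise the code under test ...
    } finally {
      // Always shut down, even when an assertion fails, so a leftover
      // cluster can't keep holding locks on the test data directory and
      // break the reformat in later tests.
      cluster.shutdown();
    }
  }
}
{code}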

If the stack traces from close aren't helpful, then we can stifle them by 
calling {{IOUtils#cleanup}} and passing a null logger.
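
For example (a sketch only; the stream and file system variables stand in for 
whatever the test opened):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.IOUtils;
import org.junit.Test;

public class TestQuietCloseExample {
  @Test
  public void testSomething() throws IOException {
    FileSystem fs = null;
    FSDataInputStream stream = null;
    try {
      // ... open fs and stream, run the assertions ...
    } finally {
      // With a null Log, IOUtils#cleanup swallows, rather than logs, any
      // IOException thrown while closing.
      IOUtils.cleanup(null, stream, fs);
    }
  }
}
{code}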

FWIW, my current favorite way to do this is cluster initialization in a 
{{BeforeClass}} method, cluster shutdown in an {{AfterClass}} method, and 
sometimes closing individual streams or file systems in an {{After}} method, 
depending on what the test is doing.  This reins in the code clutter of 
try-finally.  It's not always convenient though, such as when you need to 
change {{Configuration}} in each test or need per-test isolation for some 
other reason.
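
A rough sketch of that layout (again, the names and single-datanode setup are 
placeholders):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.io.IOUtils;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class TestSharedClusterExample {
  private static MiniDFSCluster cluster;
  private static DistributedFileSystem fs;
  private FSDataInputStream in;

  @BeforeClass
  public static void init() throws IOException {
    Configuration conf = new HdfsConfiguration();
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    cluster.waitActive();
    fs = cluster.getFileSystem();
  }

  @AfterClass
  public static void shutdown() {
    IOUtils.cleanup(null, fs);
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  @After
  public void closeStreams() {
    // Per-test cleanup of any stream the test opened; close failures are
    // not logged.
    IOUtils.cleanup(null, in);
    in = null;
  }

  @Test
  public void testRead() throws IOException {
    // in = fs.open(...); then exercise the read path ...
  }
}
{code}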

> zero-copy reads are incorrectly disabled on file offsets above 2GB
> ------------------------------------------------------------------
>
>                 Key: HDFS-6097
>                 URL: https://issues.apache.org/jira/browse/HDFS-6097
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.4.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-6097.003.patch, HDFS-6097.004.patch
>
>
> Zero-copy reads are incorrectly disabled at file offsets above 2GB.  The 
> culprit is code that is meant to disable zero-copy reads at offsets greater 
> than 2GB within a block file (because MappedByteBuffer segments are limited 
> to that size), but the check is mistakenly applied to the offset within the 
> whole file instead.
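
For reference, the 2GB ceiling is a JDK limit: a single {{MappedByteBuffer}} 
cannot cover more than {{Integer.MAX_VALUE}} bytes, and {{FileChannel#map}} 
rejects larger sizes.  A standalone illustration (plain JDK code, not HDFS; 
the path comes from the command line):

{code:java}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapLimitDemo {
  public static void main(String[] args) throws Exception {
    try (RandomAccessFile raf = new RandomAccessFile(args[0], "r");
         FileChannel channel = raf.getChannel()) {
      // FileChannel.map() throws IllegalArgumentException for sizes above
      // Integer.MAX_VALUE, so cap the mapping; a file larger than ~2GB
      // needs several mappings at different offsets rather than one.
      long mapSize = Math.min(channel.size(), Integer.MAX_VALUE);
      MappedByteBuffer buf =
          channel.map(FileChannel.MapMode.READ_ONLY, 0, mapSize);
      System.out.println("mapped " + buf.capacity() + " bytes");
    }
  }
}
{code}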



