[ 
https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Corgan updated HBASE-5720:
-------------------------------

    Attachment: HBASE-5720-v2.patch

The v2 patch also reverts the logic in HFileDataBlockEncoderImpl.createFromFileInfo 
to an earlier version.  The version on the 0.94 branch will not allow disk->cache 
encoding on a 0.92 HFile because (in short) it doesn't have an encoderId.  I'm 
pretty sure this is something we want to support, but correct me if I'm wrong.  
Without it you'd have to major compact everything before using encoding (or 
something along those lines).
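
To illustrate the fallback described above, here is a minimal sketch of what createFromFileInfo's behavior could look like: when a file written by 0.92 carries no encoder id, treat the on-disk encoding as NONE rather than rejecting the file, so old HFiles can still be cached encoded without a prior major compaction. Names and the enum ordering are illustrative assumptions, not the exact HBase implementation.

```java
// Hypothetical sketch, NOT the actual HBase code: fall back to NONE
// when the file info carries no encoder id (e.g. a 0.92-written HFile).
public class EncoderFromFileInfoSketch {
    // Illustrative subset of encodings; real HBase defines its own enum.
    enum DataBlockEncoding { NONE, PREFIX, DIFF, FAST_DIFF }

    static DataBlockEncoding onDiskEncoding(byte[] encoderIdFromFileInfo) {
        if (encoderIdFromFileInfo == null) {
            // Old file with no encoder id: assume unencoded on disk,
            // while an in-cache encoding can still be applied separately.
            return DataBlockEncoding.NONE;
        }
        // Newer file: decode the stored id (illustrative ordinal mapping).
        return DataBlockEncoding.values()[encoderIdFromFileInfo[0]];
    }

    public static void main(String[] args) {
        System.out.println(onDiskEncoding(null));            // old 0.92 file
        System.out.println(onDiskEncoding(new byte[] {2}));  // file with id
    }
}
```

The point of the fallback is exactly the compatibility concern above: without it, enabling encoding would require rewriting every pre-existing file first.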

The test suite is passing (except for one unrelated failure), and the patch works 
in my benchmarking setup for HBASE-4676.
                
> HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
> checksums
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-5720
>                 URL: https://issues.apache.org/jira/browse/HBASE-5720
>             Project: HBase
>          Issue Type: Bug
>          Components: io, regionserver
>    Affects Versions: 0.94.0
>            Reporter: Matt Corgan
>            Priority: Blocker
>             Fix For: 0.94.0
>
>         Attachments: HBASE-5720-v1.patch, HBASE-5720-v2.patch
>
>
> When reading a .92 HFile without checksums, encoding it, and storing in the 
> block cache, the HFileDataBlockEncoderImpl always allocates a dummy header 
> appropriate for checksums even though there are none.  This corrupts the 
> byte[].
> Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case, 
> which I think is the desired behavior.
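
The size mismatch described in the report can be sketched as follows: the dummy header must match the header layout the file actually uses, since a checksum-aware header carries extra fields that a 0.92 header does not. The sizes below follow the HFile v2 layout (24-byte base header; +9 bytes for checksum type, bytes-per-checksum, and on-disk-data-size fields) but should be read as an illustrative sketch, not the patch itself.

```java
// Hypothetical sketch of the fix: size the dummy header according to
// whether the HFile was written with checksum support. The constants
// reflect the HFile v2 layout but are illustrative here.
public class DummyHeaderSketch {
    static final int HEADER_SIZE_NO_CHECKSUM = 24;   // 0.92-style header
    static final int HEADER_SIZE_WITH_CHECKSUM = 33; // + checksum fields

    static byte[] dummyHeader(boolean fileHasChecksums) {
        // The bug: unconditionally allocating the checksum-sized header
        // shifts the encoded payload by 9 bytes for checksum-less files,
        // corrupting the cached byte[]. Choosing by file type avoids that.
        return new byte[fileHasChecksums ? HEADER_SIZE_WITH_CHECKSUM
                                         : HEADER_SIZE_NO_CHECKSUM];
    }

    public static void main(String[] args) {
        System.out.println(dummyHeader(false).length); // old 0.92 file
        System.out.println(dummyHeader(true).length);  // 0.94 file
    }
}
```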

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira