[ https://issues.apache.org/jira/browse/HADOOP-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12503911 ]
Hadoop QA commented on HADOOP-1450:
-----------------------------------
Integrated in Hadoop-Nightly #119 (See [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/119/])
> checksums should be closer to data generation and consumption
> -------------------------------------------------------------
>
> Key: HADOOP-1450
> URL: https://issues.apache.org/jira/browse/HADOOP-1450
> Project: Hadoop
> Issue Type: Improvement
> Components: fs
> Reporter: Doug Cutting
> Assignee: Doug Cutting
> Fix For: 0.14.0
>
> Attachments: HADOOP-1450.patch
>
>
> ChecksumFileSystem checksums data by inserting a filter between two buffers.
> The outermost buffer should be as small as possible, so that on writes
> checksums are computed before the data has spent much time in memory, and on
> reads checksums are validated as close to the point of use as possible.
> Currently the outer buffer is the larger one, sized to the user-specified
> bufferSize, and the inner buffer is small, so that most reads and writes
> bypass it as an optimization. Instead, the outer buffer should be sized to
> bytesPerChecksum, and the inner buffer should use the user-specified buffer
> size.
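
For illustration only, here is a minimal Java sketch of the proposed write-path
layering: a small outer buffer of bytesPerChecksum bytes, a checksumming filter,
and a large inner buffer of the user-specified size in front of the raw stream.
This is not the HADOOP-1450 patch; the ChecksumingStream class, the
CRC-per-chunk scheme, and the file name are hypothetical, chosen only to show
the ordering of the buffers around the filter.

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.CRC32;

// Illustration only, hypothetical names:
//   application
//     -> BufferedOutputStream(bytesPerChecksum)   // small outer buffer
//       -> ChecksumingStream                      // computes a CRC per chunk
//         -> BufferedOutputStream(bufferSize)     // large inner buffer
//           -> raw file stream
public class ChecksumLayeringSketch {

  // Computes a CRC32 over every bytesPerChecksum bytes as they pass through.
  static class ChecksumingStream extends FilterOutputStream {
    private final CRC32 crc = new CRC32();
    private final int bytesPerChecksum;
    private int inChunk = 0;

    ChecksumingStream(OutputStream out, int bytesPerChecksum) {
      super(out);
      this.bytesPerChecksum = bytesPerChecksum;
    }

    @Override
    public void write(int b) throws IOException {
      crc.update(b);
      out.write(b);
      if (++inChunk == bytesPerChecksum) {
        flushChecksum();
      }
    }

    @Override
    public void close() throws IOException {
      if (inChunk > 0) {
        flushChecksum();          // checksum the trailing partial chunk
      }
      super.close();
    }

    private void flushChecksum() {
      // A real implementation would write the CRC to a side ".crc" file;
      // printing keeps the sketch self-contained.
      System.out.printf("chunk checksum: %08x%n", crc.getValue());
      crc.reset();
      inChunk = 0;
    }
  }

  public static void main(String[] args) throws IOException {
    int bytesPerChecksum = 512;     // small outer buffer: checksum soon after generation
    int bufferSize = 64 * 1024;     // large inner buffer: amortize the underlying writes

    try (OutputStream out =
             new BufferedOutputStream(                       // outer, small
                 new ChecksumingStream(
                     new BufferedOutputStream(               // inner, large
                         new FileOutputStream("data.bin"), bufferSize),
                     bytesPerChecksum),
                 bytesPerChecksum)) {
      for (int i = 0; i < 2048; i++) {
        out.write(i & 0xff);
      }
    }
  }
}

Flipping the order this way keeps the checksum computation next to data
generation, while the large inner buffer still amortizes the underlying file
I/O.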
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.