Hi again,
Eric Biggers wrote:
> Hi,
>
> This is indeed a real problem for me; I discovered it via some new automated
> tests of wimlib. For extraction to an NTFS volume, I had code that assumed any
> return value from ntfs_attr_pwrite() other than the count that was passed in
> indicated an error. In practice I was usually passing a count of 32768 which
> happened to avoid the problem, but when dealing with highly compressed WIM
> archives the count could, in fact, be much larger.
What matters is whether the write crosses a compression block boundary...
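To illustrate (a standalone sketch, not libntfs-3g code; the 64 KiB
compression block size and the offsets are just example values):

	/*
	 * Shows how a single call on a compressed attribute is limited to
	 * the end of the current compression block, so fewer bytes than
	 * requested can be written without any error occurring.
	 */
	#include <stdio.h>

	int main(void)
	{
		long long pos = 10000;                    /* write offset */
		long long count = 200000;                 /* bytes requested */
		long long compression_block_size = 65536; /* assumed */
		long long fullcount;

		/* bytes remaining up to the end of the current block */
		fullcount = (pos | (compression_block_size - 1)) + 1 - pos;
		if (count > fullcount)
			count = fullcount;
		/* prints 55536 : a short, yet successful, write */
		printf("at most %lld of 200000 bytes written by this call\n",
			count);
		return 0;
	}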
>
> The solution for me is, of course, to keep calling ntfs_attr_pwrite() until all
> bytes have been written or a real error has occurred. However, I'm suggesting
> that something should also be done on the libntfs-3g side to make it harder for
> people to run into this problem, whether that is updating the documentation or
> updating ntfs_attr_pwrite() itself to always try to write the full count. I am
> also concerned about whether all internal callers of ntfs_attr_pwrite() in
> libntfs-3g itself handle short writes correctly. There are quite a few callers
> and it looks like most don't use retry loops; however, many callers probably
> either write small amounts of data only or rarely operate on compressed
> attributes, thereby avoiding short writes in practice.
No internal caller is known to write to compressed
attributes.
>
> What if the existing ntfs_attr_pwrite() was simply moved to an internal
> function, and ntfs_attr_pwrite() was written as a retry loop around the internal
> function?
Ok, I will do so.
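Something along these lines (a sketch only: the internal name
ntfs_attr_pwrite_i is a placeholder, the prototype is written from
memory, and the error handling may differ in the actual patch):

	/* former ntfs_attr_pwrite() body, renamed (placeholder name) */
	static s64 ntfs_attr_pwrite_i(ntfs_attr *na, s64 pos, s64 count,
				const void *b);

	s64 ntfs_attr_pwrite(ntfs_attr *na, s64 pos, s64 count, const void *b)
	{
		s64 total = 0;
		s64 written = 0;

		while (count > 0) {
			written = ntfs_attr_pwrite_i(na, pos, count, b);
			if (written <= 0)
				break; /* error, or no progress */
			total += written;
			pos += written;
			count -= written;
			b = (const char*)b + written;
		}
		/* report an error only if nothing at all was written */
		return (total > 0 ? total : written);
	}

Returning the bytes written before a failure keeps the usual
pwrite-like semantics for callers which already loop themselves.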
>
> Eric
>
> On Sat, Oct 31, 2015 at 07:06:01PM +0100, Jean-Pierre André wrote:
>> Hi Eric,
>>
>> Eric Biggers wrote:
>>> Hi,
>>>
>>> The return value of ntfs_attr_pwrite() is documented as follows:
>>>
>>>> On success, return the number of successfully written bytes. If this number
>>>> is lower than @count this means that an error was encountered during the
>>>> write so that the write is partial. 0 means nothing was written (also return
>>>> 0 when @count is 0).
>>>
>>> Hence, a short count implies that an error occurred. However, I discovered that
>>> a short count may, in fact, be returned when successfully writing to a
>>> compressed attribute, since ntfs_attr_pwrite() truncates the count to a single
>>> compression block only:
>>>
>>>> 	if (compressed) {
>>>> 		fullcount = (pos | (na->compression_block_size - 1)) + 1 - pos;
>>>> 		if (count > fullcount)
>>>> 			count = fullcount;
>>>> 	}
>>>
>>> There are two possible ways to fix this:
>>>
>>> 1) Update ntfs_attr_pwrite() to always try to write the full count
>>> 2) Update ntfs_attr_pwrite() documentation to clarify that short returns
>>>    are allowed and applications should, generally, continue calling
>>>    ntfs_attr_pwrite() until all bytes have been written
>>>
>>> It looks like the callers of ntfs_attr_pwrite() in the FUSE drivers do retry
>>> short writes, but this doesn't appear to be the case for all internal callers
>>> in libntfs-3g itself. So I think that option (1) is preferred, if it is at all
>>> possible.
>>
>> Actually, the current state is intentional, and motivated
>> by the complexity of allocating clusters for compressed
>> attributes. The documented promise should indeed have been
>> qualified for compressed attributes (a subset of data attributes).
>>
>> If this is a problem (did you find a specific case?), a
>> reasonable solution is to insert a new function for writing
>> to data attributes, which will repeat the call until done.
>>
>> Jean-Pierre
>>
>>>
>>> Eric
>>
>