Hi,

g4hx wrote:
> On 07/02/2012 11:06 AM, Jean-Pierre André wrote:
>    
>>
>> Jean-Pierre André wrote:
>>      
>>> Hi,
>>>
>>> g4hx wrote:
>>>        
>>>> On 07/01/2012 10:40 PM, Jean-Pierre André wrote:
>>>>
>>>> g4hx wrote:
>>>>          
>>>>>> Hello everyone,
>>>>>>
>>>>>> I much appreciate your work on the ntfs-3g driver, I think that
>>>>>> ntfs is
>>>>>> the only file system that can be used on linux and windows together.
>>>>>> However, there seems to be a performance problem with the ntfs-3g
>>>>>> driver:
>>>>>> I get write rates of about 3 mb/s as opposed to about 90 mb/s with
>>>>>> different file system types on the same disk. Also I have a very high
>>>>>> CPU load, caused by the mount.ntfs process.
> Hi,
>
> well, essentially I mounted the file system and used "dd oflag=append
> conv=notrunc if=/dev/zero of=./zeroes" to measure the write speed. I
> also mounted the partition with "-odebug,nodetach" and got a lot of
> these messages during the writing:
>
> WRITE[0] 512 bytes to 124918424064
>     WRITE[0] 512 bytes
>     unique: 28611, error: 0 (Success), outsize: 24
> unique: 28612, opcode: GETXATTR (22), nodeid: 11, insize: 68
>     unique: 28612, error: -61 (No data available), outsize: 16
> unique: 28613, opcode: WRITE (16), nodeid: 11, insize: 576
>    

This output is as expected, but it shows an awkward
buffer size, resulting in many more context switches than
needed. You did not specify the "bs" parameter to dd,
so it used the default of 512 bytes. The default buffer
size used by ntfs-3g is 4096 bytes, the same as your
cluster size, so selecting bs=4096 should give you an easy
improvement. If you mount with the option big_writes,
you can use buffers up to the default kernel limit of
128K bytes.
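The advice above can be sketched as follows; the device and mount
point are placeholders for your own setup:

```shell
# Mounting with big_writes lets FUSE pass writes of up to 128K
# (device and mount point are placeholders, adjust to your system):
#   mount -t ntfs-3g -o big_writes /dev/sdXn /mnt/ntfs

# Default 512-byte writes versus cluster-sized 4096-byte writes,
# both producing the same 4 MiB file:
dd if=/dev/zero of=zeroes-512 bs=512  count=8192
dd if=/dev/zero of=zeroes-4k  bs=4096 count=1024
```

With bs=4096, each request carries eight times as much data, so
the same amount of data needs one eighth as many round trips
through the FUSE layer.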

The actual buffer size is under the control of the
program doing the writing. It is easy to set for dd or
tar, and for cp it is 64K. Check what your usual writer
program can do.
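For instance (a minimal sketch; the file names are just examples),
dd takes the buffer size directly, and tar exposes it through its
blocking factor:

```shell
# dd: the buffer size is the bs= parameter; here sixteen 64K writes:
dd if=/dev/zero of=out.bin bs=65536 count=16

# tar: -b gives the blocking factor in 512-byte units, so 128 means
# 64K records (the archive is padded to a whole number of records):
printf 'hello' > sample.txt
tar -c -b 128 -f sample.tar sample.txt
```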

With bigger buffers you will get fewer context switches
and better throughput.

> I attached the inode information of the files "./zeroes" and inode 0.
> The file has already grown to a size of about 117G, which is why I think
> the output is so large.
>    

Your partition is healthy. Your big file is fragmented,
but most of the fragments have a length not far from
4096, which is not so bad.

> I mainly use this partition to store mp3s and avis, so after copying the
> files I did not do much with the partition except reading from it, so I
> don't think that there should be a fragmentation problem.
>    

It is undoubtedly a fragmentation issue, and you would
probably get similar results on Windows with the same
buffer size. If this partition is dedicated to
multimedia files, you should select a more appropriate
cluster size (probably the maximum of 65536), but to
do so you have to back up your files and format again.
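If you do reformat, the cluster size is chosen at format time. A
sketch with mkntfs follows; the device name is a placeholder, and
since formatting destroys all data on the partition, the commands
are shown commented out:

```shell
# Back up your files first, then reformat with 64K clusters
# (replace /dev/sdXn with your actual partition):
#   mkntfs -c 65536 /dev/sdXn
# or with quick format, which skips zeroing the partition:
#   mkntfs -f -c 65536 /dev/sdXn
```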

The execution path for appending data to big files
was improved last year. I will examine it again
under conditions similar to yours, but there is probably
not much more to gain if you are using ntfs-3g 2012.1.15.

Regards

Jean-Pierre



_______________________________________________
ntfs-3g-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/ntfs-3g-devel
