I threw dispatch_io at a problem and was amazed at the result: superb 
performance! My only question is about memory usage.

In my case I am only doing streamed writes to a file. Since dispatch_io_write 
is async, there's the potential for me to supply data to dispatch_io faster 
than my hard drive can write it out, which means that memory starts to stack up. 
What's preventing that from running away? I can generate a gigabyte of data in 
mere seconds, but it'll take the hard drive a lot longer to write it out, so I 
somehow need to know when to stall data generation while dispatch_io catches 
up with actually writing to the file.

The low and high water marks don't control how much data is in memory at any 
given time, but simply when the io handler block is called, based on the 
amount of data in memory. During writes, the dispatch_data_t passed to the io 
handler represents how much data remains to be written.

One of my thoughts is to set the handler interval and, in the handler, check 
whether the outstanding data size is over my memory limit. If it is, set a 
flag and stall generation until it drops back below the limit, which I'd know 
the next time the handler is called with the data size below the limit. I'm 
not sure if this is a *good* strategy, but I can't think of anything else.


--
Seth Willits




_______________________________________________

Cocoa-dev mailing list (Cocoa-dev@lists.apple.com)
