On Fri, Jul 3, 2009 at 5:07 PM, Jay Reynolds Freeman wrote:

On Jul 3, 2009, at 1:20 PM, Greg Guerin wrote:
> [useful comments excised, thank you very much]
I will try lseek and write at the end.
> Exactly what problem is solved by initially writing multiple
> gigabytes of zeros to disk?

As for what I am doing, I have a parallel Scheme system (Wraith Scheme).
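
A minimal sketch of the "lseek and write at the end" idea mentioned
above, with a made-up path and size: seek to the last byte of the
intended length and write a single byte there, so the file reaches
its full size without every zero being written explicitly. Whether
the intervening blocks are stored sparsely or allocated up front
depends on the file system.

/* Sketch only: the path and size are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const off_t size = 10LL * 1024 * 1024 * 1024;   /* e.g. 10 GB */

    int fd = open("/tmp/bigfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* Seek to one byte before the desired end of the file... */
    if (lseek(fd, size - 1, SEEK_SET) == (off_t)-1) {
        perror("lseek"); return EXIT_FAILURE;
    }

    /* ...and write a single zero byte there to set the length. */
    if (write(fd, "", 1) != 1) {
        perror("write"); return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}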
Jay Reynolds Freeman wrote:
I am not sure whether this is a Cocoa question or a Darwin one.
I will take the discussion elsewhere if need be. Please advise.

I have an app whose initialization includes writing a huge file
to disk -- think GigaBytes, or even tens of GigaBytes. I am
doing this in the context of setting up a large area of shared
memory with mmap, so the big write has to happen at
initialization, and it is agonizingly slow.
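
A rough sketch of the initialization being described -- not the
poster's actual code, and the path, size, and buffer below are
invented: the backing file is filled with zeros by calling write()
in a loop with a large zeroed buffer, then mapped with MAP_SHARED.
The zero-writing loop is the part the rest of the thread is about
speeding up.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (50u * 1024 * 1024)        /* 50 MByte of zeros */

int main(void)
{
    const off_t total = 4LL * 1024 * 1024 * 1024;   /* e.g. 4 GB */
    static char buf[BUF_SIZE];                      /* zero-filled (BSS) */

    int fd = open("/tmp/shared_heap", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* Tight loop writing the zero buffer until the file is full size. */
    for (off_t written = 0; written < total; ) {
        size_t chunk = BUF_SIZE;
        if (total - written < (off_t)chunk)
            chunk = (size_t)(total - written);
        ssize_t n = write(fd, buf, chunk);
        if (n <= 0) { perror("write"); return EXIT_FAILURE; }
        written += n;
    }

    /* Map the file as a shared region for the cooperating processes. */
    void *base = mmap(NULL, (size_t)total, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* ... initialization would continue using the shared region ... */

    munmap(base, (size_t)total);
    close(fd);
    return EXIT_SUCCESS;
}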
2009/7/3 Jay Reynolds Freeman:
> At the moment I am actually just using the C++ library "write"
> to do the writes, in a tight loop with a large buffer (50 MByte)
> full of zeros.
>
> Is there a way to optimize?
I'd memory-map the file and bzero() the space.
--
Igor
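
A minimal sketch of that suggestion, again with an invented path and
size. The one-liner does not say how the file gets its length in the
first place, so this sketch assumes an ftruncate() call before the
mapping; the zeroing then happens through the mapped region with
bzero() rather than a write() loop.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = (size_t)4 * 1024 * 1024 * 1024;    /* e.g. 4 GB */

    int fd = open("/tmp/shared_heap", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* The file must already span the whole region before it is mapped
       (assumed step -- not spelled out in the message above). */
    if (ftruncate(fd, (off_t)size) != 0) {
        perror("ftruncate"); return EXIT_FAILURE;
    }

    void *base = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* Clear the whole region through the mapping instead of write(). */
    bzero(base, size);

    munmap(base, size);
    close(fd);
    return EXIT_SUCCESS;
}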