[Numpy-discussion] Re: writing a known-size 1D ndarray serially as it's calced

2022-08-25 Thread Robert Kern
On Thu, Aug 25, 2022 at 4:27 AM Bill Ross wrote:
> Thanks, np.lib.format.open_memmap() works great! With prediction procs
> using minimal sys memory, I can get twice as many on GPU, with fewer
> optimization warnings.
>
> Why even have the number of records in the header? Shouldn't record size …
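A minimal sketch of the approach the thread converges on: np.lib.format.open_memmap() writes a complete .npy header up front, then hands back a memory-mapped array that can be filled chunk by chunk. The file name, total size, and chunk size below are hypothetical stand-ins, and the per-chunk "computation" is a placeholder for the real GPU batch work.

```python
import numpy as np

# Hypothetical sizes: results computed in GPU-batch-sized chunks.
n_total, chunk = 1_000_000, 4_096

# open_memmap writes the .npy header immediately, then exposes the
# data region as a memory-mapped ndarray we can fill incrementally.
out = np.lib.format.open_memmap(
    "results.npy", mode="w+", dtype=np.float32, shape=(n_total,)
)
for start in range(0, n_total, chunk):
    stop = min(start + chunk, n_total)
    # stand-in for a real per-batch computation
    out[start:stop] = np.arange(start, stop, dtype=np.float32)
out.flush()  # ensure all pages are written to disk

# The file reads back as an ordinary .npy array.
check = np.load("results.npy", mmap_mode="r")
```

Because the OS pages the mapped region in and out as needed, resident memory stays small even for arrays much larger than RAM, which matches the "minimal sys memory" observation above.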

[Numpy-discussion] Re: writing a known-size 1D ndarray serially as it's calced

2022-08-25 Thread Bill Ross
Thanks, np.lib.format.open_memmap() works great! With prediction procs using minimal sys memory, I can get twice as many on GPU, with fewer optimization warnings. Why even have the number of records in the header? Shouldn't record size plus system-reported/growable file size be enough? I'd lov…
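For context on the header question: the .npy format records the shape explicitly, and numpy exposes public helpers for reading it back. This sketch serializes a small array to an in-memory buffer and parses the header fields, showing exactly what is stored.

```python
import io
import numpy as np
from numpy.lib import format as npformat

# Serialize a small array and inspect what the .npy header records.
buf = io.BytesIO()
np.save(buf, np.zeros((7,), dtype=np.int64))
buf.seek(0)

version = npformat.read_magic(buf)  # format version, e.g. (1, 0)
shape, fortran_order, dtype = npformat.read_array_header_1_0(buf)
# shape is stored explicitly: readers never have to infer the record
# count from the file size, and the same header scheme describes
# multi-dimensional and Fortran-ordered arrays unambiguously.
```

Deriving the element count from the file size alone would only work for 1D C-ordered arrays; the explicit shape keeps the format self-describing for every case.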

[Numpy-discussion] Re: writing a known-size 1D ndarray serially as it's calced

2022-08-23 Thread Michael Siebert
Hi all, I've made the Pip/Conda module npy-append-array for exactly this purpose, see https://github.com/xor2k/npy-append-array It works with one-dimensional arrays, too, of course. The key challenge is to properly initialize and update the header accordingly as the array grows, which my module…
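The header-update idea mentioned above can be sketched in plain numpy: append raw bytes to an existing .npy file, then rewrite the header with the new shape. This is only a toy illustration of the technique — it works solely while the padded header length stays unchanged (here the shape grows from 5 to 9, keeping the same digit count); npy-append-array's actual implementation handles header growth robustly.

```python
import numpy as np
from numpy.lib import format as npformat

fn = "grow.npy"

# Write an initial chunk as a normal .npy file.
np.save(fn, np.arange(5, dtype=np.float64))

# Append a second chunk, then patch the shape in the header.
extra = np.arange(5, 9, dtype=np.float64)
with open(fn, "r+b") as f:
    npformat.read_magic(f)
    shape, fortran_order, dtype = npformat.read_array_header_1_0(f)
    f.seek(0, 2)                 # jump to end of file
    f.write(extra.tobytes())     # append raw element bytes
    f.seek(0)                    # rewrite magic + header in place
    npformat.write_array_header_1_0(f, {
        "shape": (shape[0] + len(extra),),
        "fortran_order": fortran_order,
        "descr": npformat.dtype_to_descr(dtype),
    })
```

After the patch, np.load() sees a single 9-element array. The fragile part is exactly what the module solves: once the shape string needs more characters than the 64-byte-aligned header padding can absorb, the header must be rewritten with extra reserved space.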

[Numpy-discussion] Re: writing a known-size 1D ndarray serially as it's calced

2022-08-23 Thread Robert Kern
On Tue, Aug 23, 2022 at 8:47 PM wrote:
> I want to calc multiple ndarrays at once and lack memory, so want to write
> in chunks (here sized to GPU batch capacity). It seems there should be an
> interface to write the header, then write a number of elements cyclically,
> then add any closing rubric…
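The "write the header, then write elements cyclically" interface the question asks for can be sketched directly with numpy's public header helpers: emit the header once (the final shape is known up front), then stream each computed chunk's raw bytes after it. No closing step is needed for the .npy 1.0 format. The file name, sizes, and the sqrt stand-in computation are hypothetical.

```python
import numpy as np
from numpy.lib import format as npformat

# Known-size 1D result, computed in hypothetical batch-sized chunks.
n_total, batch = 100_000, 1_024
dtype = np.dtype(np.float32)

with open("serial.npy", "wb") as f:
    # 1. Write the full .npy header once, with the final shape.
    npformat.write_array_header_1_0(f, {
        "shape": (n_total,),
        "fortran_order": False,
        "descr": npformat.dtype_to_descr(dtype),
    })
    # 2. Stream each computed chunk straight after it.
    for start in range(0, n_total, batch):
        stop = min(start + batch, n_total)
        chunk = np.sqrt(np.arange(start, stop, dtype=dtype))  # stand-in calc
        f.write(chunk.tobytes())
```

This writes sequentially with no memory mapping at all, which can be preferable when the output lands on a filesystem or pipe where mmap is awkward; open_memmap (discussed above in the thread) reaches the same on-disk result.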