From a cursory browse of that website, it seems that HDF5 is implemented in Java. Is that true? Who starts the Java VM: the C wrapper, or is it started by another process?

BTW, I think you cannot process a 60 GB file using Jmf unless you also have at least 60 GB of RAM available.
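Whether the whole file has to fit in RAM depends on how the mapping is accessed: with a plain OS-level memory map, only the pages you actually touch get loaded. A minimal stdlib sketch of that idea (Python here rather than J, purely as an illustration; the file name and sizes are made up):

```python
import mmap
import os
import struct
import tempfile

# Write a small binary file of little-endian float32 values.
# This stands in for a much larger data file.
path = os.path.join(tempfile.mkdtemp(), "data.f32")
with open(path, "wb") as f:
    for i in range(1000):
        f.write(struct.pack("<f", float(i)))

# Memory-map the file and read one value by byte offset.
# The OS pages in only the region actually touched, not the whole file.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0,
                                      access=mmap.ACCESS_READ) as m:
    value = struct.unpack_from("<f", m, 4 * 500)[0]

print(value)  # 500.0
```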

Sat, 14 Jul 2012, Konrad Hinsen wrote:
> Eric Iverson writes:
> 
>  > I am not familiar with HDF5. For big text type files I would use J64 and
>  > memory map the big file to a noun. That is a start.
> 
> Lettow, Kenneth writes:
> 
>  > I have not worked with HDF5 files, but you should take a look at using
>  > memory mapped files in J.
>  > http://www.jsoftware.com/jwiki/Studio/Mapped%20Files
> 
> 
> I had a look at this and it looks quite simple and powerful - as long
> as you stay in the J universe. I also suspect that portability of
> those files between machines is safe to assume only for text-based
> versions.
> 
> So assume you have 60 GB of binary data you need to work with and
> archive for ten years - what do you do? That's not an academic
> question but my very real situation.
> 
> Right now I keep such data either in netCDF or in HDF5 files. Both are
> platform-neutral and stable binary file formats that let me store
> arrays of any data type. My 60 GB are single-precision floats, for
> example. I used netCDF in the past, but I am currently transitioning
> to HDF5 because of better performance and more flexible storage
> options.
> 
> Konrad.
> ----------------------------------------------------------------------
> For information about J forums see http://www.jsoftware.com/forums.htm
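
On Konrad's portability point: raw binary arrays are only portable across machines if you pin down the byte order and element layout yourself, which is exactly what netCDF and HDF5 do on your behalf. A stdlib Python sketch of the manual approach (the values are just examples):

```python
import struct

# Platform-neutral storage by hand: fix the byte order explicitly.
# "<f" means IEEE 754 single precision, little-endian, on every host.
values = [1.5, -2.25, 3.125]
blob = b"".join(struct.pack("<f", v) for v in values)

# Decoding with the same explicit format gives the same numbers back
# on any machine, regardless of its native endianness.
decoded = [struct.unpack_from("<f", blob, 4 * i)[0]
           for i in range(len(values))]

print(decoded)  # [1.5, -2.25, 3.125]
```

Formats like HDF5 additionally record the type and shape metadata in the file itself, which is what makes them suitable for ten-year archiving.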

-- 
regards,
====================================================
GPG key 1024D/4434BAB3 2008-08-24
gpg --keyserver subkeys.pgp.net --recv-keys 4434BAB3