On 8 Nov 2016, 4:50 PM, "Samer Afach" <[email protected]> wrote:
>
> Hi all:
>
> The fact that HDF5 is not thread-safe is frankly awful and should be fixed
> at some point. The current model fits the computing landscape of the 90s
> better than today's. I actually learned the hard way that HDF5 is not
> thread-safe, and it ruined many of my plans for how to use the library. I
> don't like or trust build options that my program depends on in order to
> work, and I believe in keeping my source code independent of how the
> library happens to be built, which is why I decided not to rely on the
> --enable-threadsafe option and instead to use my own locks with std::thread
> (or boost::thread if you don't use C++11). I just wrapped every H5 call
> with a std::unique_lock<std::mutex>, and that solved the problem. For me
> this wasn't a big change, because I use C++ classes, templates and
> compile-time objects, and I have a class with all the H5 functions I
> needed, which were not too many. It took me about an hour to fix my
> program, and it's working now.
>
> Just wanted to share my experience.
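A minimal sketch of that kind of wrapper, assuming a single process-wide mutex and that every piece of code touching HDF5, including the code that destroys H5:: objects, goes through functions like this one (all names here are illustrative, not from any particular codebase):

    // Sketch: serialize all use of the (non-thread-safe) HDF5 C++ API
    // behind one global mutex.
    #include <mutex>
    #include <string>
    #include <vector>
    #include <H5Cpp.h>

    namespace {
    std::mutex g_hdf5Mutex;  // one process-wide lock for the whole library
    }

    // Open a file, read an entire dataset of doubles and close everything,
    // all while holding the lock. The H5:: objects are destroyed (and their
    // H5*close calls made) before the lock is released.
    std::vector<double> readDoublesLocked(const std::string& path,
                                          const std::string& datasetName)
    {
        std::unique_lock<std::mutex> lock(g_hdf5Mutex);

        H5::H5File file(path, H5F_ACC_RDONLY);
        H5::DataSet dataset = file.openDataSet(datasetName);
        H5::DataSpace space = dataset.getSpace();

        std::vector<hsize_t> dims(space.getSimpleExtentNdims());
        space.getSimpleExtentDims(dims.data());

        hsize_t count = 1;
        for (hsize_t d : dims)
            count *= d;

        std::vector<double> buffer(count);
        dataset.read(buffer.data(), H5::PredType::NATIVE_DOUBLE);
        return buffer;
    }

Keeping the H5:: objects' lifetimes inside the locked scope matters, since their destructors also call into the library.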

I agree that HDF5's thread safety should be improved. The approach you
describe with a mutex is essentially the same approach I use right now in
my C++ code. The reason I was considering switching to the C API is that my
usage is also pretty simple, so the pain of using the slightly more complex
C interface wouldn't be that bad, and I could leave the ugly locking in the
hands of the C library.

Anyway, thanks for sharing!

And yes, the lack of proper thread safety is starting to become a problem
for me as well, as I need to read some semi-big datasets (separate files
even) in my GUI, sometimes from slow-ish network mounts, and would rather
do that truly in parallel to minimize waiting time for my users.

Elvis

>
> Cheers,
> Samer Afach
>
>
> On 08.11.2016 4:02 PM, Elvis Stansvik wrote:
>>
>> 2016-11-08 15:46 GMT+01:00 Elvis Stansvik <[email protected]>:
>>>
>>> 2016-09-23 7:55 GMT+02:00 Elvis Stansvik <[email protected]>:
>>>>
>>>> 2016-09-22 21:27 GMT+02:00 Werner Benger <[email protected]>:
>>>> > On 22.09.2016 20:34, Elvis Stansvik wrote:
>>>> >
>>>> >> 2016-09-22 20:25 GMT+02:00 Werner Benger <[email protected]>:
>>>> >>>
>>>> >>> There was some recent discussion that only the C API of HDF5 is
>>>> >>> threadsafe, but not the C++ layer on top of it. To be safe you
>>>> >>> should probably keep to the C API.
>>>> >>
>>>> >> Aha. Thanks for the heads up Werner.
>>>> >>
>>>> >> The FAQ has this to say:
>>>> >>
>>>> >>      "By default, you cannot build either Parallel HDF5 with C++ or
>>>> >> Parallel HDF5 with the thread-safe feature. You will receive a
>>>> >> configure error if you try either of these combinations."
>>>> >>
>>>> >> But it does not say that C++ with the thread-safe feature is
>>>> >> unsupported (when using serial HDF5). Maybe the FAQ should be updated?
>>>> >>
>>>> >> This is quite unfortunate since our app is C++ and the C++ API is so
>>>> >> much more convenient, but I guess I'll port our code to the C API
>>>> >> (it's not that much code).
>>>> >
>>>> > What you probably could do as well is to have your own lock, with one
>>>> > global mutex around each call to the C++ API. HDF5 itself allows only
>>>> > one running thread internally and just puts such a lock inside the
>>>> > API, so it would amount to the same thing.
>>>>
>>>> This is what I ended up doing, so thanks a lot for the suggestion.
>>>>
>>>> It was much easier than I thought: since I was using Qt anyway, I could
>>>> just create a global QMutex and then create a QMutexLocker (an
>>>> RAII-style mutex locker) on the stack in each function (just two of
>>>> them) where I use HDF5.
>>>>
>>>> This sort of coarse locking is OK for me since the overwhelming
>>>> majority of the time is spent in a single H5::DataSet::read(..) call
>>>> (I'm reading big compressed data). The other HDF5 calls are very quick
>>>> by comparison.
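A rough illustration of that coarse, per-function locking, assuming Qt; the mutex and function names are placeholders rather than the application's actual code:

    #include <QMutex>
    #include <QMutexLocker>
    #include <QString>
    #include <H5Cpp.h>

    // One global mutex guarding every function that touches HDF5.
    static QMutex g_hdf5Mutex;

    void readDataset(const QString &path)  // placeholder function
    {
        // RAII-style: locked here, unlocked when the function returns.
        QMutexLocker locker(&g_hdf5Mutex);

        H5::H5File file(path.toStdString(), H5F_ACC_RDONLY);
        // ... open the dataset, select a hyperslab, read(), etc. ...
    }   // the H5:: destructors run before the locker releases the mutex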
>>>
>>>
>>> Just a final question about this. The FAQ states that the threadsafe
>>> guarantee does not extend to the high-level API.
>>>
>>> I just want to confirm that this is really the case: Is the threadsafe
>>> guarantee only for the low-level C API?
>>>
>>> The reason I'm asking is that I'm now considering porting my HDF5 usage
>>> from C++ to C, to avoid having to do the manual locking myself, and the
>>> high-level API looked quite convenient for what I want to do. But if it's
>>> really not threadsafe, I'll stick to the regular C API.
>>
>>
>> Never mind, I found a definitive answer in the 1.8.16 release notes,
>> where the --enable-threadsafe --enable-hl combination was marked as
>> unsupported in the build system:
>>
>>     - The thread-safety + high-level library combination has been marked
>>       as "unsupported" in the Autotools
>>
>>       The global lock used by the thread-safety feature has never been
>>       raised to the high-level library level, making it possible that
>>       the library state could change if a context switch were to occur in
>>       a high-level library call. Because of this, the combination of
>>       thread-safety and high-level library is officially unsupported by
>>       The HDF Group.
>>
>>       In the past, although this combination has never been supported,
>>       this was not enforced by the build systems. These changes will cause an
>>       Autotools configure step to fail if --enable-threadsafe and
>>       --enable-hl are combined unless additional options are specified.
>>       Since the high-level library is built by default, this means that
>>       these extra configuration options will need to be used any time
>>       --enable-threadsafe is selected.
>>
>>       To build with --enable-threadsafe, either:
>>
>>       1) Use --disable-hl to disable the high-level library (recommended)
>>
>>       2) Use --enable-unsupported to build the high-level library with
>>          the thread-safety feature.
>>
>> So I'll stick to the low-level C API.
>>
>> Elvis
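For comparison, a minimal sketch of a hyperslab read against the low-level C API, the only layer covered by the thread-safe build; the dataset name, rank and element type are made up for illustration, and error checking is omitted:

    #include <hdf5.h>

    // Open a file, read a 2-D hyperslab of doubles and clean up.
    // Real code should check every returned hid_t / herr_t.
    void read_slab(const char *path, double *buf,
                   const hsize_t offset[2], const hsize_t count[2])
    {
        hid_t file   = H5Fopen(path, H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t dset   = H5Dopen2(file, "/data", H5P_DEFAULT);  // "/data" is illustrative
        hid_t fspace = H5Dget_space(dset);

        // Select the region to read in the file...
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, offset, NULL, count, NULL);

        // ...and describe the in-memory buffer it should land in.
        hid_t mspace = H5Screate_simple(2, count, NULL);

        H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, buf);

        H5Sclose(mspace);
        H5Sclose(fspace);
        H5Dclose(dset);
        H5Fclose(file);
    }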
>>
>>>
>>> Thanks in advance,
>>> Elvis
>>>
>>>>
>>>>
>>>> Elvis
>>>>
>>>> >
>>>> >              Werner
>>>> >
>>>> >
>>>> >
>>>> >> So far we haven't had any problems under Ubuntu (where the package is
>>>> >> built with --enable-threadsafe --enable-unsupported), but I guess
>>>> >> we've just been lucky.
>>>> >>
>>>> >> Elvis
>>>> >>
>>>> >>> Cheers,
>>>> >>>
>>>> >>>             Werner
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> On 22.09.2016 20:15, Elvis Stansvik wrote:
>>>> >>>>
>>>> >>>> 2016-09-22 19:58 GMT+02:00 Scot Breitenfeld <[email protected]>:
>>>> >>>>>
>>>> >>>>> Yes, it is still the case that you cannot enable C++ or Fortran (or
>>>> >>>>> the High Level APIs) when threadsafe is enabled. --enable-unsupported
>>>> >>>>> can override this behavior.
>>>> >>>>
>>>> >>>> Aha, so that's why I had to enable that option :)
>>>> >>>>
>>>> >>>> I see now that this is what the Ubuntu package does. I'll ask the
>>>> >>>> Arch package maintainer if he's willing to do the same.
>>>> >>>>
>>>> >>>> I've confirmed now that I don't have any problems with my program
>>>> >>>> after rebuilding the Arch package with --enable-threadsafe
>>>> >>>> --enable-unsupported.
>>>> >>>>
>>>> >>>> Thanks for the info!
>>>> >>>>
>>>> >>>> Elvis
>>>> >>>>
>>>> >>>>> Scot
>>>> >>>>>
>>>> >>>>>
>>>> >>>>>> On Sep 22, 2016, at 12:36 PM, Elvis Stansvik
>>>> >>>>>> <[email protected]> wrote:
>>>> >>>>>>
>>>> >>>>>> 2016-09-22 19:23 GMT+02:00 Elvis Stansvik
>>>> >>>>>> <[email protected]>:
>>>> >>>>>>>
>>>> >>>>>>> 2016-09-22 19:17 GMT+02:00 Dana Robinson <[email protected]>:
>>>> >>>>>>>>
>>>> >>>>>>>> Hi Elvis,
>>>> >>>>>>>>
>>>> >>>>>>>> Did you build your HDF5 library with thread-safety enabled
>>>> >>>>>>>> (--enable-threadsafe w/ configure)?
>>>> >>>>>>>
>>>> >>>>>>> Hi Dana, and thanks for the quick reply. I think we just e-mailed
>>>> >>>>>>> past each other (see my previous mail).
>>>> >>>>>>>
>>>> >>>>>>> I wrongly called it --thread-safe in that mail, but it was
>>>> >>>>>>> --enable-threadsafe I was referring to. But yes, I'm pretty sure
>>>> >>>>>>> this is the problem.
>>>> >>>>>>>
>>>> >>>>>>> I'm rebuilding the Arch package now with --enable-threadsafe.
>>>> >>>>>>
>>>> >>>>>> I spoke a little too soon. I now found this bug filed against the
>>>> >>>>>> Arch package:
>>>> >>>>>>
>>>> >>>>>>      https://bugs.archlinux.org/task/33805
>>>> >>>>>>
>>>> >>>>>> The reporter asked the package maintainer to add --enable-threadsafe,
>>>> >>>>>> but the package maintainer closed the bug saying that
>>>> >>>>>> --enable-threadsafe is not compatible with the Fortran build (in
>>>> >>>>>> Arch, the C++ and Fortran APIs are bundled into one package,
>>>> >>>>>> hdf5-cpp-fortran).
>>>> >>>>>>
>>>> >>>>>> Anyone know if that is still the case? If so, I can't open a bug
>>>> >>>>>> against the package again asking for --enable-threadsafe to be
>>>> >>>>>> added, but I could open a bug asking for the package to be split,
>>>> >>>>>> I guess.
>>>> >>>>>>
>>>> >>>>>> Elvis
>>>> >>>>>>
>>>> >>>>>>> Elvis
>>>> >>>>>>>
>>>> >>>>>>>> Dana Robinson
>>>> >>>>>>>> Software Engineer
>>>> >>>>>>>> The HDF Group
>>>> >>>>>>>>
>>>> >>>>>>>> Get Outlook for Android
>>>> >>>>>>>>
>>>> >>>>>>>> From: Elvis Stansvik
>>>> >>>>>>>> Sent: Thursday, September 22, 12:43
>>>> >>>>>>>> Subject: [Hdf-forum] Simply using the library from separate
>>>> >>>>>>>> threads (C++ API)
>>>> >>>>>>>> To: HDF Users Discussion List
>>>> >>>>>>>>
>>>> >>>>>>>> Hi all,
>>>> >>>>>>>>
>>>> >>>>>>>> I'm using the C++ API to read HDF5 files from separate threads
>>>> >>>>>>>> (no writing). None of my threads read the same file, but they do
>>>> >>>>>>>> execute simultaneously. The reason I'm using threading is not to
>>>> >>>>>>>> speed things up or get better throughput, but simply to not block
>>>> >>>>>>>> the UI (it's a Qt application) while reading. So this is not
>>>> >>>>>>>> about "Parallel HDF5" or anything, just simply using the serial
>>>> >>>>>>>> library "from scratch" from multiple threads. This has been
>>>> >>>>>>>> working fine when testing on Ubuntu 16.04 (our target OS), which
>>>> >>>>>>>> has HDF5 1.8.16. I recently tested on my personal Arch Linux
>>>> >>>>>>>> machine though, which has HDF5 1.10.0, and got this segmentation
>>>> >>>>>>>> fault:
>>>> >>>>>>>>
>>>> >>>>>>>> (gdb) bt
>>>> >>>>>>>> #0  0x00007ffff67c57d9 in H5SL_search () from /usr/lib/libhdf5.so.100
>>>> >>>>>>>> #1  0x00007ffff678dd19 in H5P_copy_plist () from /usr/lib/libhdf5.so.100
>>>> >>>>>>>> #2  0x00007ffff66a7fc0 in H5F_new () from /usr/lib/libhdf5.so.100
>>>> >>>>>>>> #3  0x00007ffff66a8f55 in H5F_open () from /usr/lib/libhdf5.so.100
>>>> >>>>>>>> #4  0x00007ffff66a155d in H5Fopen () from /usr/lib/libhdf5.so.100
>>>> >>>>>>>> #5  0x00007ffff6b79546 in H5::H5File::p_get_file(char const*, unsigned int, H5::FileCreatPropList const&, H5::FileAccPropList const&) () from /usr/lib/libhdf5_cpp.so.100
>>>> >>>>>>>> #6  0x00007ffff6b79750 in H5::H5File::H5File(char const*, unsigned int, H5::FileCreatPropList const&, H5::FileAccPropList const&) () from /usr/lib/libhdf5_cpp.so.100
>>>> >>>>>>>> #7  0x000000000041f00e in HDF5ImageReader::RequestInformation (this=0x7fffbc002de0, request=0x7fffbc010da0, inputVector=0x0, outputVector=0x7fffbc0039d0) at /home/estan/Projekt/orexplore/dev/src/insight/src/model/HDF5ImageReader.cpp:91
>>>> >>>>>>>> #8  0x00007fffee8200d0 in vtkExecutive::CallAlgorithm(vtkInformation*, int, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #9  0x00007fffee837fa9 in vtkStreamingDemandDrivenPipeline::ExecuteInformation(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #10 0x00007fffee81ce05 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #11 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #12 0x00007fffee816e1a in vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #13 0x00007fffee81ccb5 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #14 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #15 0x00007fffee816e1a in vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #16 0x00007fffee81ccb5 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #17 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #18 0x00007fffee816e1a in vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #19 0x00007fffee81ccb5 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #20 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #21 0x00007fffee836482 in vtkStreamingDemandDrivenPipeline::Update(int) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>>> >>>>>>>> #22 0x00007ffff1289a76 in vtkAbstractVolumeMapper::GetBounds() () from /usr/lib/libvtkRenderingCore.so.1
>>>> >>>>>>>> #23 0x00007ffff13459f9 in vtkVolume::GetBounds() () from /usr/lib/libvtkRenderingCore.so.1
>>>> >>>>>>>> #24 0x000000000043f235 in createVolume (image=..., from=0, to=2.7803999378532183, opacityFunction=..., colorFunction=...) at /home/estan/Projekt/orexplore/dev/src/insight/src/view/Pipeline.cpp:123
>>>> >>>>>>>> #25 0x00000000004295c4 in CreateVolume::operator() (this=0x829248, image=...) at /home/estan/Projekt/orexplore/dev/src/insight/src/view/Pipeline.h:45
>>>> >>>>>>>> #26 0x000000000042bc7a in QtConcurrent::MappedEachKernel::const_iterator, CreateVolume>::runIteration (this=0x829210, it=..., result=0x7fffbc002da8) at /usr/include/qt/QtConcurrent/qtconcurrentmapkernel.h:176
>>>> >>>>>>>> #27 0x000000000042bd5d in QtConcurrent::MappedEachKernel::const_iterator, CreateVolume>::runIterations (this=0x829210, sequenceBeginIterator=..., begin=1, end=2, results=0x7fffbc002da8) at /usr/include/qt/QtConcurrent/qtconcurrentmapkernel.h:186
>>>> >>>>>>>> #28 0x000000000042c4e1 in QtConcurrent::IterateKernel::const_iterator, vtkSmartPointer >::forThreadFunction (this=0x829210) at /usr/include/qt/QtConcurrent/qtconcurrentiteratekernel.h:256
>>>> >>>>>>>> #29 0x000000000042bedc in QtConcurrent::IterateKernel::const_iterator, vtkSmartPointer >::threadFunction (this=0x829210) at /usr/include/qt/QtConcurrent/qtconcurrentiteratekernel.h:218
>>>> >>>>>>>> #30 0x00007ffff7bd5cfd in QtConcurrent::ThreadEngineBase::run() () from /usr/lib/libQt5Concurrent.so.5
>>>> >>>>>>>> #31 0x00007ffff489a01f in ?? () from /usr/lib/libQt5Core.so.5
>>>> >>>>>>>> #32 0x00007ffff489dd78 in ?? () from /usr/lib/libQt5Core.so.5
>>>> >>>>>>>> #33 0x00007fffeb3f5454 in start_thread () from /usr/lib/libpthread.so.0
>>>> >>>>>>>> #34 0x00007fffec5f07df in clone () from /usr/lib/libc.so.6
>>>> >>>>>>>> (gdb)
>>>> >>>>>>>>
>>>> >>>>>>>> Before I start digging into what is happening here I'd just like
>>>> >>>>>>>> to ask: Do I have to do something special when using the HDF5
>>>> >>>>>>>> library from two different threads? I'm not reading the same
>>>> >>>>>>>> files or anything, it's simply two completely separate usages of
>>>> >>>>>>>> the library in threads that run in parallel. Does the library
>>>> >>>>>>>> have any global structures or something that must be initialized
>>>> >>>>>>>> before spawning any threads that use it? The reason I'm a little
>>>> >>>>>>>> worried is that perhaps I've just been lucky when running under
>>>> >>>>>>>> Ubuntu / HDF5 1.8.16. My usage in each thread basically looks
>>>> >>>>>>>> like:
>>>> >>>>>>>>
>>>> >>>>>>>> 1) Create a H5::H5File
>>>> >>>>>>>> 2) Open a dataset using file.openDataSet
>>>> >>>>>>>> 3) Get the dataspace for the dataset and select a hyperslab
>>>> >>>>>>>> 4) Create a memory dataspace
>>>> >>>>>>>> 5) Perform a single read(..) operation from the dataset dataspace
>>>> >>>>>>>>    to the memory dataspace
>>>> >>>>>>>>
>>>> >>>>>>>> And it's always different files that the threads work with. Is
>>>> >>>>>>>> there some step 0 I'm missing?
>>>> >>>>>>>>
>>>> >>>>>>>> Thanks in advance for any advice.
>>>> >>>>>>>>
>>>> >>>>>>>> Elvis
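Spelled out, that five-step sequence in the C++ API might look roughly like the following; the dataset name, rank and element type are placeholders, and error handling is omitted:

    #include <string>
    #include <vector>
    #include <H5Cpp.h>

    // Read a 3-D hyperslab of unsigned shorts from one file.
    std::vector<unsigned short> readSlab(const std::string &path,
                                         const hsize_t offset[3],
                                         const hsize_t count[3])
    {
        // 1) Create a H5::H5File.
        H5::H5File file(path, H5F_ACC_RDONLY);

        // 2) Open a dataset.
        H5::DataSet dataset = file.openDataSet("/image");  // "/image" is illustrative

        // 3) Get the dataset's dataspace and select a hyperslab in it.
        H5::DataSpace fileSpace = dataset.getSpace();
        fileSpace.selectHyperslab(H5S_SELECT_SET, count, offset);

        // 4) Create a memory dataspace describing the in-memory buffer.
        H5::DataSpace memSpace(3, count);

        // 5) Perform a single read from the file dataspace to the memory dataspace.
        std::vector<unsigned short> buffer(count[0] * count[1] * count[2]);
        dataset.read(buffer.data(), H5::PredType::NATIVE_USHORT, memSpace, fileSpace);
        return buffer;
    }

If the library is not built thread-safe, the whole function would of course still have to run under an application-level lock, as discussed earlier in the thread.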
>>>> >>>>>>
>>>> >>>>>
>>>> >>>>
>>>> >>>
>>>> >>>
>>>> >>> --
>>>> >>> ___________________________________________________________________________
>>>> >>> Dr. Werner Benger                Visualization Research
>>>> >>> Center for Computation & Technology at Louisiana State University (CCT/LSU)
>>>> >>> 2019  Digital Media Center, Baton Rouge, Louisiana 70803
>>>> >>> Tel.: +1 225 578 4809                        Fax.: +1 225 578-5362
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> >
>>>> >
>>>
>>>
>>
>>
>>
>
>
>
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5
