Hi Tim,

Thanks! I'm actually looking forward to seeing a version update of your great 
HDF5.jl. And BTW: I have been thinking about data randomization. How 
inefficient do you think it would be to read hdf5_dset[:,:,:,i] for 100 
random values of i within the index range, compared to reading 
hdf5_dset[:,:,:,k+1:k+100], i.e. 100 consecutive examples (no randomization) 
in a single read? Is there a recommended / better way of doing random access 
in HDF5 (HDF5.jl)? Thank you very much!
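
To make the comparison concrete, here is a rough sketch of the two access 
patterns I have in mind (the file name "mydata.h5" and dataset name "data" 
are just placeholders, and this assumes HDF5.jl's usual h5open/indexing API):

```julia
using HDF5

# A minimal sketch, assuming a 4-D dataset named "data" in mydata.h5
h5open("mydata.h5", "r") do f
    dset = f["data"]
    n = size(dset, 4)   # number of examples along the last dimension

    # Pattern 1: 100 separate reads, each fetching one example at a
    # random index -- 100 round trips into the file
    random_batch = [dset[:, :, :, i] for i in rand(1:n, 100)]

    # Pattern 2: one contiguous hyperslab read of 100 consecutive
    # examples -- a single round trip
    k = 0
    consecutive_batch = dset[:, :, :, k+1:k+100]
end
```

My (unbenchmarked) expectation is that the contiguous read should be 
noticeably faster, since HDF5 can satisfy it from far fewer chunk reads, 
while the random pattern pays per-read overhead 100 times.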

Best,
Chiyuan 

On Friday, November 28, 2014 2:51:29 PM UTC-5, Tim Holy wrote:
>
> Cool stuff! 
>
> --Tim 
>
> On Friday, November 28, 2014 07:42:47 AM Chiyuan Zhang wrote: 
> > Hi all, 
> > 
> > Mocha.jl <https://github.com/pluskid/Mocha.jl> is a Deep Learning 
> framework 
> > for Julia <http://julialang.org/>, inspired by the C++ Deep Learning 
> > framework Caffe <http://caffe.berkeleyvision.org/>. 
> > 
> > Please check out the new IJulia Notebook demo of using a pre-trained CNN 
> > on ImageNet to do image classification: 
> > 
> > http://nbviewer.ipython.org/github/pluskid/Mocha.jl/blob/master/examples/ijulia/ilsvrc12/imagenet-classifier.ipynb 
> > 
> > Here is the detailed change log since the last release: 
> > 
> > v0.0.3 2014.11.27 
> > 
> >    - Interface 
> >       - IJulia-notebook example 
> >       - Image classifier wrapper 
> >    - Network 
> >       - Data transformers for data layers 
> >       - Argmax, Crop, Reshape, HDF5 Output, Weighted Softmax-loss Layers 
> >    - Infrastructure 
> >       - Unit tests are extended to cover all layers in both Float32 and 
> >       Float64 
> >       - Compatibility with Julia v0.3.3 and v0.4 nightly build 
> >    - Documentation 
> >       - Complete User's Guide 
> >       - Tutorial on image classification with pre-trained imagenet model 
> > 
> > 
> > Best, 
> > pluskid 
>
>
