Hi,

Yes, that sounds good. I will start working on the UI aspects while waiting for them to release fixes. If they release the fixes while I am working on the UI, that should be fine. Otherwise, I will start implementing the autoencoder.
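For reference, a minimal sketch of the kind of autoencoder meant here (plain NumPy, with tied weights and batch gradient descent on reconstruction error — my own illustrative assumptions, not DL4J's API or coding style):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyAutoencoder:
    """One-hidden-layer autoencoder with tied weights, trained by
    batch gradient descent on mean squared reconstruction error."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)   # hidden bias
        self.b_v = np.zeros(n_visible)  # visible (reconstruction) bias

    def encode(self, X):
        return sigmoid(X @ self.W + self.b_h)

    def decode(self, H):
        return sigmoid(H @ self.W.T + self.b_v)  # tied weights: W.T decodes

    def reconstruction_error(self, X):
        return float(((self.decode(self.encode(X)) - X) ** 2).mean())

    def train(self, X, lr=0.5, epochs=300):
        n = len(X)
        for _ in range(epochs):
            H = self.encode(X)                        # forward pass
            R = self.decode(H)
            d_out = (R - X) * R * (1.0 - R)           # output-layer delta
            d_hid = (d_out @ self.W) * H * (1.0 - H)  # hidden-layer delta
            # Tied weights get gradient contributions from both paths
            grad_W = (X.T @ d_hid + d_out.T @ H) / n
            self.W -= lr * grad_W
            self.b_h -= lr * d_hid.mean(axis=0)
            self.b_v -= lr * d_out.mean(axis=0)

X = rng.integers(0, 2, size=(32, 8)).astype(float)  # toy binary data
ae = TinyAutoencoder(n_visible=8, n_hidden=4)
before = ae.reconstruction_error(X)
ae.train(X)
after = ae.reconstruction_error(X)
print(before, after)  # reconstruction error should drop after training
```

The real implementation would of course have to follow DL4J's layer/configuration conventions; this only shows the forward pass and gradient updates a plain sigmoid autoencoder needs.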
On Thu, Jun 18, 2015 at 12:04 PM, Srinath Perera <[email protected]> wrote:

> Let's work on other parts of the project; for example, we can look at a UI
> to configure deep learning while waiting for them to do it. However, since
> the fix takes time, we will have to write the autoencoder ourselves.
>
> On Thu, Jun 18, 2015 at 7:26 AM, Thushan Ganegedara <[email protected]> wrote:
>
>> Dear all,
>>
>> I posted about the poor accuracy on the deeplearning4j Google group, and
>> they replied saying they are aware of this issue
>> (https://groups.google.com/forum/#!topic/deeplearning4j/gRQH8rA9SQI).
>> They are currently working on it, to get the tuned examples out soon.
>>
>> However, judging by their response, deeplearning4j has a few bugs/issues
>> with gradient calculation after the move to nd4j.
>>
>> I can see two options right now:
>>
>> 1. Wait until the bugs are fixed and the tuned examples are out.
>> 2. Write the autoencoder myself, adhering to the coding style they have
>>    followed, so the example can easily be integrated with deeplearning4j
>>    once a stable version is out.
>>
>> I would highly appreciate your feedback.
>>
>> On Wed, Jun 17, 2015 at 10:27 AM, Thushan Ganegedara <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> I couldn't find any reports of high accuracy on any dataset. The most
>>> popular datasets seem to be the Iris and MNIST datasets. Their examples
>>> page on GitHub does show several results for different techniques and
>>> datasets (https://github.com/deeplearning4j/dl4j-0.0.3.3-examples), and
>>> as can be seen there, the reported accuracies are quite low. (However,
>>> the developers claim this is because they ran the examples with fewer
>>> nodes and fewer iterations.)
>>>
>>> On Tue, Jun 16, 2015 at 6:03 PM, Srinath Perera <[email protected]> wrote:
>>>
>>>> Are there any other datasets where dl4j is supposed to do well? As long
>>>> as it does better with *some* datasets, we can go ahead with those?
>>>> On Tue, Jun 16, 2015 at 9:13 AM, Thushan Ganegedara <[email protected]> wrote:
>>>>
>>>>> Yes, there are a few user lists (GitHub and the Google group). I will
>>>>> inquire about this in the user lists.
>>>>>
>>>>> Thank you
>>>>>
>>>>> On Tue, Jun 16, 2015 at 12:34 PM, Nirmal Fernando <[email protected]> wrote:
>>>>>
>>>>>> Thanks Thushan for the update.
>>>>>>
>>>>>> In addition to digging into the code, can you also inquire about the
>>>>>> poor performance on the DL4J user list (if one exists)?
>>>>>>
>>>>>> On Tue, Jun 16, 2015 at 5:27 AM, Thushan Ganegedara <[email protected]> wrote:
>>>>>>
>>>>>>> Dear all,
>>>>>>>
>>>>>>> Please find below an update on the DL4J testing.
>>>>>>>
>>>>>>> *Poor accuracy*
>>>>>>> I have been testing DL4J extensively with the *MNIST and Iris*
>>>>>>> datasets (small and full). However, I was unable to get a reasonable
>>>>>>> accuracy with DL4J on either dataset. The F1 score was around 0.02,
>>>>>>> which is very low.
>>>>>>>
>>>>>>> I tried different settings, mainly for the following attributes:
>>>>>>>
>>>>>>> - Weight initialization
>>>>>>> - Gradient descent
>>>>>>> - Iterations
>>>>>>> - Type of units: autoencoder/RBM
>>>>>>>
>>>>>>> None of the settings gave a reasonable accuracy. Furthermore, the
>>>>>>> predicted values for the test data usually *belong to only 1 or 2
>>>>>>> classes* (e.g. when trained on the MNIST dataset, the program
>>>>>>> predicts only 0 and 1, though there are 10 possible classes).
>>>>>>>
>>>>>>> There are also many reports of *poor accuracy with DL4J*. The best
>>>>>>> reported result I could find was around a 0.5 F1 score for MNIST,
>>>>>>> which is still very low (MNIST can easily reach 0.9+ accuracy with
>>>>>>> even a basic deep network).
>>>>>>>
>>>>>>> I'm currently trying to delve into the DL4J code and figure out how
>>>>>>> the learning is done. I'm assuming there are some faults in the
>>>>>>> learning process which cause the algorithm to learn poorly.
>>>>>>>
>>>>>>> Thank you
>>>>>>>
>>>>>>> --
>>>>>>> Regards,
>>>>>>>
>>>>>>> Thushan Ganegedara
>>>>>>> School of IT
>>>>>>> University of Sydney, Australia
>>>>>>
>>>>>> --
>>>>>> Thanks & regards,
>>>>>> Nirmal
>>>>>>
>>>>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>>>>> Mobile: +94715779733
>>>>>> Blog: http://nirmalfdo.blogspot.com/
>>>>
>>>> --
>>>> ============================
>>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>>> Site: http://people.apache.org/~hemapani/
>>>> Photos: http://www.flickr.com/photos/hemapani/
>>>> Phone: 0772360902

--
Regards,

Thushan Ganegedara
School of IT
University of Sydney, Australia
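A quick sanity check on that 0.02 figure: a model whose predictions collapse to a single class on balanced 10-class data gets a macro-averaged F1 of about 0.018, which matches the degenerate predictions described in the thread. A minimal illustration (plain Python; macro averaging is my assumption — the averaging DL4J reports may differ):

```python
def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1: per-class F1 scores, averaged with equal weight."""
    scores = []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / n_classes

y_true = [i % 10 for i in range(100)]  # balanced labels, 10 per class
y_pred = [0] * 100                     # degenerate model: always predicts 0
print(macro_f1(y_true, y_pred, 10))    # roughly 0.018 -- close to the ~0.02 observed
```

Only class 0 contributes a non-zero F1 (precision 0.1, recall 1.0, F1 ≈ 0.18), and dividing by 10 classes gives ≈ 0.018, so a score in that range is itself a strong hint that the network is predicting one class regardless of input.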
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
