Dear all,

Please find below an update on DL4J testing.

*Poor Accuracy*
I have been testing DL4J extensively with the *MNIST and Iris* datasets
(small and full versions). However, I was unable to get reasonable accuracy
with DL4J on either dataset. The F1 score was around 0.02, which is
extremely low.

I tried different settings, mainly for the following attributes:

Weight initialization
Gradient Descent
Iterations
Type of units: Autoencoder/RBM


But none of the settings gave reasonable accuracy. Furthermore, the
predicted values for the test data usually *belong to only 1 or 2 classes*
(e.g. when trained on the MNIST dataset, the program predicts only 0 and 1,
even though there are 10 possible classes).
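
Roughly, the kind of configuration I have been varying looks like the
sketch below. This is only an illustrative sketch, assuming a recent
DL4J/ND4J API (class and method names such as MnistDataSetIterator,
Nesterovs and MultiLayerNetwork may differ between versions), and it uses
plain dense layers rather than the autoencoder/RBM layers listed above:

import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.deeplearning4j.eval.Evaluation;
import org.deeplearning4j.nn.api.OptimizationAlgorithm;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MnistSanityCheck {
    public static void main(String[] args) throws Exception {
        int seed = 123;

        // The attributes I have been varying: weight initialization,
        // the optimizer / learning rate, the number of epochs, and the
        // layer types.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(seed)
                .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
                .updater(new Nesterovs(0.006, 0.9))
                .weightInit(WeightInit.XAVIER)
                .list()
                .layer(0, new DenseLayer.Builder().nIn(28 * 28).nOut(256)
                        .activation(Activation.RELU).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
        model.setListeners(new ScoreIterationListener(100)); // log score every 100 iterations

        DataSetIterator train = new MnistDataSetIterator(64, true, seed);
        DataSetIterator test = new MnistDataSetIterator(64, false, seed);

        for (int epoch = 0; epoch < 5; epoch++) {
            model.fit(train);
        }

        // Prints accuracy, precision, recall and F1 on the test set.
        Evaluation eval = model.evaluate(test);
        System.out.println(eval.stats());
    }
}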

Also, there are many reports of *poor accuracy with DL4J*. The best result
I could find reported was around a 0.5 F1 score on MNIST, which is still
very low (MNIST can easily reach 0.9+ accuracy with even a basic deep
network).

I'm currently trying to delve into the DL4J code and figure out how the
learning is done. I'm assuming there are some faults in the learning
process which cause the algorithm to learn poorly.
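
As a first sanity check while going through the code, I want to confirm
(a) whether the score actually decreases as training proceeds and (b)
which classes the trained network ends up predicting. Continuing from the
sketch above (and again assuming a recent ND4J API, e.g. Nd4j.argMax and
DataSet.getFeatures), something along these lines should show both:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.factory.Nd4j;

// Reuses 'model', 'train' and 'test' from the earlier sketch.
test.reset();
DataSet testBatch = test.next();

System.out.println("score before extra epoch: " + model.score());
model.fit(train);
System.out.println("score after extra epoch:  " + model.score());

// Predicted class index for each example in the batch. With the collapse
// I am seeing, nearly every prediction comes out as class 0 or 1.
INDArray predictions = Nd4j.argMax(model.output(testBatch.getFeatures()), 1);
System.out.println(predictions);

If the score does not decrease between epochs, the fault is most likely in
the update step rather than in evaluation.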

Thank you

-- 
Regards,

Thushan Ganegedara
School of IT
University of Sydney, Australia
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
