Hi Trevor,

Great to hear that you have a working prototype :-) It is also good that
you shared the insights you gained while implementing it. Flink's ML
library is far from perfect and, thus, all kinds of feedback are highly
valuable. In general it is always good to contribute code back if you
think it makes a valuable addition. I'll try to reply to your points
inline.

On Mon, Mar 28, 2016 at 8:01 PM, Trevor Grant <trevor.d.gr...@gmail.com>
wrote:

> Hey,
>
> I have a working prototype of a multi-layer perceptron implementation
> in Flink.
>
> I made every effort to utilize existing code where possible.
>
> In the process of doing this there were some hacks I want/need, and I
> think this should be broken up into multiple PRs, possibly abstracting
> out the whole thing, because the MLP implementation I came up with is
> itself designed to be extendable to Long Short-Term Memory networks.


> At the top level, here are some of the sub-PRs:
>
> - Expand SGD to allow for predicting vectors instead of just Doubles. This
> allows the same NN code (and other algos) to be used for classification,
> transformations, and regressions.
>
I agree that we could extend the LabeledVector to support a Vector[Double]
as label instead of a single Double. Initially we implemented it with a
single label value for the sake of simplicity. I remember that we also
discussed this once, but somehow we didn't derive any action points from
it. If you have code for that, then feel free to open a PR.
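
For illustration, here is a minimal sketch of what such a generalization
could look like (the type name VectorLabeledVector is made up for this
example; it is not part of FlinkML):

import org.apache.flink.ml.math.{DenseVector, Vector}

// Hypothetical generalization of FlinkML's LabeledVector: the label is a
// Vector instead of a single Double, so the same type covers regression,
// classification and vector-valued targets such as NN outputs.
case class VectorLabeledVector(label: Vector, vector: Vector)

object VectorLabeledVector {
  // Convenience constructor covering today's single-Double label case
  def apply(label: Double, vector: Vector): VectorLabeledVector =
    VectorLabeledVector(DenseVector(label), vector)
}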


> - Allow for 'warm starts' -> this requires adding a parameter to
> IterativeSolver that basically starts at iteration N. This is somewhat
> akin to the idea of partial fits in sklearn, or making the iterative
> solver keep some sort of internal counter so that calling 'fit' just
> runs another N iterations (as set by setIterations) instead of assuming
> it is back at zero. This might seem trivial but has a significant impact
> on step size calculations.
>
That is a good point and should not be too hard to add, I would assume.
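
To illustrate why restarting the counter matters: FlinkML's gradient
descent, if I remember correctly, decays the effective step size roughly
like learningRate / sqrt(iteration). A warm-start-aware version could look
like the following sketch (initialIteration being the hypothetical new
IterativeSolver parameter):

// Sketch of a warm-start-aware step size, assuming a decay of
// learningRate / sqrt(iteration) as in FlinkML's default SGD.
def effectiveStepSize(learningRate: Double,
                      localIteration: Int,
                      initialIteration: Int): Double = {
  // continue the decay schedule instead of resetting it to iteration 1
  val globalIteration = initialIteration + localIteration
  learningRate / math.sqrt(globalIteration)
}

A cold start would use learningRate / sqrt(1) for the first step, while a
warm start after 100 previous iterations would continue with
learningRate / sqrt(101) instead of taking an overly large step.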


> - A library of model grading metrics. Having 'calculate RSquare' as a
> built-in method on every regressor doesn't seem like an efficient way to
> do this long term.
>
Agreed. The squaredResidualSum method of MLR is just a convenience method
to get at least one accuracy metric back. There is an open PR by Theo
which adds an evaluation framework to FlinkML [1]. If I'm not mistaken,
it adds a more generalized means to calculate grading metrics.
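
Just to sketch the general idea of such a framework (the Score trait below
is made up for illustration; it is not the API of [1]):

import org.apache.flink.api.scala._

// Hypothetical shape of a reusable grading metric: a function from
// (truth, prediction) pairs to a single score, usable by any predictor
// instead of baking squaredResidualSum-style methods into each one.
trait Score[P] {
  def evaluate(pairs: DataSet[(P, P)]): DataSet[Double]
}

// Example instance: mean squared error over Double predictions
object MeanSquaredError extends Score[Double] {
  override def evaluate(pairs: DataSet[(Double, Double)]): DataSet[Double] =
    pairs
      .map { case (truth, prediction) => (math.pow(truth - prediction, 2), 1L) }
      .reduce { (left, right) => (left._1 + right._1, left._2 + right._2) }
      .map { case (sum, count) => sum / count }
}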


> - BLAS for matrix ops (this was talked about earlier)
>
Here the recommended way is to convert your matrix to a Breeze matrix and
then use the BLAS operations from there.
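
For example (assuming the implicit converters in
org.apache.flink.ml.math.Breeze):

import org.apache.flink.ml.math.{DenseVector, Vector}
import org.apache.flink.ml.math.Breeze._ // implicit asBreeze/fromBreeze

// Move FlinkML vectors into Breeze, compute there (Breeze backs its dense
// linear algebra with netlib-java BLAS where available), convert back.
val w = DenseVector(1.0, 2.0, 3.0)
val x = DenseVector(4.0, 5.0, 6.0)

val dot: Double = w.asBreeze.dot(x.asBreeze) // BLAS-style inner product
val sum: Vector = (w.asBreeze + x.asBreeze).fromBreeze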


> - A neural net has arrays of matrices of weights (instead of just a
> vector). Currently I flatten the array of matrices out into a weight
> vector and reassemble it into an array of matrices, though this is
> probably not super efficient.
>
I would assume that you should simply operate on the flattened vector
without converting from one representation to the other.
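
For example, one could address a layer's weights directly inside the flat
array instead of rebuilding matrices (the layer shapes below are purely
illustrative):

// Sketch: read weight (i, j) of layer l straight out of the flat weight
// array. layerDims(l) = (rows, cols); offsets(l) = start of layer l,
// assuming the layers were flattened in column-major order.
val layerDims = Array((4, 3), (2, 4)) // e.g. an MLP with layers 3 -> 4 -> 2
val offsets = layerDims.scanLeft(0) { case (off, (rows, cols)) => off + rows * cols }

def weightAt(flat: Array[Double], layer: Int, i: Int, j: Int): Double = {
  val (rows, _) = layerDims(layer)
  flat(offsets(layer) + j * rows + i) // column-major within the layer
}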


> - The linear regression implementation currently presumes it will be using
> SGD, but I think that should be settable as a parameter, because if not,
> why do we have all of those other nice SGD methods just hanging out?
> Similarly, the loss function / partial loss is hard-coded. I recommend
> making the current setup the 'defaults' of a 'setOptimizer' method. I.e.
> if you want to just run an MLR you can do it based on the examples, but if
> you want to use a fancy optimizer you can create it from existing methods,
> or make your own, then call something like `mlr.setOptimizer( myOptimizer
> )`
>
Agreed. That was actually also our plan but we haven't come to it so far.
If you have the code available, then please open a PR for it.
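
A rough sketch of what that could look like (setOptimizer itself is
hypothetical; the optimizer construction below follows FlinkML's existing
optimization package, though factory names may differ between versions):

import org.apache.flink.ml.optimization._
import org.apache.flink.ml.regression.MultipleLinearRegression

// Build the optimizer explicitly instead of relying on the hard-coded SGD:
// squared loss with a linear prediction function, as MLR uses today.
val lossFunction = GenericLossFunction(SquaredLoss, LinearPrediction)

val myOptimizer = GradientDescent()
  .setLossFunction(lossFunction)
  .setIterations(200)
  .setStepsize(0.1)

val mlr = MultipleLinearRegression()
mlr.setOptimizer(myOptimizer) // hypothetical setter; the current setup
                              // would remain the default when unset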

> - and more
>
> At any rate- if some people could weigh in / direct me how to proceed that
> would be swell.
>
> Thanks!
> tg
>
>
> Trevor Grant
> Data Scientist
> https://github.com/rawkintrevo
> http://stackexchange.com/users/3002022/rawkintrevo
> http://trevorgrant.org
>
> *"Fortunate is he, who is able to know the causes of things."  -Virgil*
>
[1] https://github.com/apache/flink/pull/871

Cheers,
Till