[ https://issues.apache.org/jira/browse/FLINK-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14526695#comment-14526695 ]
ASF GitHub Bot commented on FLINK-1807:
---------------------------------------

Github user thvasilo commented on a diff in the pull request:

    https://github.com/apache/flink/pull/613#discussion_r29590147

    --- Diff: flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/optimization/RegularizationType.scala ---
    @@ -0,0 +1,143 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements. See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership. The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License. You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.flink.ml.optimization
    +
    +import org.apache.flink.api.scala._
    +import org.apache.flink.ml.math.{Vector => FlinkVector, BLAS}
    +import org.apache.flink.ml.math.Breeze._
    +
    +import breeze.numerics._
    +import breeze.linalg.max
    +
    +// TODO(tvas): Change name to RegularizationPenalty?
    +/** Represents a type of regularization penalty */
    +abstract class RegularizationType extends Serializable {
    +
    +  /** Updates the weights by taking a step according to the gradient and the
    +   * regularization applied.
    +   *
    +   * @param oldWeights The weights to be updated
    +   * @param gradient The gradient according to which we will update the weights
    +   * @param effectiveStepSize The effective step size for this iteration
    +   * @param regParameter The regularization parameter to be applied in the case of L1
    +   *                     regularization
    +   */
    +  def takeStep(
    +      oldWeights: FlinkVector,
    +      gradient: FlinkVector,
    +      effectiveStepSize: Double,
    +      regParameter: Double) {
    +    BLAS.axpy(-effectiveStepSize, gradient, oldWeights)
    +  }
    +}
    +
    +/** A regularization penalty that is differentiable */
    +abstract class DiffRegularizationType extends RegularizationType {
    +
    +  /** Computes the regularized loss and gradient for the given data.
    +   * The provided lossGradient is updated in place.
    +   *
    +   * @param loss The unregularized loss
    +   * @param weightVector The current weight vector
    +   * @param lossGradient The vector to which the gradient will be added, in place
    +   * @param regularizationParameter The regularization parameter
    +   * @return The regularized loss. The gradient is updated in place.
    +   */
    +  def regularizedLossAndGradient(
    +      loss: Double,
    +      weightVector: FlinkVector,
    +      lossGradient: FlinkVector,
    +      regularizationParameter: Double): Double = {
    +    val adjustedLoss = regLoss(loss, weightVector, regularizationParameter)
    +    regGradient(weightVector, lossGradient, regularizationParameter)
    +
    +    adjustedLoss
    +  }
    +
    +  /** Calculates the regularized loss **/
    +  def regLoss(oldLoss: Double, weightVector: FlinkVector, regularizationParameter: Double): Double
    +
    +  /** Calculates the regularized gradient **/
    --- End diff --

    True, will fix this.
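For context, here is how a concrete differentiable penalty could plug into the contract above. This is an illustrative sketch, not code from the pull request: the regGradient signature is inferred from its call site in regularizedLossAndGradient (the return type Unit is an assumption, since the diff cuts off before the declaration), and the BLAS helpers are assumed to behave like their Spark MLlib counterparts (dot is an inner product, axpy computes y += a * x).

    import org.apache.flink.ml.math.{Vector => FlinkVector, BLAS}

    // Illustrative sketch only: a hypothetical L2 penalty implementing the
    // DiffRegularizationType contract shown in the diff above.
    object L2RegularizationSketch extends DiffRegularizationType {

      /** Adds (lambda / 2) * ||w||^2 to the unregularized loss. */
      override def regLoss(
          oldLoss: Double,
          weightVector: FlinkVector,
          regularizationParameter: Double): Double = {
        oldLoss + 0.5 * regularizationParameter * BLAS.dot(weightVector, weightVector)
      }

      /** Adds lambda * w to the loss gradient, in place. The signature is
        * inferred from the call site above; the return type is an assumption. */
      override def regGradient(
          weightVector: FlinkVector,
          lossGradient: FlinkVector,
          regularizationParameter: Double): Unit = {
        // axpy: lossGradient += regularizationParameter * weightVector
        BLAS.axpy(regularizationParameter, weightVector, lossGradient)
      }
    }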
> Stochastic gradient descent optimizer for ML library
> ----------------------------------------------------
>
>                 Key: FLINK-1807
>                 URL: https://issues.apache.org/jira/browse/FLINK-1807
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>
> Stochastic gradient descent (SGD) is a widely used optimization technique in
> different ML algorithms. Thus, it would be helpful to provide a generalized
> SGD implementation which can be instantiated with the respective gradient
> computation. Such a building block would make the development of future
> algorithms easier.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
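To make the building-block idea from the issue description concrete, a generalized SGD pass instantiated with a pluggable gradient computation might look roughly like the sketch below. All names here (SgdSketch, runEpoch, gradientOf) are hypothetical and do not reflect an actual Flink ML API; the weight update reuses the RegularizationType.takeStep contract from the diff above.

    import org.apache.flink.ml.math.{Vector => FlinkVector}

    // Hypothetical sketch of the generalized-SGD building block described in
    // the issue; nothing here is part of the actual Flink ML library.
    object SgdSketch {

      /** One pass of SGD over in-memory data, parameterized by the gradient
        * computation and a RegularizationType-style weight update. */
      def runEpoch(
          data: Seq[(FlinkVector, Double)],
          weights: FlinkVector,
          stepSize: Double,
          regParameter: Double,
          gradientOf: (FlinkVector, Double, FlinkVector) => FlinkVector,
          regularization: RegularizationType): FlinkVector = {
        for ((features, label) <- data) {
          val gradient = gradientOf(features, label, weights)
          // takeStep mutates the weights in place via BLAS.axpy, as in the diff
          regularization.takeStep(weights, gradient, stepSize, regParameter)
        }
        weights
      }
    }

Under this split, an algorithm such as linear regression would only need to supply its own gradientOf, while the regularization behavior remains interchangeable.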