[GitHub] [spark] zhengruifeng commented on a change in pull request #25926: [SPARK-9612][ML] Add instance weight support for GBTs
zhengruifeng commented on a change in pull request #25926: [SPARK-9612][ML] Add instance weight support for GBTs
URL: https://github.com/apache/spark/pull/25926#discussion_r332326534

## File path: mllib/src/main/scala/org/apache/spark/ml/tree/impl/GradientBoostedTrees.scala

```diff
@@ -197,41 +215,41 @@ private[spark] object GradientBoostedTrees extends Logging {
    *         containing the first i+1 trees
    */
   def evaluateEachIteration(
-      data: RDD[LabeledPoint],
+      data: RDD[Instance],
       trees: Array[DecisionTreeRegressionModel],
       treeWeights: Array[Double],
       loss: OldLoss,
       algo: OldAlgo.Value): Array[Double] = {
     val sc = data.sparkContext
     val remappedData = algo match {
-      case OldAlgo.Classification => data.map(x => new LabeledPoint((x.label * 2) - 1, x.features))
+      case OldAlgo.Classification =>
+        data.map(x => Instance((x.label * 2) - 1, x.weight, x.features))
       case _ => data
     }
     val broadcastTrees = sc.broadcast(trees)
     val localTreeWeights = treeWeights
-    val treesIndices = trees.indices
-
-    val dataCount = remappedData.count()
-    val evaluation = remappedData.map { point =>
-      treesIndices.map { idx =>
-        val prediction = broadcastTrees.value(idx)
-          .rootNode
-          .predictImpl(point.features)
-          .prediction
-        prediction * localTreeWeights(idx)
+    val numTrees = trees.length
+
+    val (errSum, weightSum) = remappedData.mapPartitions { iter =>
```

Review comment:
Here I just followed the previous implementation; I am neutral on it.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
With regards, Apache Git Services
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
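The diff above replaces a per-point `map` over tree indices with a single weighted-sum pass over the data. The core idea — accumulate `loss * weight` per tree prefix plus a total weight, then divide — can be sketched in plain Scala without Spark. Everything here is a hypothetical stand-in (a tree is modeled as a plain prediction function, and squared error stands in for `OldLoss`), not the actual Spark implementation:

```scala
object WeightedEvalSketch {
  // Simplified stand-in for org.apache.spark.ml.feature.Instance.
  case class Instance(label: Double, weight: Double, features: Array[Double])

  // Squared-error loss, standing in for the PR's OldLoss parameter.
  def squaredError(pred: Double, label: Double): Double = {
    val d = pred - label
    d * d
  }

  /**
   * For each prefix of trees (the first i+1 trees), compute the weighted
   * average loss: sum(w * loss) / sum(w). A running prediction per instance
   * lets one pass cover every prefix, mirroring the mapPartitions pass
   * sketched in the diff.
   */
  def evaluateEachIteration(
      data: Seq[Instance],
      trees: Array[Array[Double] => Double],
      treeWeights: Array[Double]): Array[Double] = {
    val numTrees = trees.length
    val errSum = Array.fill(numTrees)(0.0)
    var weightSum = 0.0
    data.foreach { inst =>
      var pred = 0.0
      var i = 0
      while (i < numTrees) {
        // Prediction of the first i+1 trees, weighted by treeWeights.
        pred += trees(i)(inst.features) * treeWeights(i)
        errSum(i) += squaredError(pred, inst.label) * inst.weight
        i += 1
      }
      weightSum += inst.weight
    }
    errSum.map(_ / weightSum)
  }
}
```

Normalizing by `weightSum` rather than `dataCount` is what makes the per-iteration error respect instance weights, which is the point of the PR's change.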
zhengruifeng commented on a change in pull request #25926: [SPARK-9612][ML] Add instance weight support for GBTs
URL: https://github.com/apache/spark/pull/25926#discussion_r332320495

## File path: mllib/src/main/scala/org/apache/spark/mllib/tree/GradientBoostedTrees.scala

```diff
@@ -68,7 +68,7 @@ class GradientBoostedTrees private[spark] (
   def run(input: RDD[LabeledPoint]): GradientBoostedTreesModel = {
     val algo = boostingStrategy.treeStrategy.algo
     val (trees, treeWeights) = NewGBT.run(input.map { point =>
-      NewLabeledPoint(point.label, point.features.asML)
+      NewLabeledPoint(point.label, point.features.asML).toInstance
```

Review comment:
Yes, it is better to directly create `Instance`.
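The point being agreed to above is that `NewLabeledPoint(...).toInstance` builds an intermediate `LabeledPoint` per record only to discard it. A minimal sketch of the difference, using hypothetical stand-in case classes (not the actual Spark types; `features` is a `Vector[Double]` here only so structural equality works):

```scala
object InstanceConversionSketch {
  // Simplified stand-in for org.apache.spark.ml.feature.Instance.
  case class Instance(label: Double, weight: Double, features: Vector[Double])

  // Simplified stand-in for ml.feature.LabeledPoint; its toInstance
  // assigns the default unit weight, as in Spark.
  case class NewLabeledPoint(label: Double, features: Vector[Double]) {
    def toInstance: Instance = Instance(label, 1.0, features)
  }

  // Direct construction: the same result with one fewer allocation
  // per record, which matters inside an RDD map over large data.
  def direct(label: Double, features: Vector[Double]): Instance =
    Instance(label, 1.0, features)
}
```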
zhengruifeng commented on a change in pull request #25926: [SPARK-9612][ML] Add instance weight support for GBTs
URL: https://github.com/apache/spark/pull/25926#discussion_r328916510

## File path: mllib/src/test/scala/org/apache/spark/ml/regression/GBTRegressorSuite.scala

```diff
@@ -296,6 +304,35 @@ class GBTRegressorSuite extends MLTest with DefaultReadWriteTest {
     }
   }
 
+  test("training with sample weights") {
+    val df = linearRegressionData
+    val numClasses = 0
+    // (maxIter, maxDepth)
+    val testParams = Seq(
+      (5, 5),
+      (5, 10)
+    )
+
+    for ((maxIter, maxDepth) <- testParams) {
+      val estimator = new GBTRegressor()
+        .setMaxIter(maxIter)
+        .setMaxDepth(maxDepth)
+        .setSeed(seed)
+        .setMinWeightFractionPerNode(0.1)
+
+      MLTestingUtils.testArbitrarilyScaledWeights[GBTRegressionModel,
+        GBTRegressor](df.as[LabeledPoint], estimator,
+        MLTestingUtils.modelPredictionEquals(df, _ ~= _ relTol 0.1, 0.95))
```

Review comment:
Compared to `DecisionTreeRegressorSuite`, I needed to limit the number of trees and loosen the tolerance eps (0.99 -> 0.95) to make these cases pass. I wonder whether this is due to error accumulating across trees.
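The property this test exercises is that a correctly weighted estimator is invariant to multiplying every instance weight by one positive constant, because the constant cancels between the weighted sums in the numerator and denominator. A minimal sketch of that invariance, using a weighted mean as a hypothetical stand-in for a fitted model's predictions (the actual suite compares `GBTRegressionModel` predictions via `MLTestingUtils`):

```scala
object WeightScalingSketch {
  // Weighted mean: sum(w * y) / sum(w). Scaling all weights by a
  // constant c > 0 multiplies numerator and denominator by c, leaving
  // the result unchanged, which is the invariance that
  // testArbitrarilyScaledWeights checks up to a numerical tolerance.
  def weightedMean(labels: Seq[Double], weights: Seq[Double]): Double = {
    require(labels.length == weights.length, "labels and weights must align")
    val num = labels.zip(weights).map { case (y, w) => y * w }.sum
    num / weights.sum
  }
}
```

For a GBT the invariance only holds approximately, since weights enter split selection, shrinkage, and per-tree residuals; small numerical differences can compound over boosting iterations, which is consistent with the reviewer needing a looser tolerance here than in the single-tree suite.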