[ 
https://issues.apache.org/jira/browse/SPARK-32271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Jordan updated SPARK-32271:
----------------------------------
    Description: 
*What changes were proposed in this pull request?*

I have added a `method` parameter to `CrossValidator.scala` that lets the user 
choose between repeated random sub-sampling cross-validation (the current 
behavior) and _k_-fold cross-validation (an optional new behavior). The default 
is repeated random sub-sampling cross-validation, so existing code behaves 
exactly as before.
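
As a rough illustration, the new parameter could follow the existing `Param` 
conventions in `spark.ml`; the trait and value names below are illustrative 
only, not the final API:

{code:scala}
import org.apache.spark.ml.param.{Param, ParamValidators, Params}

// Hypothetical mix-in for the proposed parameter; names are illustrative.
trait HasMethod extends Params {
  // "random" = repeated random sub-sampling (current default behavior),
  // "kfold"  = k-fold cross-validation (proposed new behavior).
  final val method: Param[String] = new Param[String](this, "method",
    "cross-validation method, one of: random, kfold",
    ParamValidators.inArray(Array("random", "kfold")))

  setDefault(method -> "random")

  final def getMethod: String = $(method)
}
{code}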

If _k_-fold cross-validation is chosen, the new behavior is as follows (a 
sketch of the fold handling appears after this list):
 # Instead of splitting the input dataset into _k_ independent training/validation 
pairs, I split it into _k_ disjoint folds; in each of the _k_ rounds, one fold 
is held out for validation and the remaining folds are unioned together for 
training.
 # Instead of caching each training and validation set _k_ times, I cache each 
fold once.
 # Instead of waiting for every model to finish training on fold _n_ before 
moving on to fold _n+1_, new fold/model combinations are trained as soon as 
resources become available.
 # Instead of creating one `Future` per model for each fold in series, the 
`Future`s for every fold and parameter-grid pair are created up front and run 
in parallel.
 # Each `Future` now carries an `Int` tag (its result type changes from 
`Future[Double]` to `Future[(Int, Double)]`) so that each result can be traced 
back to its entry in the parameter grid.
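
A minimal sketch of steps 1 and 2, assuming an input `DataFrame` named 
`dataset` is in scope (the helper name is illustrative, not taken from the 
patch):

{code:scala}
import org.apache.spark.sql.DataFrame

// Split the dataset into k disjoint folds, caching each fold exactly once,
// then pair each fold (validation) with the union of the others (training).
def kFoldPairs(dataset: DataFrame, k: Int, seed: Long): Seq[(DataFrame, DataFrame)] = {
  val folds = dataset.randomSplit(Array.fill(k)(1.0 / k), seed).map(_.cache())
  folds.indices.map { i =>
    val validation = folds(i)
    val training = folds.indices
      .filter(_ != i)
      .map(j => folds(j))
      .reduce(_ union _)
    (training, validation)
  }
}
{code}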

*Why are the changes needed?*

These changes allow the user to choose between repeated random sub-sampling 
cross-validation (the current behavior) and _k_-fold cross-validation (an 
optional new behavior). Specifically, they:
 1. allow the user to choose between two types of cross-validation.
 2. (if _k_-fold is chosen) require caching the dataset only once, instead of 
the _k_ times that repeated random sub-sampling cross-validation requires 
today.
 3. (if _k_-fold is chosen) free resources to train a new model/fold 
combination as soon as the previous one finishes. Currently, a model can train 
only one fold at a time. With _k_-fold, `fit` can train multiple folds at once 
for the same model and, in a grid search, multiple model/fold combinations at 
once, without waiting for the slowest model to fit the first fold before 
moving on to the second (see the scheduling sketch below).
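
To make point 3 concrete, here is a hedged sketch of how the scheduling could 
work: every (parameter map, fold) combination gets its own `Future`, tagged 
with the parameter map's index so results can be grouped and averaged per grid 
entry. The `estimator`, `evaluator`, param grid `epm`, and the `pairs` from 
the earlier sketch are assumed inputs; this is not the final implementation.

{code:scala}
import org.apache.spark.ml.{Estimator, Model}
import org.apache.spark.ml.evaluation.Evaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.sql.DataFrame
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// Sketch only: all four arguments are assumed to be supplied by the caller;
// `pairs` is the output of the fold-splitting sketch above.
def fitAllFolds(estimator: Estimator[_],
                evaluator: Evaluator,
                epm: Array[ParamMap],
                pairs: Seq[(DataFrame, DataFrame)])
               (implicit ec: ExecutionContext): Map[Int, Double] = {
  // One Future per (parameter map, fold) pair, all submitted up front, so a
  // slow model on one fold never blocks training on the others.
  val futures: Seq[Future[(Int, Double)]] =
    for {
      (paramMap, paramIdx) <- epm.zipWithIndex.toSeq
      (training, validation) <- pairs
    } yield Future {
      val model = estimator.fit(training, paramMap).asInstanceOf[Model[_]]
      (paramIdx, evaluator.evaluate(model.transform(validation, paramMap)))
    }

  // Group each (paramIdx, metric) result by grid entry and average over folds.
  Await.result(Future.sequence(futures), Duration.Inf)
    .groupBy { case (paramIdx, _) => paramIdx }
    .map { case (paramIdx, results) =>
      paramIdx -> results.map(_._2).sum / results.size
    }
}
{code}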

*Does this PR introduce _any_ user-facing change?*

Yes. This PR introduces a `setMethod` setter on `CrossValidator`. If the 
`method` parameter is not set, the behavior is identical to today's.
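
Hypothetical usage (the `setMethod` call and the `"kfold"` value come from 
this proposal and are not part of released Spark; the rest is the standard 
`spark.ml.tuning` API):

{code:scala}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val lr = new LogisticRegression()
val grid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.1))
  .build()

val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(grid)
  .setNumFolds(3)
  .setMethod("kfold") // proposed setter; omit it to keep the current behavior

// val cvModel = cv.fit(trainingData) // `trainingData`: a labeled DataFrame
{code}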

*How was this patch tested?*

Unit tests will be added.
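
One plausible shape for such a test, in the style of the existing 
`CrossValidatorSuite` (the `grid` and `dataset` fixtures, like `setMethod` 
itself, are assumptions of this sketch):

{code:scala}
test("cross-validation with method=kfold averages metrics across folds") {
  val cv = new CrossValidator()
    .setEstimator(new LogisticRegression())
    .setEvaluator(new BinaryClassificationEvaluator())
    .setEstimatorParamMaps(grid)
    .setNumFolds(3)
    .setMethod("kfold") // proposed parameter under test
  val cvModel = cv.fit(dataset)
  // Expect one averaged metric per entry in the parameter grid.
  assert(cvModel.avgMetrics.length === grid.length)
}
{code}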


> Update CrossValidator to parallelize fit method across folds
> ------------------------------------------------------------
>
>                 Key: SPARK-32271
>                 URL: https://issues.apache.org/jira/browse/SPARK-32271
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>    Affects Versions: 3.1.0
>            Reporter: Austin Jordan
>            Priority: Minor
>


