[ https://issues.apache.org/jira/browse/SPARK-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394201#comment-14394201 ]

Sean Owen commented on SPARK-6665:
----------------------------------

k-fold cross validation will use all of the data once, and the MLUtils.kFold 
utility already computes all k folds in one pass. You can also train / score 
each of the k folds in parallel.
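
Roughly, something like this (a sketch only, assuming an existing SparkContext 
{{sc}}; the data, fold count, and seed are placeholders):

{code}
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD

val data: RDD[Double] = sc.parallelize(1 to 1000).map(_.toDouble)

// kFold computes all k (training, validation) splits in one call.
val folds: Array[(RDD[Double], RDD[Double])] = MLUtils.kFold(data, 10, 42)

folds.foreach { case (training, validation) =>
  // Fit on `training`, evaluate on `validation`; the folds are independent,
  // so these iterations can also be driven concurrently from the driver.
}
{code}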

Yes, this seems to be about creating files with certain properties to be 
consumed externally, which is a secondary use case for Spark core but not an 
invalid one. Random permutation is a fairly generic operation. I hesitate on 
this only because you can achieve it any time you like in a couple of lines of 
code (see the sketch below), without adding even a little extra weight to the 
core API.
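
Roughly the kind of thing that means (a sketch, not a definitive 
implementation; the sample data is a placeholder and MurmurHash3 is just one 
reasonable choice of hash):

{code}
import scala.util.Random
import scala.util.hashing.MurmurHash3
import org.apache.spark.rdd.RDD

val rdd: RDD[String] = sc.parallelize(Seq("a", "b", "c", "d"))

// One salt per shuffle, so repeated runs produce different permutations.
val salt = Random.nextInt()

// Sorting by a good hash of (element, salt) yields an effectively random order.
val shuffled: RDD[String] = rdd.sortBy(x => MurmurHash3.stringHash(x + salt))
{code}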

> Randomly Shuffle an RDD 
> ------------------------
>
>                 Key: SPARK-6665
>                 URL: https://issues.apache.org/jira/browse/SPARK-6665
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Shell
>            Reporter: Florian Verhein
>            Priority: Minor
>
> *Use case* 
> An RDD created in a way that has some ordering, but you need to shuffle it 
> because the ordering would cause problems downstream. E.g.:
> - it will be used to train an ML algorithm that makes stochastic assumptions 
> (like SGD) 
> - it will be used as input for cross validation, e.g. after the shuffle you could 
> just grab partitions (or part files if saved to HDFS) as folds
> Related question in mailing list: 
> http://apache-spark-user-list.1001560.n3.nabble.com/random-shuffle-streaming-RDDs-td17965.html
> *Possible implementation*
> As mentioned by [~sowen] in the above thread, could sort by (a good hash of 
> (the element (or key, if it's paired) and a random salt)). 


