Github user ConeyLiu commented on the issue:

    https://github.com/apache/spark/pull/17936
  
    Sorry for the mistake; these test results are for the cached situation:
    | Time with this PR | Time without this PR | Speedup |
    | ------ | ------ | ------ |
    | 15.877s | 2827.373s | 178x |
    | 16.781s | 2809.502s | 167x |
    | 16.320s | 2845.699s | 174x |
    | 19.437s | 2860.387s | 147x |
    | 16.793s | 2931.667s | 174x |
    
    Test case:
    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    object TestNetflixlib {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("Test Netflix MLlib")
        val sc = new SparkContext(conf)

        // Netflix training set: each line is "user::item::rating".
        val data = sc.textFile("hdfs://10.1.2.173:9000/nf_training_set.txt")

        val ratings = data.map(_.split("::") match {
          case Array(user, item, rate) => Rating(user.toInt, item.toInt, rate.toDouble)
        })

        val rank = 0
        val numIterations = 10
        val train_start = System.nanoTime()
        val model = ALS.train(ratings, rank, numIterations, 0.01)
        val user = model.userFeatures
        val item = model.productFeatures

        // Time the cartesian product of the two factor RDDs.
        val start = System.nanoTime()
        val rate = user.cartesian(item)
        println(rate.count())
        val time = (System.nanoTime() - start) / 1e9
        println(time)

        sc.stop()
      }
    }
    ```
    
    The RDDs (`user` and `item`) should be cached before the cartesian product is evaluated.
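
    For reference, a minimal sketch of the cached variant, reusing the `model` from the test case above; the `.cache()` calls and the extra `count()` actions used to materialize the factor RDDs are additions for illustration:

    ```scala
    // Cache both factor RDDs and materialize them, so the timed
    // cartesian count reads them from memory rather than recomputing them.
    val user = model.userFeatures.cache()
    val item = model.productFeatures.cache()
    user.count()
    item.count()

    val start = System.nanoTime()
    println(user.cartesian(item).count())
    println((System.nanoTime() - start) / 1e9)
    ```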

