Moovlin opened a new pull request #29068:
URL: https://github.com/apache/spark/pull/29068


   This is a dead simple change that I lightly tested to determine whether there was 
   actually a performance increase. Turns out, yes there is (at least locally).
   
   
   ### What changes were proposed in this pull request?
   As described in SPARK-27892, the saving and loading of the pipeline stages is done 
   sequentially in the SharedReadWrite object, which makes saving and loading models 
   with many stages quite slow. The change converts the existing stage array into a 
   parallel collection (ParArray) by calling ".par"; when it is passed into the 
   foreach / map phases, the per-stage work is executed in parallel.
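   
   For illustration, here is a minimal sketch of the technique (not the actual 
   SharedReadWrite code; the stage names and the saveStage function below are 
   hypothetical placeholders):
   
   ```scala
   // Minimal sketch of the ".par" technique, assuming Scala 2.12 where the
   // parallel collections ship with the standard library. The stage names and
   // saveStage below are hypothetical placeholders, not Spark code.
   val stages: Array[String] = Array("stage-0", "stage-1", "stage-2")
   
   def saveStage(name: String): Unit = {
     // Stand-in for the per-stage save/load work.
     println(s"saving $name on ${Thread.currentThread().getName}")
   }
   
   // Before: the stages are processed one after another.
   stages.foreach(saveStage)
   
   // After: ".par" converts the Array into a ParArray; foreach/map over it
   // then run the per-stage work on the default ForkJoinPool.
   stages.par.foreach(saveStage)
   ```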
   
   
   ### Why are the changes needed?
   This only serves to speed up save and load times of pipelines with many stages.
   
   ### Does this PR introduce _any_ user-facing change?
   No. 
   
   ### How was this patch tested?
   I tested it by using the example provided in SPARK-27892. I also added a load step 
   to verify that loading worked correctly.
   
   ```scala
   // Run in the spark-shell, so the implicits needed for toDF are already in scope.
   import org.apache.spark.ml._
   import org.apache.spark.ml.feature.VectorAssembler
   
   def time[R](block: => R): R = {
     val start = System.nanoTime()
     val result = block
     val end = System.nanoTime()
     println("elapsed time: " + (end - start) + "ns")
     result
   }
   
   val outputPath = "boopcity"
   val stages = (1 to 100).map { i =>
     new VectorAssembler().setInputCols(Array("input")).setOutputCol("o" + i)
   }
   val p = new Pipeline().setStages(stages.toArray)
   val data = Seq(1, 1, 1).toDF("input")
   val pm = p.fit(data)
   
   val result = time { pm.write.overwrite().save(outputPath) }
   val sameModel = time { PipelineModel.load(outputPath) }
   ```
   
   I compared execution times in the Scala shell, running the save and the load four 
   times each with both the sequential and the parallel version.
   The raw "data":
   
   | Run | Parallel save (ns) | Parallel load (ns) | Sequential save (ns) | Sequential load (ns) |
   |----:|-------------------:|-------------------:|---------------------:|---------------------:|
   | 1   | 11294731300        | 6909773600         | 43879181600          | 27151395000          |
   | 2   | 11211858600        | 15572318900        | 27545280700          | 20363685700          |
   | 3   | 7047186600         | 5430449700         | 24504610300          | 15923967900          |
   | 4   | 6382136700         | 4499734100         | 23568721400          | 15889010200          |
   
   The reported means:
   
   |      | Parallel (ns) | Sequential (ns) |
   |------|--------------:|----------------:|
   | save | 8983978300    | 20029586625     |
   | load | 8103069075    | 19832014700     |
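   
   For reference, simple ratios of the reported means above (not additional 
   measurements) give roughly the following speedups:
   
   ```scala
   // Speedup implied by the reported means (parallel vs. sequential); these are
   // just ratios of the numbers in the table above, nothing re-measured.
   val saveSpeedup = 20029586625.0 / 8983978300.0  // ≈ 2.2x faster save
   val loadSpeedup = 19832014700.0 / 8103069075.0  // ≈ 2.4x faster load
   ```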
   
   The decreasing times across runs are likely due to the data being cached on faster 
   portions of the SSD in my machine, so the averages aren't especially meaningful, but 
   we consistently see that the parallel version is far faster than the sequential one.
    
   Additionally, I ran the entire mllib test suite against the changes with all tests 
   passing. Testing on a cluster is likely worth doing, but I don't have the resources 
   to do that at the moment.
   
   There is no way to add a unit test for this without also adding an extra, 
   non-parallel code path to compare against, which seems like a waste of time & effort.
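   
   That said, a quick manual round-trip check can be run on top of the snippet above 
   (pm and outputPath come from that snippet; this is only a sanity check, not a 
   proposed unit test):
   
   ```scala
   // Hypothetical sanity check reusing pm and outputPath from the snippet above:
   // save, reload, and confirm the reloaded model has the same stage UIDs.
   pm.write.overwrite().save(outputPath)
   val reloaded = PipelineModel.load(outputPath)
   assert(reloaded.stages.map(_.uid).sameElements(pm.stages.map(_.uid)))
   ```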

