Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/3262#issuecomment-63163240
  
    Ok, I finally went through the code. I like the change and it is pretty 
clever. I believe it preserves both source and binary 
compatibility.
    
    To summarize, the changes are:
    
    1. Deprecated the old implicit conversion functions: this preserves binary 
compatibility for code compiled against earlier versions of Spark.
    2. Removed the `implicit` modifier from them so they are just normal 
functions: this ensures the compiler doesn't get confused by, or warn about, 
multiple competing implicits in scope.
    3. Created new implicit functions in the `rdd` package object, which is 
part of the implicit scope that scalac searches automatically when looking for 
conversions on the various RDD types.
    
    It is still a tricky change so it'd be great to get more eyes.
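
    A minimal sketch of the three steps, using hypothetical names 
(`RDDLike`, `PairOps`, `rddlike`) rather than Spark's actual classes and 
package layout:

    ```scala
    package rddlike {
      class RDDLike[T](val items: Seq[T])

      // Extra operations available only on RDDs of key/value pairs.
      class PairOps[K, V](rdd: RDDLike[(K, V)]) {
        def keys: Seq[K] = rdd.items.map(_._1)
      }

      object RDDLike {
        // Steps 1 and 2: formerly an `implicit def`; now deprecated and
        // non-implicit. Bytecode compiled against the old version still
        // links, but the compiler no longer sees a competing implicit.
        @deprecated("use the implicit in the package object", "1.3.0")
        def toPairOps[K, V](rdd: RDDLike[(K, V)]): PairOps[K, V] =
          new PairOps(rdd)
      }
    }

    // Step 3: the new implicit lives in the package object. Because
    // RDDLike is defined in package `rddlike`, the package object is part
    // of the implicit scope scalac searches automatically, so callers get
    // the conversion without any import.
    package object rddlike {
      implicit def toPairOps[K, V](rdd: RDDLike[(K, V)]): PairOps[K, V] =
        new PairOps(rdd)
    }
    ```

    With this in place, `new rddlike.RDDLike(Seq(("a", 1))).keys` compiles 
via the package-object implicit, while old code calling the deprecated 
method explicitly keeps working.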


