Github user dongjoon-hyun commented on the issue:

    https://github.com/apache/spark/pull/13520
  
    Thank you for the review, @rxin and @srowen.
    
    The main rationale of this PR is to make `SparkSession` explicit as the 
starting point for the operations in these examples (instead of `SparkContext`, 
`sc`).
    
    Spark naturally uses `'.'` to build a long sequence of operations, e.g., 
`sc.parallelize().map().reduce()` or 
`spark.createDataFrame().toDF().stat.crosstab().show()`. Before 
`SparkSession`, the starting points were `SparkContext` and 
`Dataset`/`DataFrame`/`RDD`.
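    To make the contrast concrete, here is a minimal sketch of the two 
chaining styles, assuming a Spark shell where both `sc` and `spark` are 
already defined; the RDD and DataFrame contents are illustrative only:

    ```scala
    // Old style: SparkContext (`sc`) as the starting point
    val total = sc.parallelize(1 to 100).map(_ * 2).reduce(_ + _)

    // New style: SparkSession (`spark`) as the starting point
    val df = spark.createDataFrame(Seq((1, "a"), (2, "b"))).toDF("id", "key")
    df.stat.crosstab("id", "key").show()
    ```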
    
    This PR treats `SparkSession` and `Dataset`/`DataFrame`/`RDD` as the 
starting points in these examples and doesn't touch other examples in which 
`sc` is repeated a lot.
    
    The other changes, such as replacing `var` with `val`, are unrelated; I 
can revert them.

