GitHub user felixcheung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13751#discussion_r67655933
  
    --- Diff: docs/sparkr.md ---
    @@ -113,16 +108,15 @@ head(df)
     
     ### From Data Sources
     
    -SparkR supports operating on a variety of data sources through the `DataFrame` interface. This section describes the general methods for loading and saving data using Data Sources. You can check the Spark SQL programming guide for more [specific options](sql-programming-guide.html#manually-specifying-options) that are available for the built-in data sources.
    +SparkR supports operating on a variety of data sources through the `SparkDataFrame` interface. This section describes the general methods for loading and saving data using Data Sources. You can check the Spark SQL programming guide for more [specific options](sql-programming-guide.html#manually-specifying-options) that are available for the built-in data sources.
     
    -The general method for creating DataFrames from data sources is `read.df`. This method takes in the `SQLContext`, the path for the file to load and the type of data source. SparkR supports reading JSON, CSV and Parquet files natively and through [Spark Packages](http://spark-packages.org/) you can find data source connectors for popular file formats like [Avro](http://spark-packages.org/package/databricks/spark-avro). These packages can either be added by
    +The general method for creating SparkDataFrames from data sources is `read.df`. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically. SparkR supports reading JSON, CSV and Parquet files natively and through [Spark Packages](http://spark-packages.org/) you can find data source connectors for popular file formats like [Avro](http://spark-packages.org/package/databricks/spark-avro). These packages can either be added by
     specifying `--packages` with `spark-submit` or `sparkR` commands, or if creating context through `init`
     you can specify the packages with the `packages` argument.
     
     <div data-lang="r" markdown="1">
     {% highlight r %}
    -sc <- sparkR.init(sparkPackages="com.databricks:spark-avro_2.11:2.0.1")
    -sqlContext <- sparkRSQL.init(sc)
    +sc <- sparkR.session(sparkPackages="com.databricks:spark-avro_2.11:2.0.1")
    --- End diff --
    
    I guess their goal is to have it out when Spark 2.0.0 is released, which is also when the published latest docs are updated, so let's change this to `3.0.0`.
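
    Assuming `3.0.0` here refers to the spark-avro artifact version (the release line built against Spark 2.0), the snippet above would presumably end up as the following sketch:

    ```r
    # Hypothetical updated line: spark-avro bumped from 2.0.1 to 3.0.0,
    # matching the Spark 2.0.0 release the published docs will ship with.
    sparkR.session(sparkPackages = "com.databricks:spark-avro_2.11:3.0.0")
    ```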
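
    Separately, since this hunk also reworks the `read.df` prose, a minimal sketch of the new flow it describes: no SQLContext handle is passed, and the currently active SparkSession is picked up automatically. The JSON path below is illustrative only:

    ```r
    # Start (or reuse) the active SparkSession; no SQLContext is needed.
    sparkR.session()

    # read.df uses the active session implicitly; pass only the file path
    # and the data source type. The path here is illustrative.
    people <- read.df("examples/src/main/resources/people.json", source = "json")
    head(people)
    ```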

