Felix Cheung created SPARK-19460:
------------------------------------

             Summary: Update the dataset used in R documentation and examples 
to reduce warning noise and confusion
                 Key: SPARK-19460
                 URL: https://issues.apache.org/jira/browse/SPARK-19460
             Project: Spark
          Issue Type: Bug
          Components: SparkR
    Affects Versions: 2.1.0
            Reporter: Felix Cheung


Running the build produces a number of warnings from using the `iris` dataset, 
for example:

Warning in FUN(X[[1L]], ...) :
Use Sepal_Length instead of Sepal.Length as column name
Warning in FUN(X[[2L]], ...) :
Use Sepal_Width instead of Sepal.Width as column name
Warning in FUN(X[[3L]], ...) :
Use Petal_Length instead of Petal.Length as column name
Warning in FUN(X[[4L]], ...) :
Use Petal_Width instead of Petal.Width as column name

These warnings are the result of having `.` in the column names. For reference, 
see SPARK-12191 and SPARK-11976. Since fixing this would involve changing SQL, 
if we can't support `.` there, then we should strongly consider switching to a 
dataset without `.` in its column names, eg. `cars`
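A minimal sketch of the difference (assuming a local SparkR session; column 
listings shown in comments are what SparkR 2.x reports):

```r
library(SparkR)
sparkR.session()

# `iris` has dots in its column names (e.g. Sepal.Length), which SparkR
# rewrites to underscores, emitting one warning per renamed column:
df <- createDataFrame(iris)
columns(df)
# "Sepal_Length" "Sepal_Width" "Petal_Length" "Petal_Width" "Species"

# `cars` has no dots in its column names, so no warnings are raised:
df2 <- createDataFrame(cars)
columns(df2)
# "speed" "dist"
```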

We should also update this in the API docs (roxygen2 doc strings), vignettes, 
programming guide, and R code examples.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
