[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user yanboliang closed the pull request at: https://github.com/apache/spark/pull/16214 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user yanboliang commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91881854 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,68 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to install third-party R packages to executors +# in your SparkR jobs distributed by "spark.lapply". +# +# Note: This example will install packages to a temporary directory on your machine. +# The directory will be removed automatically when the example exits. +# Your environment should be connected to the internet to run this example; +# otherwise, you should change "repos" to your private repository URL. +# The environment also needs the necessary tools, such as gcc, to compile +# and install the R package "e1071". +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# $example on$ +# The directory where the third-party R packages are installed.
+libDir <- paste0(tempdir(), "/", "Rlib") +dir.create(libDir) + +# Download the e1071 package source to a directory +packagesDir <- paste0(tempdir(), "/", "packages") +dir.create(packagesDir) +download.packages("e1071", packagesDir, repos = "https://cran.r-project.org") +filename <- list.files(packagesDir, "^e1071") +packagesPath <- file.path(packagesDir, filename) +# Add the third-party R package to be downloaded with this Spark job on every node. +spark.addFile(packagesPath) + +path <- spark.getSparkFiles(filename) +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(lib = libDir)) == FALSE) { +install.packages(path, repos = NULL, type = "source") --- End diff -- Yeah, we have the package content, but it's a source package rather than a binary package, so we cannot use ```library``` to load it directly. This is the pain point for this example. If we illustrated this example with a binary package, we would have to provide scripts for different OS versions, and it would require all nodes in the user's cluster to have the same architecture. So I use a source package; I think it makes for a more universal example.
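The point above — that a distributed *source* tarball must be installed before `library()` can load it — can be sketched roughly as follows. This is a sketch only: it assumes SparkR is attached with an active session, and the tarball filename is hypothetical, not taken from the thread.

```r
# Assumption: SparkR is attached and sparkR.session() has been called.
# A per-node library directory to install into (mirrors the example's libDir).
libDir <- file.path(tempdir(), "Rlib")
dir.create(libDir, showWarnings = FALSE)

# Inside the function passed to spark.lapply(), on each executor:
worker <- function(cost) {
  # "e1071_1.6-8.tar.gz" is a hypothetical filename for illustration.
  tarball <- spark.getSparkFiles("e1071_1.6-8.tar.gz")
  if (!"e1071" %in% rownames(installed.packages(lib.loc = libDir))) {
    # A source tarball must be built and installed first; library() alone
    # cannot load it, which is the "pain point" described above.
    install.packages(tarball, lib = libDir, repos = NULL, type = "source")
  }
  library(e1071, lib.loc = libDir)
  summary(svm(Species ~ ., data = iris, cost = cost))
}
```

Passing `lib = libDir` to `install.packages()` keeps the install and the `installed.packages()` check pointing at the same directory, which the original diff leaves implicit.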
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user yanboliang commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91881006 --- Diff: docs/sparkr.md --- @@ -472,21 +472,17 @@ should fit in a single machine. If that is not the case they can do something li `dapply` -{% highlight r %} -# Perform distributed training of multiple models with spark.lapply. Here, we pass -# a read-only list of arguments which specifies family the generalized linear model should be. --- End diff -- Sounds good, updated.
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91765571 --- Diff: docs/sparkr.md --- @@ -472,21 +472,17 @@ should fit in a single machine. If that is not the case they can do something li `dapply` -{% highlight r %} -# Perform distributed training of multiple models with spark.lapply. Here, we pass -# a read-only list of arguments which specifies family the generalized linear model should be. -families <- c("gaussian", "poisson") -train <- function(family) { - model <- glm(Sepal.Length ~ Sepal.Width + Species, iris, family = family) - summary(model) -} -# Return a list of model's summaries -model.summaries <- spark.lapply(families, train) +{% include_example lapply r/ml/ml.R %} + -# Print the summary of each model -print(model.summaries) +# spark.lapply with third-party packages -{% endhighlight %} +Many of the SparkR jobs distributed by `spark.lapply` need supports from third-party packages. Rather than installing all necessary packages to all executors in advance, +we could install them during the SparkR interactive session or script. Users can add package files or directories by `spark.addFile` firstly, +download them to every executor node, and install them. --- End diff -- this kind of sounds like the user will need to separately "download them to executor node" - perhaps instead say "by `spark.addFile` first, which automatically download them to every executor node, and then install them`?
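The wording being suggested ("`spark.addFile` first, which automatically downloads them to every executor node, and then install them") corresponds to a driver-side staging step roughly like the sketch below. It assumes an active SparkR session and internet access; names and paths are illustrative.

```r
library(SparkR)
sparkR.session(appName = "addFile-staging-sketch")  # appName is illustrative

# Stage the package source on the driver...
packagesDir <- file.path(tempdir(), "packages")
dir.create(packagesDir, showWarnings = FALSE)
download.packages("e1071", packagesDir, repos = "https://cran.r-project.org")
tarball <- list.files(packagesDir, "^e1071")[1]

# ...and register it with the job. Spark then ships the file to every
# executor node automatically; no manual per-node download is needed,
# which is the clarification being requested above.
spark.addFile(file.path(packagesDir, tarball))

# Workers later resolve their node-local copy with spark.getSparkFiles(tarball).
```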
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91766006 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,68 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to install third-party R packages to executors +# in your SparkR jobs distributed by "spark.lapply". +# +# Note: This example will install packages to a temporary directory on your machine. +# The directory will be removed automatically when the example exits. +# Your environment should be connected to the internet to run this example; +# otherwise, you should change "repos" to your private repository URL. +# The environment also needs the necessary tools, such as gcc, to compile +# and install the R package "e1071". +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# $example on$ +# The directory where the third-party R packages are installed.
+libDir <- paste0(tempdir(), "/", "Rlib") +dir.create(libDir) + +# Download the e1071 package source to a directory +packagesDir <- paste0(tempdir(), "/", "packages") +dir.create(packagesDir) +download.packages("e1071", packagesDir, repos = "https://cran.r-project.org") +filename <- list.files(packagesDir, "^e1071") +packagesPath <- file.path(packagesDir, filename) +# Add the third-party R package to be downloaded with this Spark job on every node. +spark.addFile(packagesPath) + +path <- spark.getSparkFiles(filename) +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(lib = libDir)) == FALSE) { +install.packages(path, repos = NULL, type = "source") --- End diff -- re: comment [here](https://github.com/apache/spark/pull/16214#discussion_r91562425) — if you already have the package content from the Spark files, you do not need to call `install.packages()`; I think the example would be better without it.
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91765125 --- Diff: docs/sparkr.md --- @@ -472,21 +472,17 @@ should fit in a single machine. If that is not the case they can do something li `dapply` -{% highlight r %} -# Perform distributed training of multiple models with spark.lapply. Here, we pass -# a read-only list of arguments which specifies family the generalized linear model should be. -families <- c("gaussian", "poisson") -train <- function(family) { - model <- glm(Sepal.Length ~ Sepal.Width + Species, iris, family = family) - summary(model) -} -# Return a list of model's summaries --- End diff -- ditto, but minor here
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user yanboliang commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91679010 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,68 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to install third-party R packages to executors +# in your SparkR jobs distributed by "spark.lapply". +# +# Note: This example will install packages to a temporary directory on your machine. +# The directory will be removed automatically when the example exits. +# Your environment should be connected to the internet to run this example; +# otherwise, you should change "repos" to your private repository URL. +# The environment also needs the necessary tools, such as gcc, to compile +# and install the R package "e1071". +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# $example on$ +# The directory where the third-party R packages are installed.
+libDir <- paste0(tempdir(), "/", "Rlib") +dir.create(libDir) + +# Download the e1071 package source to a directory +packagesDir <- paste0(tempdir(), "/", "packages") +dir.create(packagesDir) +download.packages("e1071", packagesDir, repos = "https://cran.r-project.org") +filename <- list.files(packagesDir, "^e1071") +packagesPath <- file.path(packagesDir, filename) +# Add the third-party R package to be downloaded with this Spark job on every node. +spark.addFile(packagesPath) + +path <- spark.getSparkFiles(filename) +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(lib = libDir)) == FALSE) { +install.packages(path, repos = NULL, type = "source") +} +library(e1071) --- End diff -- Yeah, I ran it in standalone mode several times and it all worked well. But I can't guarantee it will always work without more careful testing; maybe I've just been lucky and haven't hit the concurrency issue. I'll figure out a rigorous test to verify it later. Thanks.
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user yanboliang commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91667478 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,80 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to use third-party R packages in your task +# which is distributed by Spark. We support two scenarios: +# - Install packages from CRAN to executors directly. +# - Install packages from local file system to executors. +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# Get the location of the default library +libDir <- .libPaths()[1] --- End diff -- Good suggestion!
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91562472 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,80 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to use third-party R packages in your task +# which is distributed by Spark. We support two scenarios: +# - Install packages from CRAN to executors directly. +# - Install packages from local file system to executors. +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# Get the location of the default library +libDir <- .libPaths()[1] + +# Install third-party R packages from CRAN to executors directly if it does not exist, +# then the packages can be used by the corresponding task. 
+ +# Perform distributed training of multiple models with spark.lapply +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(libDir)) == FALSE) { +install.packages("e1071", repos = "https://cran.r-project.org") +} +library(e1071) +model <- svm(Species ~ ., data = iris, cost = cost) +summary(model) +} +model.summaries <- spark.lapply(costs, train) + +# Print the summary of each model +print(model.summaries) + +# Install third-party R packages from local file system to executors if it does not exist, +# then the packages can be used by the corresponding task. + +# Download the e1071 package source to a directory +packagesDir <- paste0(tempdir(), "/", "packages") +dir.create(packagesDir) +download.packages("e1071", packagesDir, repos = "https://cran.r-project.org") +filename <- list.files(packagesDir, "^e1071") +packagesPath <- file.path(packagesDir, filename) +# Add the third-party R package to be downloaded with this Spark job on every node. +spark.addFile(packagesPath) + +path <- spark.getSparkFiles(filename) +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(libDir)) == FALSE) { +install.packages(path, repos=NULL, type="source") --- End diff -- https://stat.ethz.ch/R-manual/R-devel/library/base/html/library.html
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91562425 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,80 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to use third-party R packages in your task +# which is distributed by Spark. We support two scenarios: +# - Install packages from CRAN to executors directly. +# - Install packages from local file system to executors. +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# Get the location of the default library +libDir <- .libPaths()[1] + +# Install third-party R packages from CRAN to executors directly if it does not exist, +# then the packages can be used by the corresponding task. 
+ +# Perform distributed training of multiple models with spark.lapply +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(libDir)) == FALSE) { +install.packages("e1071", repos = "https://cran.r-project.org") +} +library(e1071) +model <- svm(Species ~ ., data = iris, cost = cost) +summary(model) +} +model.summaries <- spark.lapply(costs, train) + +# Print the summary of each model +print(model.summaries) + +# Install third-party R packages from local file system to executors if it does not exist, +# then the packages can be used by the corresponding task. + +# Download the e1071 package source to a directory +packagesDir <- paste0(tempdir(), "/", "packages") +dir.create(packagesDir) +download.packages("e1071", packagesDir, repos = "https://cran.r-project.org") +filename <- list.files(packagesDir, "^e1071") +packagesPath <- file.path(packagesDir, filename) +# Add the third-party R package to be downloaded with this Spark job on every node. +spark.addFile(packagesPath) + +path <- spark.getSparkFiles(filename) +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(libDir)) == FALSE) { +install.packages(path, repos=NULL, type="source") --- End diff -- Although if this is an example of how an R package can be distributed, I wouldn't call `install.packages` here, because of secure locations and duplication. Instead, this could do `library(e1071, lib.loc = path)`, i.e. the package doesn't need to be "installed" to be loaded.
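The alternative proposed in this comment — loading without calling `install.packages` — only applies once the package exists in *built* form under some library path; a source tarball (as used elsewhere in this thread) still needs one build step. A base-R sketch of the load-by-path idea, assuming CRAN access and build tools on the machine:

```r
# Build e1071 into a private library directory once; this is the only
# "install" that ever happens.
libDir <- file.path(tempdir(), "Rlib")
dir.create(libDir, showWarnings = FALSE)
install.packages("e1071", lib = libDir, repos = "https://cran.r-project.org")

# From then on the package loads purely by path: if libDir were shipped
# to executors as-is, workers could load it without install.packages().
library(e1071, lib.loc = libDir)
model <- svm(Species ~ ., data = iris)
```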
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91561992 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,80 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to use third-party R packages in your task +# which is distributed by Spark. We support two scenarios: +# - Install packages from CRAN to executors directly. +# - Install packages from local file system to executors. +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# Get the location of the default library +libDir <- .libPaths()[1] + +# Install third-party R packages from CRAN to executors directly if it does not exist, +# then the packages can be used by the corresponding task. 
+ +# Perform distributed training of multiple models with spark.lapply +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(libDir)) == FALSE) { +install.packages("e1071", repos = "https://cran.r-project.org") +} +library(e1071) +model <- svm(Species ~ ., data = iris, cost = cost) +summary(model) +} +model.summaries <- spark.lapply(costs, train) + +# Print the summary of each model +print(model.summaries) + +# Install third-party R packages from local file system to executors if it does not exist, +# then the packages can be used by the corresponding task. + +# Download the e1071 package source to a directory +packagesDir <- paste0(tempdir(), "/", "packages") +dir.create(packagesDir) +download.packages("e1071", packagesDir, repos = "https://cran.r-project.org") +filename <- list.files(packagesDir, "^e1071") +packagesPath <- file.path(packagesDir, filename) +# Add the third-party R package to be downloaded with this Spark job on every node. +spark.addFile(packagesPath) + +path <- spark.getSparkFiles(filename) +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(libDir)) == FALSE) { --- End diff -- ditto
[GitHub] spark pull request #16214: [SPARK-18325][SPARKR] Add example for using nativ...
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/16214#discussion_r91561800 --- Diff: examples/src/main/r/native-r-package.R --- @@ -0,0 +1,80 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# This example illustrates how to use third-party R packages in your task +# which is distributed by Spark. We support two scenarios: +# - Install packages from CRAN to executors directly. +# - Install packages from local file system to executors. +# +# To run this example use +# ./bin/spark-submit examples/src/main/r/native-r-package.R + +# Load SparkR library into your R session +library(SparkR) + +# Initialize SparkSession +sparkR.session(appName = "SparkR-native-r-package-example") + +# Get the location of the default library +libDir <- .libPaths()[1] + +# Install third-party R packages from CRAN to executors directly if it does not exist, +# then the packages can be used by the corresponding task. 
+ +# Perform distributed training of multiple models with spark.lapply +costs <- exp(seq(from = log(1), to = log(1000), length.out = 5)) +train <- function(cost) { +if("e1071" %in% rownames(installed.packages(libDir)) == FALSE) { --- End diff -- I'd prefer `installed.packages(lib = libDir)` to be more clear
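On the `installed.packages(lib = libDir)` suggestion: the formal argument is actually `lib.loc` (`lib` only resolves through R's partial matching), so spelling it out fully is clearer still. A small base-R illustration of the check:

```r
# An empty library directory: nothing is installed there yet.
libDir <- file.path(tempdir(), "Rlib-check")
dir.create(libDir, showWarnings = FALSE)

# installed.packages(lib.loc = ...) names the argument explicitly;
# installed.packages(libDir) relies on position, and (lib = libDir)
# on partial matching -- both are easy to misread.
if (!"e1071" %in% rownames(installed.packages(lib.loc = libDir))) {
  message("e1071 is not installed under ", libDir)
}
```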