[ https://issues.apache.org/jira/browse/SPARK-12625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081702#comment-15081702 ]
Reynold Xin commented on SPARK-12625:
-------------------------------------

cc [~shivaram] [~felixcheung] Can one of you fix the above? I might just disable the R test first in order to get that in.

> SparkR is using deprecated APIs
> -------------------------------
>
>                 Key: SPARK-12625
>                 URL: https://issues.apache.org/jira/browse/SPARK-12625
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR, SQL
>            Reporter: Reynold Xin
>            Priority: Blocker
>
> See the test failure for the following pull request:
> https://github.com/apache/spark/pull/10559
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48642/consoleFull
> SparkR is using a lot of deprecated APIs.
> {code}
> ========================================================================
> Running SparkR tests
> ========================================================================
> Loading required package: methods
>
> Attaching package: 'SparkR'
>
> The following object is masked from 'package:testthat':
>
>     describe
>
> The following objects are masked from 'package:stats':
>
>     cov, filter, lag, na.omit, predict, sd, var
>
> The following objects are masked from 'package:base':
>
>     colnames, colnames<-, intersect, rank, rbind, sample, subset,
>     summary, table, transform
>
> SerDe functionality : ...................
> functions on binary files : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ....
> binary functions : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ...........
> broadcast variables : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ..
> functions in client.R : .....
> test functions in sparkR.R : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ..........
> include an external JAR in SparkContext : ..
> include R packages : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> MLlib functions : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ............
> parallelize() and collect() : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> .............................
> basic RDD functions : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ...........................................................................................................................................................................................................................................................................................................................................................................................................................................
> partitionBy, groupByKey, reduceByKey etc. : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ....................
> SparkSQL functions : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> .......................................................1...................................................................................2.3....4......................................................................................................................5............6.....................................................................................................................................7............................................................................8........................................................................
> tests RDD function take() : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> ................
> the textFile() function : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> .............
> functions in utils.R : Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> .......................
> 1. Error: create DataFrame from RDD --------------------------------------------
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: insertInto(df, "people") at test_sparkSQL.R:174
> 5: insertInto(df, "people")
> 6: .local(x, tableName, ...)
> 7: callJMethod(x@sdf, "insertInto", tableName, overwrite)
> 8: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> 9: stop(readString(conn))
> 2. Error: read/write json files ------------------------------------------------
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: write.df(df, jsonPath2, "json", mode = "overwrite") at test_sparkSQL.R:404
> 5: write.df(df, jsonPath2, "json", mode = "overwrite")
> 6: .local(df, path, ...)
> 7: callJMethod(df@sdf, "save", source, jmode, options)
> 8: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> 9: stop(readString(conn))
> 3. Error: jsonRDD() on a RDD with json string ----------------------------------
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: jsonRDD(sqlContext, rdd) at test_sparkSQL.R:426
> 5: callJMethod(sqlContext, "jsonRDD", callJMethod(getJRDD(rdd), "rdd"), samplingRatio)
> 6: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> 7: stop(readString(conn))
> 4. Error: insertInto() on a registered table -----------------------------------
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: write.df(df, parquetPath, "parquet", "overwrite") at test_sparkSQL.R:465
> 5: write.df(df, parquetPath, "parquet", "overwrite")
> 6: .local(df, path, ...)
> 7: callJMethod(df@sdf, "save", source, jmode, options)
> 8: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> 9: stop(readString(conn))
> 5. Error: subsetting -----------------------------------------------------------
> error in evaluating the argument 'i' in selecting a method for function '[':
> Calls: %in% -> %in% -> callJMethod -> invokeJava
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: df[df$age %in% c(19, 30), 1:2] at test_sparkSQL.R:844
> 6. Error: test HiveContext -----------------------------------------------------
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: saveAsTable(df, "json2", "json", "append", path = jsonPath2) at test_sparkSQL.R:905
> 5: saveAsTable(df, "json2", "json", "append", path = jsonPath2)
> 6: callJMethod(df@sdf, "saveAsTable", tableName, source, jmode, options)
> 7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> 8: stop(readString(conn))
> 7. Error: filter() on a DataFrame ----------------------------------------------
> error in evaluating the argument 'condition' in selecting a method for function 'where':
> Calls: %in% -> %in% -> callJMethod -> invokeJava
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: where(df, df$age %in% c(19)) at test_sparkSQL.R:1260
> 8. Error: read/write Parquet files ---------------------------------------------
> 1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"))
> 2: eval(code, new_test_environment)
> 3: eval(expr, envir, enclos)
> 4: write.df(df, parquetPath, "parquet", mode = "overwrite") at test_sparkSQL.R:1472
> 5: write.df(df, parquetPath, "parquet", mode = "overwrite")
> 6: .local(df, path, ...)
> 7: callJMethod(df@sdf, "save", source, jmode, options)
> 8: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> 9: stop(readString(conn))
> Error: Test failures
> Execution halted
> {code}
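For whoever picks this up: errors 1 through 4, 6, and 8 all bottom out in JVM-side methods that SparkR reaches through {{callJMethod}} ({{DataFrame.insertInto(tableName, overwrite)}}, {{DataFrame.save(source, mode, options)}}, {{DataFrame.saveAsTable(tableName, source, mode, options)}}, {{SQLContext.jsonRDD}}), all deprecated since the {{DataFrameWriter}}/{{DataFrameReader}} API arrived in 1.4 and presumably removed by the PR above. A minimal Scala sketch of the old calls next to their writer/reader equivalents; the variable names and paths below are illustrative, not taken from the test suite:

{code}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

object DeprecatedApiSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("sketch"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(Seq(("Michael", 29), ("Andy", 30))).toDF("name", "age")
    val jsonLines = sc.parallelize(Seq("""{"name": "Justin", "age": 19}"""))

    // Deprecated calls, roughly what the R layer invokes via callJMethod:
    //   df.save("json", SaveMode.Overwrite, options)               // errors 2, 4, 8
    //   df.insertInto("people", false)                             // error 1
    //   df.saveAsTable("json2", "json", SaveMode.Append, options)  // error 6
    //   sqlContext.jsonRDD(jsonLines, 1.0)                         // error 3

    // DataFrameWriter / DataFrameReader equivalents, available since 1.4:
    df.write.format("json").mode(SaveMode.Overwrite).save("/tmp/people-json")
    val parsed = sqlContext.read.json(jsonLines)
    parsed.show()
    // For the table-based paths (target table must already exist):
    //   df.write.mode(SaveMode.Append).insertInto("people")
    //   df.write.format("json").mode(SaveMode.Append).saveAsTable("json2")

    sc.stop()
  }
}
{code}

On the R side the fix would presumably be to have {{insertInto}}, {{write.df}} and {{saveAsTable}} obtain a writer via {{callJMethod(df@sdf, "write")}} and chain {{format}}/{{mode}}/{{save}} on it instead of hitting the removed {{DataFrame}} methods directly; the exact plumbing is a guess here.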
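Errors 5 and 7 are the same pattern on the {{Column}} side: the R {{%in%}} operator goes through {{callJMethod}} to a JVM {{Column}} method, presumably {{in}}, which was deprecated in 1.5 in favor of {{isin}} (the binding name is inferred from the {{%in% -> %in% -> callJMethod}} call chain in the traces, not confirmed). Continuing the sketch above:

{code}
// Column.in was deprecated in Spark 1.5 and slated for removal; isin replaces it.
//   df.filter(df("age").in(19, 30))             // old, deprecated
val adults = df.filter(df("age").isin(19, 30))   // replacement, since 1.5
adults.show()
{code}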
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org