[ https://issues.apache.org/jira/browse/SPARK-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591782#comment-14591782 ]

Arun commented on SPARK-8409:
-----------------------------

I think the reason behind the above error is that the spark-csv package is not 
downloaded or installed. I tried to install the package separately with the 
command below, but it fails as shown in the log. Is there any other way to 
install the package? (See also the notes after the log.)
 
> .\bin\sparkR --packages com.databricks:spark-csv_2.10:1.0.3

R version 3.2.0 (2015-04-16) -- "Full of Ingredients"
Copyright (C) 2015 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Warning: namespace 'SparkR' is not available and has been replaced
by .GlobalEnv when processing object 'df'
[Previously saved workspace restored]

Launching java with spark-submit command E:\setup\spark-1.4.0-bin-hadoop2.6\spark-1.4.0-bin-hadoop2.6\bin\../bin/spark-submit.cmd  "--packages" "com.databricks:spark-csv_2.10:1.0.3" "sparkr-shell"  C:\Users\RAJESH~1.KOD\AppData\Local\Temp\6\RtmpgTFIOz\backend_port987858e35a
Ivy Default Cache set to: C:\Users\rajesh.kodam-v\.ivy2\cache
The jars for the packages stored in: C:\Users\rajesh.kodam-v\.ivy2\jars
:: loading settings :: url = jar:file:/E:/setup/spark-1.4.0-bin-hadoop2.6/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-csv_2.10 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
        confs: [default]
:: resolution report :: resolve 96999ms :: artifacts dl 0ms
        :: modules in use:
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   1   |   0   |   0   |   0   ||   0   |   0   |
        ---------------------------------------------------------------------

:: problems summary ::
:::: WARNINGS
                module not found: com.databricks#spark-csv_2.10;1.0.3

        ==== local-m2-cache: tried

          file:/C:/Users/rajesh.kodam-v/.m2/repository/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.pom

          -- artifact com.databricks#spark-csv_2.10;1.0.3!spark-csv_2.10.jar:

          file:/C:/Users/rajesh.kodam-v/.m2/repository/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.jar

        ==== local-ivy-cache: tried

          -- artifact com.databricks#spark-csv_2.10;1.0.3!spark-csv_2.10.jar:

          file:/C:/Users/rajesh.kodam-v/.ivy2/local/com.databricks\spark-csv_2.10\1.0.3\jars\spark-csv_2.10.jar

        ==== central: tried

          https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.pom

          -- artifact com.databricks#spark-csv_2.10;1.0.3!spark-csv_2.10.jar:

          https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.jar

        ==== spark-packages: tried

          http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.pom

          -- artifact com.databricks#spark-csv_2.10;1.0.3!spark-csv_2.10.jar:

          http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.jar

                ::::::::::::::::::::::::::::::::::::::::::::::

                ::          UNRESOLVED DEPENDENCIES         ::

                ::::::::::::::::::::::::::::::::::::::::::::::

                :: com.databricks#spark-csv_2.10;1.0.3: not found

                ::::::::::::::::::::::::::::::::::::::::::::::


:::: ERRORS
        Server access error at url https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.pom (java.net.ConnectException: Connection timed out: connect)

        Server access error at url https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.jar (java.net.ConnectException: Connection timed out: connect)

        Server access error at url http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.pom (java.net.ConnectException: Connection timed out: connect)

        Server access error at url http://dl.bintray.com/spark-packages/maven/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.jar (java.net.ConnectException: Connection timed out: connect)


:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: com.databricks#spark-csv_2.10;1.0.3: not found]
        at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:978)
        at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:262)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:144)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/06/18 18:47:23 INFO Utils: Shutdown hook called
Error in SparkR::sparkR.init(Sys.getenv("MASTER", unset = "")) :
  JVM is not ready after 10 seconds
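
Two notes on the failure above, both guesses that I cannot verify on this machine: all four errors are "Connection timed out" against repo1.maven.org and dl.bintray.com, so --packages has nothing it can download. If the machine goes out through an HTTP proxy, Ivy should honour the standard Java proxy properties; one way to try passing them (assuming sparkR.cmd forwards the option to the JVM that runs spark-submit, and with placeholder proxy host/port) is:

> .\bin\sparkR --driver-java-options "-Dhttp.proxyHost=myproxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=myproxy.example.com -Dhttps.proxyPort=8080" --packages com.databricks:spark-csv_2.10:1.0.3

If there is no outbound access at all, the jar can be fetched on another machine from the URL already shown in the log (https://repo1.maven.org/maven2/com/databricks/spark-csv_2.10/1.0.3/spark-csv_2.10-1.0.3.jar), copied over, and passed with --jars instead of --packages. spark-csv also depends on commons-csv, so that jar would need to be copied and listed as well; the local paths and the commons-csv version below are placeholders, check the spark-csv 1.0.3 POM for the exact dependency:

> .\bin\sparkR --jars E:\jars\spark-csv_2.10-1.0.3.jar,E:\jars\commons-csv-1.1.jar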


>  In windows cant able to read .csv or .json files using read.df()
> -----------------------------------------------------------------
>
>                 Key: SPARK-8409
>                 URL: https://issues.apache.org/jira/browse/SPARK-8409
>             Project: Spark
>          Issue Type: Bug
>          Components: Build
>    Affects Versions: 1.4.0
>         Environment: sparkR API
>            Reporter: Arun
>            Priority: Critical
>              Labels: build
>
> Hi, 
> In SparkR shell, I invoke: 
> > mydf <- read.df(sqlContext, "/home/esten/ami/usaf.json", source="json", header="false") 
> I have tried various filetypes (csv, txt), all fail. 
> e.g. in SparkR of Spark 1.4: df_1 <- read.df(sqlContext, "E:/setup/spark-1.4.0-bin-hadoop2.6/spark-1.4.0-bin-hadoop2.6/examples/src/main/resources/nycflights13.csv", source = "csv")
> RESPONSE: "ERROR RBackendHandler: load on 1 failed" 
> BELOW THE WHOLE RESPONSE: 
> 15/06/16 08:09:13 INFO MemoryStore: ensureFreeSpace(177600) called with curMem=0, maxMem=278302556 
> 15/06/16 08:09:13 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 173.4 KB, free 265.2 MB) 
> 15/06/16 08:09:13 INFO MemoryStore: ensureFreeSpace(16545) called with curMem=177600, maxMem=278302556 
> 15/06/16 08:09:13 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 16.2 KB, free 265.2 MB) 
> 15/06/16 08:09:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:37142 (size: 16.2 KB, free: 265.4 MB) 
> 15/06/16 08:09:13 INFO SparkContext: Created broadcast 0 from load at NativeMethodAccessorImpl.java:-2 
> 15/06/16 08:09:16 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 
> 15/06/16 08:09:17 ERROR RBackendHandler: load on 1 failed 
> java.lang.reflect.InvocationTargetException 
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
>         at java.lang.reflect.Method.invoke(Method.java:606) 
>         at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:127) 
>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:74) 
>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:36) 
>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) 
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) 
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) 
>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) 
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) 
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) 
>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) 
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) 
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) 
>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) 
>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) 
>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) 
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) 
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) 
>         at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) 
>         at java.lang.Thread.run(Thread.java:745) 
> Caused by: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://smalldata13.hdp:8020/home/esten/ami/usaf.json 
>         at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285) 
>         at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) 
>         at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) 
>         at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207) 
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
>         at scala.Option.getOrElse(Option.scala:120) 
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
>         at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
>         at scala.Option.getOrElse(Option.scala:120) 
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
>         at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) 
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) 
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) 
>         at scala.Option.getOrElse(Option.scala:120) 
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) 
>         at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1069) 
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148) 
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109) 
>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:286) 
>         at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1067) 
>         at org.apache.spark.sql.json.InferSchema$.apply(InferSchema.scala:58) 
>         at org.apache.spark.sql.json.JSONRelation$$anonfun$schema$1.apply(JSONRelation.scala:139) 
>         at org.apache.spark.sql.json.JSONRelation$$anonfun$schema$1.apply(JSONRelation.scala:138) 
>         at scala.Option.getOrElse(Option.scala:120) 
>         at org.apache.spark.sql.json.JSONRelation.schema$lzycompute(JSONRelation.scala:137) 
>         at org.apache.spark.sql.json.JSONRelation.schema(JSONRelation.scala:137) 
>         at org.apache.spark.sql.sources.LogicalRelation.<init>(LogicalRelation.scala:30) 
>         at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:120) 
>         at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1230) 
>         ... 25 more 
> Error: returnStatus == 0 is not TRUE
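
One more observation on the read.df calls quoted above, inferred from the stack trace and not verified here: source = "csv" is not a built-in data source in Spark 1.4, so once the spark-csv package is actually on the classpath the full source name "com.databricks.spark.csv" has to be used; and a bare path like /home/esten/ami/usaf.json is resolved against the cluster's default filesystem (here hdfs://smalldata13.hdp:8020), which is exactly what the "Input path does not exist" message reports, so a file that only exists on the local disk needs an explicit file: URI. A minimal sketch, with the paths from the quoted calls used as placeholders:

  df_json <- read.df(sqlContext, "file:///home/esten/ami/usaf.json", source = "json")
  df_csv <- read.df(sqlContext,
                    "file:///E:/setup/spark-1.4.0-bin-hadoop2.6/spark-1.4.0-bin-hadoop2.6/examples/src/main/resources/nycflights13.csv",
                    source = "com.databricks.spark.csv", header = "true")
  head(df_csv)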


