This thread could be related:
http://search-hadoop.com/m/JW1q592kqi&subj=Re+spark+shell+working+in+scala+2+11+breaking+change+
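
Also, judging from the stack trace quoted below, the connection attempt happens in
EventLoggingListener while the SparkContext is starting up, so the event-log settings
are worth a look. Assuming that is indeed the cause (a guess from the trace, not
something I've verified), a quick test is to disable event logging at submit time;
spark.eventLog.enabled is a standard Spark property, and the rest of the command is
unchanged from yours:

  ./spark-submit --conf spark.eventLog.enabled=false \
    --name MyKafkaWordCount --master local[20] \
    --executor-memory 512M --total-executor-cores 2 \
    --class spark.examples.streaming.MyKafkaWordCount my.kakfa.wordcountjar

If the job starts after that, point spark.eventLog.dir at a local path (or bring up
HDFS on hadoop.master:9000) for a permanent fix.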

On Mon, Feb 23, 2015 at 7:08 PM, Silvio Fiorito <silvio.fior...@granturing.com> wrote:

>  Looks like your Spark config may be trying to log to an HDFS path. Can
> you review your config settings?
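>
> For reference, a sketch (not your actual file, just a guess at it) of the
> spark-defaults.conf entries that would produce this kind of call path:
>
>   spark.eventLog.enabled  true
>   spark.eventLog.dir      hdfs://hadoop.master:9000/spark-events
>
> Pointing spark.eventLog.dir at a local path, or disabling event logging
> altogether, should stop Spark from contacting the NameNode at startup.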
>
>  *From:* bit1...@163.com
> *Sent:* Monday, February 23, 2015 9:54 PM
> *To:* yuzhihong <yuzhih...@gmail.com>
> *Cc:* user@spark.apache.org
>
>  [hadoop@hadoop bin]$ sh submit.log.streaming.kafka.complicated.sh
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> Start to run MyKafkaWordCount
> Exception in thread "main" java.net.ConnectException: Call From hadoop.master/192.168.26.137 to hadoop.master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1414)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1363)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>     at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
>     at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
>     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
>     at org.apache.spark.util.FileLogger.createLogDir(FileLogger.scala:123)
>     at org.apache.spark.util.FileLogger.start(FileLogger.scala:115)
>     at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:74)
>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:353)
>     at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:571)
>     at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:74)
>     at spark.examples.streaming.MyKafkaWordCount$.main(MyKafkaWordCount.scala:14)
>     at spark.examples.streaming.MyKafkaWordCount.main(MyKafkaWordCount.scala)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.net.ConnectException: Connection refused
>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
>     at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1381)
>     ... 32 more
>
>
>  ------------------------------
>  bit1...@163.com
>
>
>  *From:* Ted Yu <yuzhih...@gmail.com>
> *Date:* 2015-02-24 10:24
> *To:* bit1...@163.com
> *CC:* user <user@spark.apache.org>
> *Subject:* Re: Does Spark Streaming depend on Hadoop?
> Can you pastebin the whole stack trace?
>
>  Thanks
>
>
>
> On Feb 23, 2015, at 6:14 PM, "bit1...@163.com" <bit1...@163.com> wrote:
>
>   Hi,
>
>  When I submit a Spark Streaming application with the following script:
>
> ./spark-submit --name MyKafkaWordCount --master local[20] \
>   --executor-memory 512M --total-executor-cores 2 \
>   --class spark.examples.streaming.MyKafkaWordCount my.kakfa.wordcountjar
>
>  An exception occurs:
>
> Exception in thread "main" java.net.ConnectException: Call From hadoop.master/192.168.26.137 to hadoop.master:9000 failed on connection exception.
>
> From the exception, it looks like the application tries to connect to port 9000, which is used by Hadoop/HDFS, yet I don't use Hadoop at all in my code (for example, saving to HDFS).
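>
> (My guess is that Spark picks up hdfs://hadoop.master:9000 from a Hadoop
> core-site.xml on the classpath rather than from my code. The usual entry
> looks like the sketch below, though I haven't verified that this is where
> the address comes from:
>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://hadoop.master:9000</value>
>   </property>
> )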
>
>
>
>  ------------------------------
>  bit1...@163.com
>
>
