Re: HDFS Rest Service not available

2015-06-02 Thread Su She
Ahh, this did the trick. However, I had to get the NameNode out of safe mode
before it fully worked.
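
(For anyone else who hits this: assuming the hdfs CLI is on your PATH, safe
mode can be checked and cleared with

hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave

run as the hdfs user if your cluster restricts admin commands to it.)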

Thanks!

On Tue, Jun 2, 2015 at 12:09 AM, Akhil Das  wrote:
> It says your NameNode is down (connection refused on 8020). You can restart
> HDFS by going into the Hadoop directory and running sbin/stop-dfs.sh followed
> by sbin/start-dfs.sh.
>
> Thanks
> Best Regards
>

Re: HDFS Rest Service not available

2015-06-02 Thread Akhil Das
It says your NameNode is down (connection refused on 8020). You can restart
HDFS by going into the Hadoop directory and running sbin/stop-dfs.sh followed
by sbin/start-dfs.sh.
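
(A minimal sketch of that, assuming a stock Apache Hadoop layout with
HADOOP_HOME pointing at the install directory:

cd $HADOOP_HOME
sbin/stop-dfs.sh
sbin/start-dfs.sh

On a packaged CDH install the equivalent is usually the init scripts instead,
e.g. sudo service hadoop-hdfs-namenode restart.)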

Thanks
Best Regards

On Tue, Jun 2, 2015 at 5:03 AM, Su She  wrote:

> Hello All,
>
> A bit scared I did something stupid... I killed a few PIDs that were
> listening on ports 2183 (Kafka) and 4042 (a Spark app). Some of the PIDs
> don't even seem to have stopped, as they still show up when I run
>
> lsof -i:[port number]
>
> I'm not sure whether the problem started before or after these kill
> commands, but I now can't connect to HDFS or start Spark, and I can't
> access Hue. I'm afraid I accidentally killed an important process related
> to HDFS, but I'm not sure which one it would be, since I couldn't even
> kill the PIDs.
>
> Is it a coincidence that HDFS failed, or is it likely that I killed an
> important PID? How can I restart HDFS?
>
> Thanks a lot!
>

HDFS Rest Service not available

2015-06-01 Thread Su She
Hello All,

A bit scared I did something stupid... I killed a few PIDs that were
listening on ports 2183 (Kafka) and 4042 (a Spark app). Some of the PIDs
don't even seem to have stopped, as they still show up when I run

lsof -i:[port number]

I'm not sure whether the problem started before or after these kill
commands, but I now can't connect to HDFS or start Spark, and I can't
access Hue. I'm afraid I accidentally killed an important process related
to HDFS, but I'm not sure which one it would be, since I couldn't even
kill the PIDs.

Is it a coincidence that HDFS failed, or is it likely that I killed an
important PID? How can I restart HDFS?
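
(One quick way to check whether the NameNode process itself is still alive,
assuming the JDK's jps tool is installed on the box:

jps -l | grep -i namenode   # list running Java processes; the NameNode should show up here
sudo lsof -i :8020          # see whether anything is listening on the NameNode RPC port
)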

Thanks a lot!

Error on Hue:

Cannot access: /user/ec2-user. The HDFS REST service is not available.
Note: You are a Hue admin but not a HDFS superuser (which is "hdfs").

HTTPConnectionPool(host='ec2-ip-address.us-west-1.compute.amazonaws.com',
port=50070): Max retries exceeded with url:
/webhdfs/v1/user/ec2-user?op=GETFILESTATUS&user.name=hue&doas=ec2-user
(Caused by : [Errno 111] Connection refused)
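
(The WebHDFS endpoint Hue is calling can also be probed directly with curl to
confirm whether the NameNode's HTTP interface is up at all; <namenode-host>
below is a placeholder:

curl -i "http://<namenode-host>:50070/webhdfs/v1/user/ec2-user?op=GETFILESTATUS&user.name=hue"

If curl also gets connection refused, the problem is the NameNode itself
rather than Hue.)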

Error when I try to open spark-shell or a Spark app:
java.net.ConnectException: Call From
ip-10-0-2-216.us-west-1.compute.internal/10.0.2.216 to
ip-10-0-2-216.us-west-1.compute.internal:8020 failed on connection
exception: java.net.ConnectException: Connection refused; For more
details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:744)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1921)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
at org.apache.spark.util.FileLogger.createLogDir(FileLogger.scala:123)
at org.apache.spark.util.FileLogger.start(FileLogger.scala:115)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:74)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:353)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:986)
at $iwC$$iwC.<init>(<console>:9)
at $iwC.<init>(<console>:18)
at <init>(<console>:20)
at .<init>(<console>:24)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:852)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1125)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:674)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:705)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:669)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:828)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:873)
at org.apache.spark.repl.SparkILoop.command(SparkIL