Now I have this error:

Exception in thread "main" java.net.ConnectException: Call From telles-samza-master/10.1.0.79 to telles-samza-master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1359)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:671)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1746)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1112)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
    at org.apache.samza.job.yarn.ClientHelper.submitApplication(ClientHelper.scala:111)
    at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:55)
    at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:48)
    at org.apache.samza.job.JobRunner.run(JobRunner.scala:62)
    at org.apache.samza.job.JobRunner$.main(JobRunner.scala:37)
    at org.apache.samza.job.JobRunner.main(JobRunner.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:601)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:696)
    at org.apache.hadoop.ipc.Client$Connection.access$2700(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1458)
    at org.apache.hadoop.ipc.Client.call(Client.java:1377)
    ... 22 more
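For reference, "Connection refused" on telles-samza-master:8020 means nothing is accepting connections on that host and port at all; usually the NameNode is not running there, or fs.defaultFS points at the wrong host or port. A minimal sketch of the relevant client-side setting, assuming the NameNode really is meant to live on telles-samza-master with the default RPC port 8020 (host and port taken from this thread, adjust to the actual cluster):

    <!-- core-site.xml (sketch) -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://telles-samza-master:8020</value>
    </property>

A quick sanity check before resubmitting the job is to run "hdfs dfs -ls hdfs://telles-samza-master:8020/" from the submitting machine; if that also reports connection refused, the problem is on the HDFS side rather than in Samza.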



On Tue, Aug 12, 2014 at 3:39 AM, Yan Fang <[email protected]> wrote:

> Hi Telles,
>
> I think you put the wrong port. Usually, the HDFS port is 8020, not 50070.
> You should put something like:
> hdfs://telles-samza-master:8020/path/to/samza-job-package.tar.gz
> Thanks.
>
> Fang, Yan
> [email protected]
> +1 (206) 849-4108
>
>
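In Samza, the hdfs:// URL above goes into the job's properties file as yarn.package.path. A minimal sketch, with a hypothetical HDFS path standing in for wherever the tarball was actually uploaded:

    # job .properties file (sketch; the /samza/... path is illustrative, not from this thread)
    yarn.package.path=hdfs://telles-samza-master:8020/samza/samza-job-package-0.7.0-dist.tar.gz

The file has to exist at that HDFS path before the job is submitted.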
> On Mon, Aug 11, 2014 at 8:31 PM, Telles Nobrega <[email protected]>
> wrote:
>
> > I tried moving from HDFS to HttpFileSystem. I’m getting the HttpFileSystem
> > not found exception. I have done the steps in the tutorial that Chris
> > pasted below (I had done that before, but I’m not sure what the problem
> > is). It seems that since I have the compiled file on one machine (the
> > resource manager) and I submit it and try to download it from the node
> > managers, they don’t have samza-yarn.jar (I don’t know how to include it,
> > since the run will be done on the resource manager).
> >
> > Can you give me a tip on how to solve this?
> >
> > Thanks in advance.
> >
> > P.S. The folder and the tar.gz of the job are located on one machine only;
> > is that the right way to do it, or do I need to replicate hello-samza on
> > all machines to run it?
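The "HttpFileSystem not found" error usually means the class that handles the http:// scheme is not visible to whichever process resolves yarn.package.path. A minimal sketch of the job-side configuration, assuming the HttpFileSystem class shipped with samza-yarn and a hypothetical HTTP server on the master serving the tarball (host, port, and file name are illustrative, not taken from this thread):

    # job .properties file (sketch)
    fs.http.impl=org.apache.samza.util.hadoop.HttpFileSystem
    yarn.package.path=http://telles-samza-master:8000/samza-job-package-0.7.0-dist.tar.gz

Beyond the config, the jars containing HttpFileSystem (samza-yarn and the Scala library it depends on) also have to be on the Hadoop classpath of every node manager, which is the step the tutorial Chris pasted covers; the tarball itself only needs to live on the machine serving it over HTTP.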
> > On 11 Aug 2014, at 23:12, Telles Nobrega <[email protected]> wrote:
> >
> > > What is your suggestion here: should I keep going on this quest to fix
> > > HDFS, or should I try to run using HttpFileSystem?
> > > On 11 Aug 2014, at 23:01, Telles Nobrega <[email protected]> wrote:
> > >
> > >> Is the port right? 50070. I have no idea what is happening now.
> > >>
> > >> On 11 Aug 2014, at 22:33, Telles Nobrega <[email protected]> wrote:
> > >>
> > >>> Right now the error is the following:
> > >>> Exception in thread "main" java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "telles-samza-master/10.1.0.79"; destination host is: "telles-samza-master":50070;
> > >>>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> > >>>     at org.apache.hadoop.ipc.Client.call(Client.java:1410)
> > >>>     at org.apache.hadoop.ipc.Client.call(Client.java:1359)
> > >>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > >>>     at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
> > >>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >>>     at java.lang.reflect.Method.invoke(Method.java:606)
> > >>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > >>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > >>>     at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
> > >>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:671)
> > >>>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1746)
> > >>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1112)
> > >>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
> > >>>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> > >>>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
> > >>>     at org.apache.samza.job.yarn.ClientHelper.submitApplication(ClientHelper.scala:111)
> > >>>     at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:55)
> > >>>     at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:48)
> > >>>     at org.apache.samza.job.JobRunner.run(JobRunner.scala:62)
> > >>>     at org.apache.samza.job.JobRunner$.main(JobRunner.scala:37)
> > >>>     at org.apache.samza.job.JobRunner.main(JobRunner.scala)
> > >>> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
> > >>>     at com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:94)
> > >>>     at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
> > >>>     at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:202)
> > >>>     at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
> > >>>     at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
> > >>>     at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
> > >>>     at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
> > >>>     at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:2364)
> > >>>     at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1051)
> > >>>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:945)
> > >>>
> > >>> I feel that I’m close to making it run. Thanks for the help in advance.
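For the record, the "Protocol message end-group tag did not match expected tag" error against port 50070 is the classic symptom of speaking the NameNode's binary RPC protocol to its HTTP port: 50070 serves the NameNode web UI, while the RPC endpoint that an hdfs:// URL must point at normally listens on 8020. One way to confirm the actual RPC address from a cluster node (a sketch; assumes the hdfs command is on the PATH):

    hdfs getconf -confKey fs.defaultFS
    # or, if the RPC address is set explicitly:
    hdfs getconf -confKey dfs.namenode.rpc-address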
> > >>> On 11 Aug 2014, at 22:06, Telles Nobrega <[email protected]> wrote:
> > >>>
> > >>>> Hi, I downloaded hadoop-common-2.3.0.jar and it worked better. Now I’m
> > >>>> having a configuration problem with my host, but it looks like HDFS is
> > >>>> not a problem anymore.
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> On 11 Aug 2014, at 22:04, Telles Nobrega <[email protected]> wrote:
> > >>>>
> > >>>>> So, I added hadoop-hdfs-2.3.0.jar as a Maven dependency, recompiled
> > >>>>> the project, and extracted it to deploy/samza, but the problem still
> > >>>>> happens. I downloaded hadoop-client-2.3.0.jar and the problem still
> > >>>>> happens; hadoop-common is 2.2.0, is that a problem? I will try with 2.3.0.
> > >>>>>
> > >>>>> Actually, a lot of the Hadoop jars are 2.2.0.
> > >>>>>
> > >>>>> On 11 Aug 2014, at 21:33, Yan Fang <[email protected]> wrote:
> > >>>>>
> > >>>>>> <include>org.apache.hadoop:hadoop-hdfs</include>
> > >>>>>
> > >>>>
> > >>>
> > >>
> > >
> >
> >
>
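Regarding the <include>org.apache.hadoop:hadoop-hdfs</include> line quoted above: in the hello-samza layout that include goes into the assembly descriptor of the job package, and it only packages a dependency that the pom already declares. A sketch of both pieces, with the version kept in a single property since the thread shows a mix of 2.2.0 and 2.3.0 jars (exact file locations follow hello-samza conventions and are assumptions, not confirmed in this thread):

    <!-- pom.xml -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
    </dependency>

    <!-- src/main/assembly/src.xml, inside the existing <includes> list -->
    <include>org.apache.hadoop:hadoop-hdfs</include>

Keeping hadoop-common, hadoop-hdfs, and hadoop-client on the same version (all 2.2.0 or all 2.3.0) sidesteps the version-mismatch question raised above.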



-- 
------------------------------------------
Telles Mota Vidal Nobrega
M.Sc. Candidate at UFCG
B.Sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG
