Glad to hear you got it working! Can you share what your solution was in
case it helps others?

On Fri, Jul 9, 2021, 10:20 Christine Buss <[email protected]> wrote:

>
> It works!! Thanks a lot to everyone!
> I worked through all your hints and suggestions.
>
> *Sent:* Thursday, July 8, 2021 at 18:18
> *From:* "Ed Coleman" <[email protected]>
> *To:* [email protected]
> *Subject:* Re: Hadoop ConnectException
>
> According to the Hadoop getting started guide
> (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html),
> the resource manager runs at http://localhost:8088/
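>
> You can confirm it is up with, for example (a quick check, assuming the
> default port):
>
>     # YARN ResourceManager REST endpoint; returns cluster info as JSON
>     curl -s http://localhost:8088/ws/v1/cluster/info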
>
> Can you run hadoop commands like:
> > hadoop fs -ls /accumulo (or whatever you've decided on as the destination for files)
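>
> For example (assuming HDFS is up and using the /accumulo path from your
> log; adjust to your setup):
>
>     # list the filesystem root to check that the namenode answers
>     hadoop fs -ls /
>     # report live datanodes; a failure here means HDFS itself is unreachable
>     hdfs dfsadmin -report
>
> If these fail with the same "Connection refused", the problem is on the
> Hadoop side rather than in Accumulo.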
>
> Did you check that accumulo-env.sh and the other configuration files have
> been set up for your environment?
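>
> In particular, the HDFS URI in conf/accumulo.properties must match
> fs.defaultFS in Hadoop's core-site.xml. A minimal sketch, assuming the
> single-node defaults from the guide (adjust host and port to your setup):
>
>     # conf/accumulo.properties
>     # must use the same URI (host and port) as fs.defaultFS in core-site.xml
>     instance.volumes=hdfs://localhost:9000/accumulo
>     # where ZooKeeper is listening
>     instance.zookeeper.host=localhost:2181
>
> Note that the log below reports the Hadoop filesystem as
> hdfs://localhost:9000 but the Accumulo data dirs as
> hdfs://localhost:8020/accumulo, so the two ports disagree, and 8020 is
> exactly the connection being refused.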
>
>
> On 2021/07/07 15:20:41, Christine Buss <[email protected]> wrote:
> > Hi,
> >
> >
> >
> > I am using:
> >
> > Java 11
> >
> > Ubuntu 20.04.2
> >
> > Hadoop 3.3.1
> >
> > Zookeeper 3.7.0
> >
> > Accumulo 2.0.1
> >
> >
> >
> >
> >
> > I followed the instructions here:
> >
> > https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
> >
> > and edited `etc/hadoop/hadoop-env.sh`, `etc/hadoop/core-site.xml`, and
> > `etc/hadoop/hdfs-site.xml` accordingly.
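> >
> > Following the guide, core-site.xml and hdfs-site.xml contain (these are
> > the guide's example values):
> >
> >     <!-- etc/hadoop/core-site.xml -->
> >     <configuration>
> >       <property>
> >         <name>fs.defaultFS</name>
> >         <value>hdfs://localhost:9000</value>
> >       </property>
> >     </configuration>
> >
> >     <!-- etc/hadoop/hdfs-site.xml -->
> >     <configuration>
> >       <property>
> >         <name>dfs.replication</name>
> >         <value>1</value>
> >       </property>
> >     </configuration>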
> >
> > 'ssh localhost' works without a passphrase.
> >
> >
> >
> > Then I started ZooKeeper, start-dfs.sh, and start-yarn.sh:
> >
> > christine@centauri:~$ ./zookeeper-3.4.9/bin/zkServer.sh start
> > ZooKeeper JMX enabled by default
> > Using config: /home/christine/zookeeper-3.4.9/bin/../conf/zoo.cfg
> > Starting zookeeper ... STARTED
> > christine@centauri:~$ ./hadoop-3.3.1/sbin/start-dfs.sh
> > Starting namenodes on [localhost]
> > Starting datanodes
> > Starting secondary namenodes [centauri]
> > centauri: Warning: Permanently added 'centauri,2003:d4:771c:3b00:7223:40a1:4c07:7c7b' (ECDSA) to the list of known hosts.
> > christine@centauri:~$ ./hadoop-3.3.1/sbin/start-yarn.sh
> > Starting resourcemanager
> > Starting nodemanagers
> > christine@centauri:~$ jps
> > 3921 Jps
> > 2387 QuorumPeerMain
> > 3171 SecondaryNameNode
> > 3732 NodeManager
> > 2955 DataNode
> > 3599 ResourceManager
> >
> >
> >
> > But when running 'accumulo init' I get this error:
> >
> > christine@centauri:~$ ./accumulo-2.0.1/bin/accumulo init
> > OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
> > 2021-07-07 15:59:05,590 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/christine/accumulo-2.0.1/conf/accumulo.properties
> > 2021-07-07 15:59:08,460 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Hadoop Filesystem is hdfs://localhost:9000
> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Accumulo data dirs are [hdfs://localhost:8020/accumulo]
> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Zookeeper server is localhost:2181
> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
> > 2021-07-07 15:59:08,938 [init.Initialize] ERROR: Fatal exception
> > java.io.IOException: Failed to check if filesystem already initialized
> >     at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)
> >     at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)
> >     at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)
> >     at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> >     at java.base/java.lang.Thread.run(Thread.java:829)
> > Caused by: java.net.ConnectException: Call From centauri/192.168.178.30 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> >     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >     at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >     at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> >     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)
> >     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:828)
> >     at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1519)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1416)
> >     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
> >     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
> >     at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
> >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:965)
> >     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> >     at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
> >     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1731)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1752)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
> >     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)
> >     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)
> >     at org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)
> >     at org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)
> >     at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)
> >     ... 4 more
> > Caused by: java.net.ConnectException: Connection refused
> >     at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >     at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
> >     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> >     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)
> >     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
> >     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)
> >     at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:414)
> >     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1647)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1463)
> >     ... 28 more
> > 2021-07-07 15:59:08,944 [start.Main] ERROR: Thread 'init' died.
> > java.lang.RuntimeException: java.io.IOException: Failed to check if filesystem already initialized
> >     at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:997)
> >     at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> >     at java.base/java.lang.Thread.run(Thread.java:829)
> > Caused by: java.io.IOException: Failed to check if filesystem already initialized
> >     at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)
> >     at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)
> >     at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)
> >     ... 2 more
> > Caused by: java.net.ConnectException: Call From centauri/192.168.178.30 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> >     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >     at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >     at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> >     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)
> >     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:828)
> >     at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1519)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1416)
> >     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
> >     at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
> >     at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
> >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:965)
> >     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> >     at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
> >     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1731)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1752)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
> >     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)
> >     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)
> >     at org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)
> >     at org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)
> >     at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)
> >     ... 4 more
> > Caused by: java.net.ConnectException: Connection refused
> >     at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >     at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
> >     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> >     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)
> >     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
> >     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)
> >     at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:414)
> >     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1647)
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1463)
> >     ... 28 more
> >
> >
> >
> >
> >
> > I am not able to find the mistake. I found similar questions on
> > Stack Overflow, but none of them solved my problem.
> >
> > Thanks in advance for any ideas.
> >
> >
>
