Use jps -m to check which processes you have running.
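For example, on a working single-node setup the output should look roughly like
this (pids and arguments will differ; the Accumulo services should show up as
'Main <service>' since they are launched through org.apache.accumulo.start.Main):

$ jps -m
2387 QuorumPeerMain /home/christine/zookeeper-3.4.9/bin/../conf/zoo.cfg
2801 NameNode
2955 DataNode
3171 SecondaryNameNode
4101 Main master
4230 Main tserver
4388 Main gc
4501 Jps -m

In particular, check that a NameNode process and at least one 'Main tserver'
are present.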

Check the Accumulo logs – are there any *.err files with a size > 0? The .err
files are created on an unexpected exit. The other debug logs will provide a
clearer picture of what is happening.
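For example, something like this will list any non-empty .err files (assuming
the default log location of logs/ under the Accumulo install; adjust if
ACCUMULO_LOG_DIR points elsewhere):

$ find ~/accumulo-2.0.1/logs -name '*.err' -size +0c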

Tail the master debug log and a tserver debug log – are they showing
exceptions being thrown?
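For example (the exact file names depend on the log4j configuration, so adjust
the globs to match what is in your log directory):

$ tail -f ~/accumulo-2.0.1/logs/*master*.log ~/accumulo-2.0.1/logs/*tserver*.log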

From: Christine Buss <[email protected]> 
Sent: Saturday, July 10, 2021 9:57 AM
To: [email protected]
Subject: Re: Re: Re: Re: Hadoop ConnectException

Ok, so in the file 'accumulo.properties' I changed

## Sets location in HDFS where Accumulo will store data
instance.volumes=hdfs://localhost:8020/accumulo

to

## Sets location in HDFS where Accumulo will store data
instance.volumes=hdfs://localhost:9000/accumulo
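
(Port 9000 matches the fs.defaultFS value that the Hadoop single-node guide has
you set in etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

so instance.volumes now points at the port where the namenode is actually
listening.)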

Then I was able to run 'accumulo init' and 'accumulo-cluster start'.

But when I run 'accumulo shell -u root' it hangs:

christine@centauri:~/accumulo-2.0.1/bin$ ./accumulo shell -u root
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in 
version 9.0 and will likely be removed in a future release.
Loading configuration from 
/home/christine/accumulo-2.0.1/conf/accumulo-client.properties
Password: *********

Shell - Apache Accumulo Interactive Shell
-
- version: 2.0.1
- instance name: accumulotest
- instance id: 5d8c404a-c741-48b3-b7a4-adaf19cc1499
-
- type 'help' for a list of available commands
-
2021-07-10 15:39:17,328 [clientImpl.ServerClient] WARN : There are no tablet 
servers: check that zookeeper and accumulo are running.

Sent: Saturday, July 10, 2021 at 2:43 PM
From: "Christine Buss" <[email protected]>
To: [email protected]
Subject: Re: Re: Re: Re: Hadoop ConnectException

sorry found it:

The ‘accumulo-cluster’ command was created to manage Accumulo on a cluster and
replaces ‘start-all.sh’ and ‘stop-all.sh’
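
So instead of ./bin/start-all.sh and ./bin/stop-all.sh it is now:

./bin/accumulo-cluster start
./bin/accumulo-cluster stop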

Sent: Saturday, July 10, 2021 at 12:14 PM
From: "Christine Buss" <[email protected]>
To: [email protected]
Subject: Re: Re: Re: Re: Hadoop ConnectException

I am still trying to run accumulo 2.0.1

Question: what do you use in 2.0.1 instead of ./bin/start-all.sh?

Sent: Friday, July 9, 2021 at 5:15 PM
From: "Christopher" <[email protected]>
To: "accumulo-user" <[email protected]>
Subject: Re: Re: Re: Hadoop ConnectException

Oh, so you weren't able to get 2.0.1 working? That's unfortunate. If
you try 2.0.1 again and are able to figure out how to get past the
issue you were having, feel free to let us know what you did
differently.

On Fri, Jul 9, 2021 at 10:56 AM Christine Buss <[email protected]> wrote:
>
>
> yes of course!
> I deleted accumulo 2.0.1 and installed accumulo 1.10.1.
> Then edited the conf/ files. I think I didn't do that right before.
> And then it worked.
>
> Sent: Friday, July 9, 2021 at 4:30 PM
> From: "Christopher" <[email protected]>
> To: "accumulo-user" <[email protected]>
> Subject: Re: Re: Hadoop ConnectException
> Glad to hear you got it working! Can you share what your solution was in case 
> it helps others?
>
> On Fri, Jul 9, 2021, 10:20 Christine Buss <[email protected]> wrote:
>>
>>
>> It works!! Thanks a lot to everyone!
>> I worked through all your hints and suggestions.
>>
>> Sent: Thursday, July 8, 2021 at 6:18 PM
>> From: "Ed Coleman" <[email protected]>
>> To: [email protected]
>> Subject: Re: Hadoop ConnectException
>>
>> According to the Hadoop getting started guide 
>> (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html)
>> the resource manager runs at: http://localhost:8088/
>>
>> Can you run hadoop commands like:
>> > hadoop fs -ls /accumulo (or whatever you've decided on as the destination 
>> > for files)
>>
>> Did you check that accumulo-env.sh and other configuration files have been 
>> set up for your environment?
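>>
>> For example, in accumulo-env.sh these would typically point at your installs
>> (the paths below are placeholders for this particular setup, not required
>> values):
>>
>> export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
>> export HADOOP_HOME=/home/christine/hadoop-3.3.1
>> export ZOOKEEPER_HOME=/home/christine/zookeeper-3.7.0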
>>
>>
>> On 2021/07/07 15:20:41, Christine Buss <[email protected]> wrote:
>> > Hi,
>> >
>> >
>> >
>> > I am using:
>> >
>> > Java 11
>> >
>> > Ubuntu 20.04.2
>> >
>> > Hadoop 3.3.1
>> >
>> > Zookeeper 3.7.0
>> >
>> > Accumulo 2.0.1
>> >
>> >
>> >
>> >
>> >
>> > I followed the instructions here:
>> >
>> > https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
>> >
>> > and edited `etc/hadoop/hadoop-env.sh`, etc/hadoop/core-site.xml,
>> > etc/hadoop/hdfs-site.xml accordingly.
>> >
>> > 'ssh localhost' works without a passphrase.
>> >
>> >
>> >
>> > Then I started Zookeeper, start-dfs.sh and start-yarn.sh:
>> >
>> > christine@centauri:~$ ./zookeeper-3.4.9/bin/zkServer.sh start
>> > ZooKeeper JMX enabled by default
>> > Using config: /home/christine/zookeeper-3.4.9/bin/../conf/zoo.cfg
>> > Starting zookeeper ... STARTED
>> > christine@centauri:~$ ./hadoop-3.3.1/sbin/start-dfs.sh
>> > Starting namenodes on [localhost]
>> > Starting datanodes
>> > Starting secondary namenodes [centauri]
>> > centauri: Warning: Permanently added
>> > 'centauri,2003:d4:771c:3b00:7223:40a1:4c07:7c7b' (ECDSA) to the list of 
>> > known
>> > hosts.
>> > christine@centauri:~$ ./hadoop-3.3.1/sbin/start-yarn.sh
>> > Starting resourcemanager
>> > Starting nodemanagers
>> > christine@centauri:~$ jps
>> > 3921 Jps
>> > 2387 QuorumPeerMain
>> > 3171 SecondaryNameNode
>> > 3732 NodeManager
>> > 2955 DataNode
>> > 3599 ResourceManager
>> >
>> >
>> >
>> > BUT
>> >
>> > when running 'accumulo init' I get this Error:
>> >
>> > christine@centauri:~$ ./accumulo-2.0.1/bin/accumulo init
>> > OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated 
>> > in
>> > version 9.0 and will likely be removed in a future release.
>> > 2021-07-07 15:59:05,590 [conf.SiteConfiguration] INFO : Found Accumulo
>> > configuration on classpath at
>> > /home/christine/accumulo-2.0.1/conf/accumulo.properties
>> > 2021-07-07 15:59:08,460 [fs.VolumeManagerImpl] WARN : 
>> > dfs.datanode.synconclose
>> > set to false in hdfs-site.xml: data loss is possible on hard system reset 
>> > or
>> > power loss
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Hadoop Filesystem is
>> > hdfs://localhost:9000
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Accumulo data dirs are
>> > [hdfs://localhost:8020/accumulo]
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Zookeeper server is
>> > localhost:2181
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Checking if Zookeeper is
>> > available. If this hangs, then you need to make sure zookeeper is running
>> > 2021-07-07 15:59:08,938 [init.Initialize] ERROR: Fatal exception
>> > java.io.IOException: Failed to check if filesystem already initialized
>> > at 
>> > org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)
>> > at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)
>> > at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)
>> > at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
>> > at java.base/java.lang.Thread.run(Thread.java:829)
>> > Caused by: java.net.ConnectException: Call From centauri/192.168.178.30 to
>> > localhost:8020 failed on connection exception: java.net.ConnectException:
>> > Connection refused; For more details see:
>> > http://wiki.apache.org/hadoop/ConnectionRefused
>> > at 
>> > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> >  Method)
>> > at 
>> > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> > at 
>> > java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> > at 
>> > java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>> > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)
>> > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:828)
>> > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
>> > at org.apache.hadoop.ipc.Client.call(Client.java:1519)
>> > at org.apache.hadoop.ipc.Client.call(Client.java:1416)
>> > at 
>> > org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
>> > at 
>> > org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
>> > at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
>> > at 
>> > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:965)
>> > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
>> > Method)
>> > at 
>> > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > at 
>> > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>> > at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
>> > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1731)
>> > at 
>> > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1752)
>> > at 
>> > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
>> > at 
>> > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>> > at 
>> > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)
>> > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)
>> > at 
>> > org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)
>> > at 
>> > org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)
>> > at 
>> > org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)
>> > ... 4 more
>> > Caused by: java.net.ConnectException: Connection refused
>> > at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> > at 
>> > java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
>> > at 
>> > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>> > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)
>> > at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
>> > at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)
>> > at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:414)
>> > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1647)
>> > at org.apache.hadoop.ipc.Client.call(Client.java:1463)
>> > ... 28 more
>> > 2021-07-07 15:59:08,944 [start.Main] ERROR: Thread 'init' died.
>> > java.lang.RuntimeException: java.io.IOException: Failed to check if 
>> > filesystem
>> > already initialized
>> > at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:997)
>> > at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
>> > at java.base/java.lang.Thread.run(Thread.java:829)
>> > Caused by: java.io.IOException: Failed to check if filesystem already
>> > initialized
>> > at 
>> > org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)
>> > at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)
>> > at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)
>> > ... 2 more
>> > Caused by: java.net.ConnectException: Call From centauri/192.168.178.30 to
>> > localhost:8020 failed on connection exception: java.net.ConnectException:
>> > Connection refused; For more details see:
>> > http://wiki.apache.org/hadoop/ConnectionRefused
>> > at 
>> > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> >  Method)
>> > at 
>> > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> > at 
>> > java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> > at 
>> > java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>> > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)
>> > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:828)
>> > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)
>> > at org.apache.hadoop.ipc.Client.call(Client.java:1519)
>> > at org.apache.hadoop.ipc.Client.call(Client.java:1416)
>> > at 
>> > org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
>> > at 
>> > org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
>> > at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
>> > at 
>> > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:965)
>> > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
>> > Method)
>> > at 
>> > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > at 
>> > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>> > at 
>> > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>> > at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
>> > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1731)
>> > at 
>> > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1752)
>> > at 
>> > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
>> > at 
>> > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>> > at 
>> > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)
>> > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)
>> > at 
>> > org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)
>> > at 
>> > org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)
>> > at 
>> > org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)
>> > ... 4 more
>> > Caused by: java.net.ConnectException: Connection refused
>> > at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> > at 
>> > java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
>> > at 
>> > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>> > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)
>> > at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
>> > at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)
>> > at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:414)
>> > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1647)
>> > at org.apache.hadoop.ipc.Client.call(Client.java:1463)
>> > ... 28 more
>> >
>> >
>> >
>> >
>> >
>> > I am not able to find the mistake. I found similar questions on Stack
>> > Overflow, but none of them solved my problem.
>> >
>> > Thanks in advance for any idea.
>> >
>> >
