My server is configured with a static IPv4 address, not IPv6, so I don't understand why the IPv6 address appears in that warning; the master/localhost is bound to an IPv4 address: 127.0.0.1.
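For what it's worth, "localhost/0:0:0:0:0:0:0:1:2181" in the log usually only means that the name "localhost" also resolves to the IPv6 loopback (::1), not that the NIC has an IPv6 address. A minimal check plus one possible (unverified) workaround, assuming a standard /etc/hosts and that conf/hbase-env.sh is the file being edited:

    # See what "localhost" resolves to; a "::1  localhost" entry is what makes
    # the ZooKeeper client also try the IPv6 loopback.
    grep localhost /etc/hosts

    # Possible workaround (an assumption, not a fix confirmed in this thread):
    # ask the JVM to prefer the IPv4 stack, e.g. in conf/hbase-env.sh:
    export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"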
-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, November 16, 2016 11:13 AM
To: user@hbase.apache.org
Subject: Re: problem in launching HBase

2016-10-31 15:49:57,503 INFO [master/localhost/127.0.0.1:16000-SendThread(localhost:2181)] zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)

Is your machine running IPv6? I don't have much experience with IPv6.

Cheers

On Tue, Nov 15, 2016 at 6:59 PM, QI Congyun <congyun...@alcatel-sbell.com.cn> wrote:
> Hi, Ted,
>
> Do you think some incorrect configuration on my side is causing the
> issues I'm encountering?
> Thanks.
>
> -----Original Message-----
> From: QI Congyun
> Sent: Tuesday, November 15, 2016 1:29 PM
> To: user@hbase.apache.org
> Subject: RE: problem in launching HBase
>
> I'm sorry, I made a mistake: only the Hadoop configuration files were
> attached to the previous e-mail.
> The hbase-site.xml is attached now; please check it.
>
> -----Original Message-----
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Tuesday, November 15, 2016 1:25 PM
> To: user@hbase.apache.org
> Subject: Re: problem in launching HBase
>
> I don't see hbase-site.xml attached.
>
> Consider using pastebin.
>
> On Mon, Nov 14, 2016 at 9:19 PM, QI Congyun <congyun...@alcatel-sbell.com.cn> wrote:
> >
> > The NameNode and DataNode are running normally, as shown by the
> > following processes. The file "hbase-site.xml" and other associated
> > files are enclosed.
> > Thanks.
> >
> > ---------------------------------------------------------------------
> > [hadoop@hadoop2 conf]$ jps
> > 11805 SecondaryNameNode
> > 32314 Jps
> > 11614 DataNode
> > 507 NodeManager
> > 385 ResourceManager
> > 11379 NameNode
> > ---------------------------------------------------------------------
> > [hadoop@hadoop2 hadoop-2.7.3]$ bin/hdfs dfsadmin -report
> > Configured Capacity: 154684043264 (144.06 GB)
> > Present Capacity: 133174730752 (124.03 GB)
> > DFS Remaining: 128144982016 (119.34 GB)
> > DFS Used: 5029748736 (4.68 GB)
> > DFS Used%: 3.78%
> > Under replicated blocks: 0
> > Blocks with corrupt replicas: 0
> > Missing blocks: 0
> > Missing blocks (with replication factor 1): 0
> >
> > -------------------------------------------------
> >
> > Live datanodes (1):
> >
> > Name: 127.0.0.1:9866 (localhost)
> > Hostname: localhost
> > Decommission Status : Normal
> > Configured Capacity: 154684043264 (144.06 GB)
> > DFS Used: 5029748736 (4.68 GB)
> > Non DFS Used: 21509312512 (20.03 GB)
> > DFS Remaining: 128144982016 (119.34 GB)
> > DFS Used%: 3.25%
> > DFS Remaining%: 82.84%
> > Configured Cache Capacity: 0 (0 B)
> > Cache Used: 0 (0 B)
> > Cache Remaining: 0 (0 B)
> > Cache Used%: 100.00%
> > Cache Remaining%: 0.00%
> > Xceivers: 1
> > Last contact: Tue Nov 15 13:17:01 CST 2016
> > .....................................................
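Since HDFS itself reports healthy above, it may be worth confirming which RPC address the NameNode is actually serving before comparing it with hbase.rootdir. A quick check, assuming the hadoop-2.7.3 layout from the session above (the ports in the comment are only typical values, not taken from this thread):

    # Print the configured filesystem URI (this is what hbase.rootdir must match)
    [hadoop@hadoop2 hadoop-2.7.3]$ bin/hdfs getconf -confKey fs.defaultFS
    # Typical values are hdfs://localhost:9000 or hdfs://localhost:8020.

    # Confirm something is actually listening on that port
    [hadoop@hadoop2 hadoop-2.7.3]$ netstat -tln | grep -E ':(8020|9000)'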
> >
> > -----Original Message-----
> > From: Ted Yu [mailto:yuzhih...@gmail.com]
> > Sent: Tuesday, November 15, 2016 11:50 AM
> > To: user@hbase.apache.org
> > Subject: Re: problem in launching HBase
> >
> > 2016-10-31 15:49:57,528 FATAL [localhost:16000.activeMasterManager]
> > master.HMaster: Failed to become active master
> > java.net.ConnectException: Call From hadoop2/127.0.0.1 to
> > localhost:8020 failed on connection exception:
> > java.net.ConnectException: Connection refused; For more details see:
> > http://wiki.apache.org/hadoop/ConnectionRefused
> >   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> >   ...
> >   at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
> >   at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
> >   at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
> >   at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
> >   at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
> >
> > Was the namenode running fine on localhost?
> >
> > Can you pastebin the contents of hbase-site.xml?
> >
> > On Mon, Nov 14, 2016 at 7:40 PM, QI Congyun <congyun...@alcatel-sbell.com.cn> wrote:
> > >
> > > Dear Ted,
> > >
> > > I have gone through the HBase quick-start guide. Although I have not
> > > yet read the whole document, I know how to configure the primary
> > > HBase parameters and perform basic operations.
> > >
> > > I have tried both letting HBase manage ZooKeeper and preventing
> > > HBase from starting the ZooKeeper server, by setting
> > > export HBASE_MANAGES_ZK=true/false in hbase-env.sh. Whether ZooKeeper
> > > is launched by HBase automatically or started manually, I run into
> > > the same problems and logs as the ones you quoted.
> > > I don't understand why the ZooKeeper SASL authentication failed.
> > >
> > > Actually, when I run "start-hbase.sh", the master process starts at
> > > first but then exits by itself, while the ZooKeeper quorum process
> > > keeps running until I kill it manually.
> > > I used the "jps" command to observe the processes.
> > >
> > > Thanks.
> > >
> > > -----Original Message-----
> > > From: Ted Yu [mailto:yuzhih...@gmail.com]
> > > Sent: Tuesday, November 15, 2016 11:01 AM
> > > To: user@hbase.apache.org
> > > Subject: problem in launching HBase
> > >
> > > 2016-11-10 11:25:14,177 INFO [main-SendThread(localhost:2181)]
> > > zookeeper.ClientCnxn: Opening socket connection to server
> > > localhost/127.0.0.1:2181. Will not attempt to authenticate using
> > > SASL (unknown error)
> > >
> > > Was the zookeeper quorum running on the localhost?
> > >
> > > In the future, use pastebin for passing config / log files -
> > > attachments would be stripped by the mailing list.
> > >
> > > Have you read this?
> > >
> > > http://hbase.apache.org/book.html#quickstart_fully_distributed
> > >
> > > On Mon, Nov 14, 2016 at 6:26 PM, QI Congyun <congyun...@alcatel-sbell.com.cn> wrote:
> > > >
> > > > My previous e-mail is attached; please check whether the relevant
> > > > traces are enough to investigate.
> > > > My node configuration files are also enclosed.
> > > >
> > > > Thanks a lot.
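One common cause of the "Call From hadoop2/127.0.0.1 to localhost:8020 ... Connection refused" failure quoted above is that hbase.rootdir points at a port the NameNode is not listening on (for example 8020 while fs.defaultFS uses 9000, or the reverse). A hedged sketch of the check, assuming hbase-site.xml lives under the hbase-1.2.3/conf directory shown later in the thread; the property value in the comment is illustrative only:

    # Compare HBase's rootdir with the fs.defaultFS value found earlier
    [hadoop@hadoop2 hbase-1.2.3]$ grep -A1 'hbase.rootdir' conf/hbase-site.xml
    # The host:port must match fs.defaultFS exactly, e.g. (illustration only):
    #   <property>
    #     <name>hbase.rootdir</name>
    #     <value>hdfs://localhost:9000/hbase</value>
    #   </property>
    # If nothing listens on the port named here, the master fails while
    # checking HDFS safe mode and aborts, exactly as in the trace above.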
> > > >
> > > > ---------- Forwarded message ----------
> > > > From: QI Congyun <congyun...@alcatel-sbell.com.cn>
> > > > To: "user-i...@hbase.apache.org" <user-i...@hbase.apache.org>
> > > > Cc:
> > > > Date: Mon, 14 Nov 2016 08:20:11 +0000
> > > > Subject: my questions about launching HBase
> > > >
> > > > Hi, Specialist,
> > > >
> > > > I am trying to set up an HBase database, but HBase always raises
> > > > errors. I had sent this e-mail to one of the HBase mailing lists,
> > > > but it was refused several times, so I am submitting my questions
> > > > to this new address and hope to receive your response.
> > > >
> > > > Thanks a lot.
> > > >
> > > > My Hadoop version is Hadoop-2.7.3.
> > > > My OS is CentOS Linux 6.4.
> > > >
> > > > ---------- Forwarded message ----------
> > > > From: QI Congyun <congyun...@alcatel-sbell.com.cn>
> > > > To: "hbase-...@lists.apache.org" <hbase-...@lists.apache.org>
> > > > Cc:
> > > > Date: Fri, 11 Nov 2016 02:01:58 +0000
> > > > Subject: FW: my questions are always not resolved about hbase
> > > >
> > > > The e-mail could not be delivered to the destination mailbox, so I
> > > > am resending it.
> > > >
> > > > Thanks.
> > > >
> > > > From: QI Congyun
> > > > Sent: Thursday, November 10, 2016 11:53 AM
> > > > To: 'hbase-...@lists.apache.org'
> > > > Subject: my questions are always not resolved about hbase
> > > >
> > > > Hello sir,
> > > >
> > > > So sorry to bother you. I'm interested in the Hadoop system and
> > > > have attempted to use Hadoop and HBase, but I cannot resolve the
> > > > HBase issue; could you help me? Thanks in advance.
> > > >
> > > > 1. I'm puzzled why the same issue is encountered every time I
> > > > launch HBase; the output is as follows:
> > > >
> > > > [hadoop@hadoop2 hbase-1.2.3]$
> > > > [hadoop@hadoop2 hbase-1.2.3]$ bin/start-hbase.sh
> > > > localhost: starting zookeeper, logging to
> > > > /home/hadoop/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-hadoop2.out
> > > > localhost: java.io.IOException: Unable to create data dir
> > > > /home/testuser/zookeeper
> > > > localhost: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.writeMyID(HQuorumPeer.java:157)
> > > > localhost: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:70)
> > > > starting master, logging to
> > > > /home/hadoop/hbase-1.2.3/logs/hbase-hadoop-master-hadoop2.out
> > > > starting regionserver, logging to
> > > > /home/hadoop/hbase-1.2.3/logs/hbase-hadoop-1-regionserver-hadoop2.out
> > > >
> > > > ……………
> > > >
> > > > [hadoop@hadoop2 hbase-1.2.3]$ jps
> > > > 11805 SecondaryNameNode
> > > > 11614 DataNode
> > > > 507 NodeManager
> > > > 30687 HRegionServer
> > > > 385 ResourceManager
> > > > 11379 NameNode
> > > > 30899 Jps
> > > >
> > > > ……………………
> > > >
> > > > [hadoop@hadoop2 hbase-1.2.3]$ bin/stop-hbase.sh
> > > > stopping hbasecat: /tmp/hbase-hadoop-master.pid: No such file or directory
> > > > localhost: no zookeeper to stop because no pid file
> > > > /tmp/hbase-hadoop-zookeeper.pid
> > > >
> > > > 2. When I check the logs, fatal errors are raised once again, but
> > > > I don't know why.
> > > >
> > > > 2016-11-10 11:25:14,177 INFO [main-SendThread(localhost:2181)]
> > > > zookeeper.ClientCnxn: Opening socket connection to server
> > > > localhost/127.0.0.1:2181. Will not attempt to authenticate using
> > > > SASL (unknown error)
> > > > 2016-11-10 11:25:14,181 WARN [main-SendThread(localhost:2181)]
> > > > zookeeper.ClientCnxn: Session 0x0 for server null, unexpected
> > > > error, closing socket connection and attempting reconnect
> > > > java.net.ConnectException: Connection refused
> > > >         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> > > >         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
> > > >         at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
> > > >         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> > > > 2016-11-10 11:25:14,291 INFO [main-SendThread(localhost:2181)]
> > > > zookeeper.ClientCnxn: Opening socket connection to server
> > > > localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate
> > > > using SASL (unknown error)
> > > > 2016-11-10 11:25:14,294 WARN [main-SendThread(localhost:2181)]
> > > > zookeeper.ClientCnxn: Session 0x0 for server null, unexpected
> > > > error, closing socket connection and attempting reconnect
> > > > java.net.ConnectException: Connection refused
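Putting the pieces together: the "Unable to create data dir /home/testuser/zookeeper" line in the start-hbase.sh output above means the HBase-managed ZooKeeper (HQuorumPeer) never started, so every later connection to localhost:2181 is refused; the SASL message is incidental. A minimal sketch of a fix, assuming HBase runs as the hadoop user and that hbase.zookeeper.property.dataDir is being changed in conf/hbase-site.xml (the directory name below is only an example):

    # Create a data dir the hadoop user can actually write to
    mkdir -p /home/hadoop/zookeeper
    # ...and point hbase.zookeeper.property.dataDir in conf/hbase-site.xml at it
    # (the quickstart's /home/testuser/zookeeper does not exist on this box).

    # After restarting HBase, verify the quorum peer is up and answering:
    jps | grep HQuorumPeer
    echo ruok | nc localhost 2181    # should print "imok" (requires nc)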