bq. dataDir=/tmp/zookeeper

When the machine restarts, you would lose the data, right?
Please change it to another directory.
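
For example (just a sketch; any persistent directory works), keep a single
dataDir entry in zoo.cfg and point it outside /tmp:

dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/log

Your zoo.cfg below has two dataDir lines; drop the /tmp/zookeeper one so only
one remains, and make the same change on every node.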

Was zookeeper.out from slave2?

Please check port 3333 on 192.168.1.3.
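
A rough way to check (just a sketch; assumes nc and netstat exist on the hosts):

# On 192.168.1.3: is a ZooKeeper peer listening on the election port?
netstat -tlnp | grep 3333

# From the master: can the port be reached at all?
nc -zv 192.168.1.3 3333

# "Connection refused" usually just means the peer on that host is not up,
# so also check the myid file and start ZooKeeper on all three nodes:
cat /opt/zookeeper/data/myid          # must match server.N in zoo.cfg (1, 2 or 3)
/opt/zookeeper/bin/zkServer.sh start
/opt/zookeeper/bin/zkServer.sh status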

On Wed, Apr 20, 2016 at 6:22 AM, Eric Gao <gaoqiang...@163.com> wrote:

> Dear Ted,
> Thank you for your kind attention.
> I'm a complete novice at Hadoop. ^___^
>
>
>
> I have 3 hosts: 1 master server and 2 data nodes, and ZooKeeper is installed on
> each server.
> ZooKeeper and HBase are still unable to start.
>
> Each server has the same status:
>
> [root@master data]# /opt/zookeeper/bin/zkServer.sh start
> ZooKeeper JMX enabled by default
> Using config: /opt/zookeeper/bin/../conf/zoo.cfg
> Starting zookeeper ... date
> STARTED
> [root@slave2 ~]# /opt/zookeeper/bin/zkServer.sh status
> ZooKeeper JMX enabled by default
> Using config: /opt/zookeeper/bin/../conf/zoo.cfg
> Error contacting service. It is probably not running.
>
>
> Here is some of my configuration information:
>
>
> *hbase-site.xml:*
> <configuration>
> <property>
> <name>hbase.rootdir</name>
> <value>hdfs://master:9000/hbase/data</value>
> </property>
> <property>
> <name>hbase.cluster.distributed</name>
> <value>true</value>
> </property>
>
> <property>
> <name>zookeeper.znode.parent</name>
> <value>/hbase</value>
> <description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
> files that are configured with a relative path will go under this node.
> By default, all of HBase's ZooKeeper file path are configured with a
> relative path, so they will all go under this directory unless changed.
> </description>
> </property>
>
>
> <property>
> <name>hbase.zookeeper.quorum</name>
> <value>master,slave1,slave2</value>
>
> <description>Comma separated list of servers in the ZooKeeper Quorum. For 
> example, "
> host1.mydomain.com,host2.mydomain.com,host3.mydomain.com
> ". By default this is set to localhost for local and pseudo-distributed modes 
> of operation. For a fully-distributed setup, this should be set to a full 
> list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh 
> this is the list of servers which we will start/stop ZooKeeper on. 
> </description>
> </property>
> <property>
> <name>hbase.zookeeper.property.dataDir</name>
> <value>/opt/zookeeper/data</value>
>
> <description>Property from ZooKeeper's config zoo.cfg. The directory where 
> the snapshot is stored. </description>
> </property>
> </configuration>
>
> *zoo.cfg:*
> tickTime=2000
> initLimit=10
> syncLimit=5
> dataDir=/tmp/zookeeper
> clientPort=2181
> server.1=192.168.1.2:2222:3333
> server.2=192.168.1.3:2222:3333
> server.3=192.168.1.4:2222:3333
> dataDir=/opt/zookeeper/data
> dataLogDir=/opt/zookeeper/log
>
> [root@master ~]# ls -lt /opt/zookeeper/
> total 1572
> drwxrwxr-x  3 hadoop hadoop      53 Apr 19 08:32 data
> drwxrwxr-x  3 hadoop hadoop      22 Apr 17 14:25 log
> drwxr-xr-x  2 hadoop hadoop    4096 Apr 17 14:25 bin
> -rw-r--r--  1 root   root       133 Apr 17 12:49 zookeeper.out
> drwxr-xr-x  2 hadoop hadoop      67 Apr 14 09:07 conf
> drwxr-xr-x  2 hadoop hadoop    4096 Feb  5 22:50 dist-maven
> -rw-rw-r--  1 hadoop hadoop     819 Feb  5 22:50 zookeeper-3.4.8.jar.asc
> drwxr-xr-x  6 hadoop hadoop    4096 Feb  5 22:49 docs
> drwxr-xr-x  4 hadoop hadoop    4096 Feb  5 22:49 lib
> drwxr-xr-x  8 hadoop hadoop    4096 Feb  5 22:49 src
> -rw-rw-r--  1 hadoop hadoop 1360961 Feb  5 22:46 zookeeper-3.4.8.jar
> -rw-rw-r--  1 hadoop hadoop      33 Feb  5 22:46 zookeeper-3.4.8.jar.md5
> -rw-rw-r--  1 hadoop hadoop      41 Feb  5 22:46 zookeeper-3.4.8.jar.sha1
> -rw-rw-r--  1 hadoop hadoop   83235 Feb  5 22:46 build.xml
> -rw-rw-r--  1 hadoop hadoop   88625 Feb  5 22:46 CHANGES.txt
> -rw-rw-r--  1 hadoop hadoop    1953 Feb  5 22:46 ivysettings.xml
> -rw-rw-r--  1 hadoop hadoop    3498 Feb  5 22:46 ivy.xml
> -rw-rw-r--  1 hadoop hadoop   11938 Feb  5 22:46 LICENSE.txt
> -rw-rw-r--  1 hadoop hadoop     171 Feb  5 22:46 NOTICE.txt
> -rw-rw-r--  1 hadoop hadoop    1770 Feb  5 22:46 README_packaging.txt
> -rw-rw-r--  1 hadoop hadoop    1585 Feb  5 22:46 README.txt
> drwxr-xr-x  5 hadoop hadoop      44 Feb  5 22:46 recipes
> drwxr-xr-x 10 hadoop hadoop     122 Feb  5 22:46 contrib
>
>
>
> *zookeeper.out:*
> 2016-04-19 07:17:15,684 [myid:] - INFO  [main:QuorumPeerConfig@103
> ] - Reading configuration from: /opt/zookeeper/bin/../conf/zoo.cfg
> 2016-04-19 07:17:15,772 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149
> ] - Resolved hostname: 192.168.1.4 to address: /192.168.1.4
> 2016-04-19 07:17:15,773 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149
> ] - Resolved hostname: 192.168.1.3 to address: /192.168.1.3
> 2016-04-19 07:17:15,773 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149
> ] - Resolved hostname: 192.168.1.2 to address: /192.168.1.2
> 2016-04-19 07:17:15,773 [myid:] - INFO  [main:QuorumPeerConfig@331
> ] - Defaulting to majority quorums
> 2016-04-19 07:17:15,790 [myid:1] - INFO  [main:DatadirCleanupManager@78
> ] - autopurge.snapRetainCount set to 3
> 2016-04-19 07:17:15,791 [myid:1] - INFO  [main:DatadirCleanupManager@79
> ] - autopurge.purgeInterval set to 0
> 2016-04-19 07:17:15,791 [myid:1] - INFO  [main:DatadirCleanupManager@101
> ] - Purge task is not scheduled.
> 2016-04-19 07:17:15,803 [myid:1] - INFO  [main:QuorumPeerMain@127
> ] - Starting quorum peer
> 2016-04-19 07:17:15,859 [myid:1] - INFO  [main:NIOServerCnxnFactory@89
> ] - binding to port 0.0.0.0/0.0.0.0:2181
> 2016-04-19 07:17:15,870 [myid:1] - INFO  [main:QuorumPeer@1019
> ] - tickTime set to 2000
> 2016-04-19 07:17:15,870 [myid:1] - INFO  [main:QuorumPeer@1039
> ] - minSessionTimeout set to -1
> 2016-04-19 07:17:15,870 [myid:1] - INFO  [main:QuorumPeer@1050
> ] - maxSessionTimeout set to -1
> 2016-04-19 07:17:15,871 [myid:1] - INFO  [main:QuorumPeer@1065
> ] - initLimit set to 10
>
> 2016-04-19 07:17:15,908 [myid:1] - INFO  
> [ListenerThread:QuorumCnxManager$Listener@534
> ] - My election bind port: /192.168.1.2:3333
>
> 2016-04-19 07:17:15,919 [myid:1] - INFO  
> [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@774
> ] - LOOKING
>
> 2016-04-19 07:17:15,926 [myid:1] - INFO  
> [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@818
> ] - New election. My id =  1, proposed zxid=0x0
>
> 2016-04-19 07:17:15,929 [myid:1] - INFO  
> [WorkerReceiver[myid=1]:FastLeaderElection@600
> ] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid),
> 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
>
> 2016-04-19 07:17:15,949 [myid:1] - WARN  
> [WorkerSender[myid=1]:QuorumCnxManager@400
> ] - Cannot open channel to 2 at election address /192.168.1.3:3333
> java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
>
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
>
> at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
>
> at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
>
> at 
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
>
> at 
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
> at java.lang.Thread.run(Thread.java:745)
>
> 2016-04-19 07:17:15,951 [myid:1] - INFO  
> [WorkerSender[myid=1]:QuorumPeer$QuorumServer@149
> ] - Resolved hostname: 192.168.1.3 to address: /192.168.1.3
>
> 2016-04-19 07:17:15,952 [myid:1] - WARN  
> [WorkerSender[myid=1]:QuorumCnxManager@400
> ] - Cannot open channel to 3 at election address /192.168.1.4:3333
> java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
>
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
>
> at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
>
> at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
>
> at 
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
>
> at 
> org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
> at java.lang.Thread.run(Thread.java:745)
>
> 2016-04-19 07:17:15,953 [myid:1] - INFO  
> [WorkerSender[myid=1]:QuorumPeer$QuorumServer@149
> ] - Resolved hostname: 192.168.1.4 to address: /192.168.1.4
>
> 2016-04-19 07:17:16,132 [myid:1] - WARN  
> [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400
> ] - Cannot open channel to 2 at election address /192.168.1.3:3333
> java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
>
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
>
> at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
>
> at 
> org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
>
> at 
> org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
> at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
>
>
>
>
> *core-site.xml:*
> <configuration>
> <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/opt/hadoop/tmp</value>
>         <description>A base for other temporary directories.</description>
>     </property>
> <!-- file system properties -->
>     <property>
>         <name>fs.default.name</name>
>         <value>hdfs://master:9000</value>
>     </property>
> </configuration>
>
> Please help me check the problem. Thanks a lot!
>
> ------------------------------
> *Eric Gao*
> Keep on going never give up.
> *Blog:*
> http://gaoqiang.blog.chinaunix.net/
> http://gaoqiangdba.blog.163.com/
>
>
>
> *From:* Ted Yu <yuzhih...@gmail.com>
> *Date:* 2016-04-16 23:21
> *To:* user@hbase.apache.org
> *Subject:* Re: Re: ERROR [main]
> client.ConnectionManager$HConnectionImplementation: The node /hbase is not
> in ZooKeeper.
> Can you verify that hbase is running by logging onto the master node and
> checking the Java processes?
>
> If the master is running, can you do a listing of the ZooKeeper znodes (using
> zkCli) and pastebin the result?
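>
> For example (just a sketch, assuming ZooKeeper listens on master:2181):
>
> jps                                       # HMaster, HRegionServer, QuorumPeerMain should show up
> /opt/zookeeper/bin/zkCli.sh -server master:2181
> ls /                                      # inside zkCli; /hbase should be listed if the master wrote it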
>
> Thanks
>
> On Sat, Apr 16, 2016 at 8:14 AM, Eric Gao <gaoqiang...@163.com> wrote:
>
> > Yes, I have seen your reply. Thanks very much for your kindness.
> >
> > This is my hbase-site.xml:
> > <configuration>
> > <property>
> > <name>hbase.rootdir</name>
> > <value>hdfs://master:9000/hbase/data</value>
> > </property>
> > <property>
> > <name>hbase.cluster.distributed</name>
> > <value>true</value>
> > </property>
> >
> > <property>
> >     <name>zookeeper.znode.parent</name>
> >     <value>/hbase</value>
> >     <description>Root ZNode for HBase in ZooKeeper. All of HBase's
> > ZooKeeper
> >       files that are configured with a relative path will go under this
> > node.
> >       By default, all of HBase's ZooKeeper file path are configured with
> a
> >       relative path, so they will all go under this directory unless
> > changed.
> >     </description>
> >   </property>
> >
> >
> > <property>
> > <name>hbase.zookeeper.quorum</name>
> > <value>master,slave1,slave2</value>
> > <description>Comma separated list of servers in the ZooKeeper Quorum. For
> > example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By
> > default this is set to localhost for local and pseudo-distributed modes
> of
> > operation. For a fully-distributed setup, this should be set to a full
> list
> > of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
> > this is the list of servers which we will start/stop ZooKeeper on.
> > </description>
> > </property>
> > <property>
> > <name>hbase.zookeeper.property.dataDir</name>
> > <value>/opt/zookeeper/data</value>
> > <description>Property from ZooKeeper's config zoo.cfg. The directory
> where
> > the snapshot is stored. </description>
> > </property>
> > </configuration>
> >
> > This is my hbase-env.sh:
> >
> >
> > [root@master ~]# cat /opt/hbase/conf/hbase-env.sh
> > #
> > #/**
> > # * Licensed to the Apache Software Foundation (ASF) under one
> > # * or more contributor license agreements.  See the NOTICE file
> > # * distributed with this work for additional information
> > # * regarding copyright ownership.  The ASF licenses this file
> > # * to you under the Apache License, Version 2.0 (the
> > # * "License"); you may not use this file except in compliance
> > # * with the License.  You may obtain a copy of the License at
> > # *
> > # *     http://www.apache.org/licenses/LICENSE-2.0
> > # *
> > # * Unless required by applicable law or agreed to in writing, software
> > # * distributed under the License is distributed on an "AS IS" BASIS,
> > # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> > implied.
> > # * See the License for the specific language governing permissions and
> > # * limitations under the License.
> > # */
> >
> > # Set environment variables here.
> >
> > # This script sets variables multiple times over the course of starting
> an
> > hbase process,
> > # so try to keep things idempotent unless you want to take an even deeper
> > look
> > # into the startup scripts (bin/hbase, etc.)
> >
> > # The java implementation to use.  Java 1.7+ required.
> > export JAVA_HOME=/usr
> >
> > # Extra Java CLASSPATH elements.  Optional.
> >  export HBASE_CLASSPATH=/opt/hadoop
> >
> > # The maximum amount of heap to use. Default is left to JVM default.
> > # export HBASE_HEAPSIZE=1G
> >
> > # Uncomment below if you intend to use off heap cache. For example, to
> > allocate 8G of
> > # offheap, set the value to "8G".
> > # export HBASE_OFFHEAPSIZE=1G
> >
> > # Extra Java runtime options.
> > # Below are what we set by default.  May only work with SUN JVM.
> > # For more on why as well as other possible settings,
> > # see http://wiki.apache.org/hadoop/PerformanceTuning
> > export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
> >
> > # Uncomment one of the below three options to enable java garbage
> > collection logging for the server-side processes.
> >
> > # This enables basic gc logging to the .out file.
> > # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails
> > -XX:+PrintGCDateStamps"
> >
> > # This enables basic gc logging to its own file.
> > # If FILE-PATH is not replaced, the log file(.gc) would still be
> generated
> > in the HBASE_LOG_DIR .
> > # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails
> > -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
> >
> > # This enables basic GC logging to its own file with automatic log
> > rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
> > # If FILE-PATH is not replaced, the log file(.gc) would still be
> generated
> > in the HBASE_LOG_DIR .
> > # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails
> > -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation
> > -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
> >
> > # Uncomment one of the below three options to enable java garbage
> > collection logging for the client processes.
> >
> > # This enables basic gc logging to the .out file.
> > # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails
> > -XX:+PrintGCDateStamps"
> >
> > # This enables basic gc logging to its own file.
> > # If FILE-PATH is not replaced, the log file(.gc) would still be
> generated
> > in the HBASE_LOG_DIR .
> > # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails
> > -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
> >
> > # This enables basic GC logging to its own file with automatic log
> > rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
> > # If FILE-PATH is not replaced, the log file(.gc) would still be
> generated
> > in the HBASE_LOG_DIR .
> > # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails
> > -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation
> > -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
> >
> > # See the package documentation for org.apache.hadoop.hbase.io.hfile for
> > other configurations
> > # needed setting up off-heap block caching.
> >
> > # Uncomment and adjust to enable JMX exporting
> > # See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management
> > to configure remote password access.
> > # More details at:
> > http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
> > # NOTE: HBase provides an alternative JMX implementation to fix the
> random
> > ports issue, please see JMX
> > # section in HBase Reference Guide for instructions.
> >
> > # export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false
> > -Dcom.sun.management.jmxremote.authenticate=false"
> > # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE
> > -Dcom.sun.management.jmxremote.port=10101"
> > # export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS
> $HBASE_JMX_BASE
> > -Dcom.sun.management.jmxremote.port=10102"
> > # export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE
> > -Dcom.sun.management.jmxremote.port=10103"
> > # export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE
> > -Dcom.sun.management.jmxremote.port=10104"
> > # export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE
> > -Dcom.sun.management.jmxremote.port=10105"
> >
> > # File naming hosts on which HRegionServers will run.
> > $HBASE_HOME/conf/regionservers by default.
> > # export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
> >
> > # Uncomment and adjust to keep all the Region Server pages mapped to be
> > memory resident
> > #HBASE_REGIONSERVER_MLOCK=true
> > #HBASE_REGIONSERVER_UID="hbase"
> >
> > # File naming hosts on which backup HMaster will run.
> > $HBASE_HOME/conf/backup-masters by default.
> > # export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters
> >
> > # Extra ssh options.  Empty by default.
> > # export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
> >
> > # Where log files are stored.  $HBASE_HOME/logs by default.
> > # export HBASE_LOG_DIR=${HBASE_HOME}/logs
> >
> > # Enable remote JDWP debugging of major HBase processes. Meant for Core
> > Developers
> > # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
> > # export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
> > # export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
> > # export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"
> >
> > # A string representing this instance of hbase. $USER by default.
> > # export HBASE_IDENT_STRING=$USER
> >
> > # The scheduling priority for daemon processes.  See 'man nice'.
> > # export HBASE_NICENESS=10
> >
> > # The directory where pid files are stored. /tmp by default.
> > # export HBASE_PID_DIR=/var/hadoop/pids
> >
> > # Seconds to sleep between slave commands.  Unset by default.  This
> > # can be useful in large clusters, where, e.g., slave rsyncs can
> > # otherwise arrive faster than the master can service them.
> > # export HBASE_SLAVE_SLEEP=0.1
> >
> > # Tell HBase whether it should manage it's own instance of Zookeeper or
> > not.
> >  export HBASE_MANAGES_ZK=true
> >
> > # The default log rolling policy is RFA, where the log file is rolled as
> > per the size defined for the
> > # RFA appender. Please refer to the log4j.properties file to see more
> > details on this appender.
> > # In case one needs to do log rolling on a date change, one should set
> the
> > environment property
> > # HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
> > # For example:
> > # HBASE_ROOT_LOGGER=INFO,DRFA
> > # The reason for changing default to RFA is to avoid the boundary case of
> > filling out disk space as
> > # DRFA doesn't put any cap on the log size. Please refer to HBase-5655
> for
> > more context.
> >
> >
> >
> > And this is my env:
> > [hadoop@master ~]$ env
> > XDG_SESSION_ID=1
> > HOSTNAME=master
> > SHELL=/bin/bash
> > TERM=xterm
> > HADOOP_HOME=/opt/hadoop
> > HISTSIZE=1000
> > USER=hadoop
> >
> >
> LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
> > MAIL=/var/spool/mail/hadoop
> >
> >
> PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/hadoop/bin:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/bin:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/jre/bin:/usr/bin:/home/hadoop/.local/bin:/home/hadoop/bin
> > PWD=/home/hadoop
> > JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
> > LANG=en_US.UTF-8
> > HISTCONTROL=ignoredups
> > SHLVL=1
> > HOME=/home/hadoop
> > LOGNAME=hadoop
> >
> >
> CLASSPATH=.::/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/lib:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/jre/lib
> > LESSOPEN=||/usr/bin/lesspipe.sh %s
> > _=/bin/env
> >
> >
> >
> > Eric Gao
> > Keep on going never give up.
> > Blog:
> > http://gaoqiang.blog.chinaunix.net/
> > http://gaoqiangdba.blog.163.com/
> >
> >
> >
> > From: Ted Yu
> > Date: 2016-04-16 22:59
> > To: user@hbase.apache.org
> > Subject: Re: ERROR [main]
> > client.ConnectionManager$HConnectionImplementation: The node /hbase is not
> > in ZooKeeper.
> > Have you seen my reply ?
> >
> > http://search-hadoop.com/m/q3RTtJHewi1jOgc21
> >
> > The actual value for zookeeper.znode.parent could be /hbase-secure (just an
> > example).
> >
> > Make sure the correct hbase-site.xml is in the classpath for the hbase shell.
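> >
> > A quick way to see which config the hbase shell picks up (just a sketch):
> >
> > hbase classpath | tr ':' '\n' | grep conf
> > grep -A1 'zookeeper.znode.parent' /opt/hbase/conf/hbase-site.xml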
> >
> > On Sat, Apr 16, 2016 at 7:53 AM, Eric Gao <gaoqiang...@163.com> wrote:
> >
> > > Dear expert,
> > >   I have encountered a problem. When I run the hbase shell command 'status', it shows:
> > >
> > > hbase(main):001:0> status
> > > 2016-04-16 13:03:02,333 ERROR [main]
> > > client.ConnectionManager$HConnectionImplementation: The node /hbase is
> > not
> > > in ZooKeeper. It should have been written by the master. Check the
> value
> > > configured in 'zookeeper.znode.parent'. There could be a mismatch with
> > the
> > > one configured in the master.
> > > 2016-04-16 13:03:02,538 ERROR [main]
> > > client.ConnectionManager$HConnectionImplementation: The node /hbase is
> > not
> > > in ZooKeeper. It should have been written by the master. Check the
> value
> > > configured in 'zookeeper.znode.parent'. There could be a mismatch with
> > the
> > > one configured in the master.
> > > 2016-04-16 13:03:02,843 ERROR [main]
> > > client.ConnectionManager$HConnectionImplementation: The node /hbase is
> > not
> > > in ZooKeeper. It should have been written by the master. Check the
> value
> > > configured in 'zookeeper.znode.parent'. There could be a mismatch with
> > the
> > > one configured in the master.
> > > 2016-04-16 13:03:03,348 ERROR [main]
> > > client.ConnectionManager$HConnectionImplementation: The node /hbase is
> > not
> > > in ZooKeeper. It should have been written by the master. Check the
> value
> > > configured in 'zookeeper.znode.parent'. There could be a mismatch with
> > the
> > > one configured in the master.
> > > 2016-04-16 13:03:04,355 ERROR [main]
> > > client.ConnectionManager$HConnectionImplementation: The node /hbase is
> > not
> > > in ZooKeeper. It should have been written by the master. Check the
> value
> > > configured in 'zookeeper.znode.parent'. There could be a mismatch with
> > the
> > > one configured in the master.
> > > 2016-04-16 13:03:06,369 ERROR [main]
> > > client.ConnectionManager$HConnectionImplementation: The node /hbase is
> > not
> > > in ZooKeeper. It should have been written by the master. Check the
> value
> > > configured in 'zookeeper.znode.parent'. There could be a mismatch with
> > the
> > > one configured in the master.
> > > 2016-04-16 13:03:10,414 ERROR [main]
> > > client.ConnectionManager$HConnectionImplementation: The node /hbase is
> > not
> > > in ZooKeeper. It should have been written by the master. Check the
> value
> > > configured in 'zookeeper.znode.parent'. There could be a mismatch with
> > the
> > > one configured in the master.
> > >
> > > How can I solve the problem?
> > > Thanks very much
> > >
> > >
> > >
> > > Eric Gao
> > > Keep on going never give up.
> > > Blog:
> > > http://gaoqiang.blog.chinaunix.net/
> > > http://gaoqiangdba.blog.163.com/
> > >
> > >
> > >
> >
>
>
