Hey Alexandr,

What do your hbase-site and hdfs-site look like? Wanna upload them to
Gist or something similar and then paste a link?
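
For reference, given hbase.rootdir=hdfs://mycluster/hbase and the ZooKeeper
ensemble in your log, I'd expect the relevant part of your hbase-site.xml to
look roughly like this (just a sketch built from the values in the log, not
necessarily your exact file):

  <configuration>
    <!-- root dir points at the HA nameservice, not a single namenode -->
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://mycluster/hbase</value>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
    <!-- matches the ensemble nn1:2181,nn2:2181,dn1:2181 from your log -->
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>nn1,nn2,dn1</value>
    </property>
  </configuration>

The part I'd want to check is whether HBase can resolve the "mycluster"
nameservice at all -- usually that means the hdfs-site.xml carrying your
dfs.nameservices / dfs.ha.namenodes.* properties is on HBase's classpath or
copied into its conf dir.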

-Dima
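
P.S. One thing that jumps out from the log: every DataStreamer exception says
"There are 0 datanode(s) running", so HDFS itself has no live datanodes when
the master tries to write /hbase/.tmp/hbase.version. Before digging into the
HBase side I'd check HDFS directly, e.g.:

  hdfs dfsadmin -report

If that reports 0 live datanodes, no HBase setting will help until the
datanodes are up and registered with the active namenode.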

On Friday, August 5, 2016, Alexandr Porunov <alexandr.poru...@gmail.com>
wrote:

> Hello Dima,
>
> Thank you for the advice, but the problem hasn't disappeared. When I start
> HMaster on the nn1 and nn2 nodes they both run, but when I try to connect to
> nn1 (http://nn1:16010/) the HMaster on nn1 crashes. The HMaster on nn2 stays
> available via http://nn2:16010/ . Do you know why this happens?
>
> Here are my logs from nn1:
>
> Fri Aug  5 13:04:20 EEST 2016 Starting master on nn1
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 3904
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 3904
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2016-08-05 13:04:21,531 INFO  [main] util.VersionInfo: HBase 1.1.5
> 2016-08-05 13:04:21,531 INFO  [main] util.VersionInfo: Source code
> repository git://diocles.local/Volumes/hbase-1.1.5/hbase
> revision=239b80456118175b340b2e562a5568b5c744252e
> 2016-08-05 13:04:21,531 INFO  [main] util.VersionInfo: Compiled by ndimiduk
> on Sun May  8 20:29:26 PDT 2016
> 2016-08-05 13:04:21,531 INFO  [main] util.VersionInfo: From source with
> checksum 7ad8dc6c5daba19e4aab081181a2457d
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:
> /usr/java/default/bin:/usr/hadoop/bin:/usr/hadoop/sbin:/usr/hadoop/bin
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HISTCONTROL=ignoredups
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HBASE_REGIONSERVER_OPTS= -XX:PermSize=128m -XX:MaxPermSize=128m
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:MAIL=/var/spool/mail/hadoop
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:LD_LIBRARY_PATH=:/usr/hadoop/lib
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:LOGNAME=hadoop
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HBASE_REST_OPTS=
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:PWD=/usr/hadoop
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HADOOP_PREFIX=/usr/hadoop
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HBASE_ROOT_LOGGER=INFO,RFA
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:LESSOPEN=||/usr/bin/lesspipe.sh %s
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:SHELL=/bin/bash
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HBASE_ENV_INIT=true
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HBASE_MASTER_OPTS= -XX:PermSize=128m -XX:MaxPermSize=128m
> 2016-08-05 13:04:22,244 INFO  [main] util.ServerCommandLine:
> env:HBASE_MANAGES_ZK=false
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:HBASE_NICENESS=0
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:HBASE_OPTS=-XX:+UseConcMarkSweepGC   -XX:PermSize=128m
> -XX:MaxPermSize=128m -Dhbase.log.dir=/usr/hbase/bin/../logs
> -Dhbase.log.file=hbase-hadoop-master-nn1.log
> -Dhbase.home.dir=/usr/hbase/bin/.. -Dhbase.id.str=hadoop
> -Dhbase.root.logger=INFO,RFA -Djava.library.path=/usr/hadoop/lib
> -Dhbase.security.logger=INFO,RFAS
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:HBASE_START_FILE=/tmp/hbase-hadoop-master.autorestart
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:HBASE_SECURITY_LOGGER=INFO,RFAS
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;
> 35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;
> 41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=
> 01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=
> 01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.
> tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:
> *.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.
> lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.
> tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:
> *.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:
> *.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:
> *.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;
> 35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=
> 01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.
> png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:
> *.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=
> 01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.
> mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:
> *.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:
> *.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*
> .xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:
> *.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*
> .flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=
> 01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=
> 01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine: env:SHLVL=4
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:HBASE_LOGFILE=hbase-hadoop-master-nn1.log
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:HISTSIZE=1000
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:JAVA_HOME=/usr/java/default/
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine: env:TERM=xterm
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:LANG=en_US.UTF-8
> 2016-08-05 13:04:22,245 INFO  [main] util.ServerCommandLine:
> env:XDG_SESSION_ID=2
> 2016-08-05 13:04:22,249 INFO  [main] util.ServerCommandLine:
> env:YARN_HOME=/usr/hadoop
> 2016-08-05 13:04:22,249 INFO  [main] util.ServerCommandLine:
> env:HADOOP_HDFS_HOME=/usr/hadoop
> 2016-08-05 13:04:22,249 INFO  [main] util.ServerCommandLine:
> env:HADOOP_MAPRED_HOME=/usr/hadoop
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HADOOP_COMMON_HOME=/usr/hadoop
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HADOOP_OPTS=-Djava.library.path=/usr/hadoop/lib
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HBASE_IDENT_STRING=hadoop
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HBASE_ZNODE_FILE=/tmp/hbase-hadoop-master.znode
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:SSH_TTY=/dev/pts/0
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:SSH_CLIENT=192.168.0.132 37633 22
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HBASE_LOG_PREFIX=hbase-hadoop-master-nn1
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HBASE_LOG_DIR=/usr/hbase/bin/../logs
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:USER=hadoop
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:CLASSPATH=/usr/hbase/bin/../conf:/usr/java/default//lib/
> tools.jar:/usr/hbase/bin/..:/usr/hbase/bin/../lib/
> activation-1.1.jar:/usr/hbase/bin/../lib/antisamy-1.4.3.jar:
> /usr/hbase/bin/../lib/aopalliance-1.0.jar:/usr/hbase/bin/../lib/apacheds-
> i18n-2.0.0-M15.jar:/usr/hbase/bin/../lib/apacheds-kerberos-
> codec-2.0.0-M15.jar:/usr/hbase/bin/../lib/api-asn1-api-
> 1.0.0-M20.jar:/usr/hbase/bin/../lib/api-util-1.0.0-M20.jar:/
> usr/hbase/bin/../lib/asm-3.1.jar:/usr/hbase/bin/../lib/
> avro-1.7.4.jar:/usr/hbase/bin/../lib/batik-css-1.7.jar:/usr/
> hbase/bin/../lib/batik-ext-1.7.jar:/usr/hbase/bin/../lib/
> batik-util-1.7.jar:/usr/hbase/bin/../lib/bsh-core-2.0b4.jar:
> /usr/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/usr/
> hbase/bin/../lib/commons-beanutils-core-1.7.0.jar:/usr/
> hbase/bin/../lib/commons-cli-1.2.jar:/usr/hbase/bin/../lib/
> commons-codec-1.9.jar:/usr/hbase/bin/../lib/commons-
> collections-3.2.2.jar:/usr/hbase/bin/../lib/commons-
> compress-1.4.1.jar:/usr/hbase/bin/../lib/commons-
> configuration-1.6.jar:/usr/hbase/bin/../lib/commons-
> daemon-1.0.13.jar:/usr/hbase/bin/../lib/commons-digester-1.
> 8.jar:/usr/hbase/bin/../lib/commons-el-1.0.jar:/usr/hbase/
> bin/../lib/commons-fileupload-1.2.jar:/usr/hbase/bin/../lib/
> commons-httpclient-3.1.jar:/usr/hbase/bin/../lib/commons-
> io-2.4.jar:/usr/hbase/bin/../lib/commons-lang-2.6.jar:/usr/
> hbase/bin/../lib/commons-logging-1.2.jar:/usr/hbase/
> bin/../lib/commons-math-2.2.jar:/usr/hbase/bin/../lib/
> commons-math3-3.1.1.jar:/usr/hbase/bin/../lib/commons-net-
> 3.1.jar:/usr/hbase/bin/../lib/disruptor-3.3.0.jar:/usr/
> hbase/bin/../lib/esapi-2.1.0.jar:/usr/hbase/bin/../lib/
> findbugs-annotations-1.3.9-1.jar:/usr/hbase/bin/../lib/
> guava-12.0.1.jar:/usr/hbase/bin/../lib/guice-3.0.jar:/usr/
> hbase/bin/../lib/guice-servlet-3.0.jar:/usr/hbase/
> bin/../lib/hadoop-annotations-2.5.1.jar:/usr/hbase/bin/../
> lib/hadoop-auth-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-
> client-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-common-2.5.
> 1.jar:/usr/hbase/bin/../lib/hadoop-hdfs-2.5.1.jar:/usr/
> hbase/bin/../lib/hadoop-mapreduce-client-app-2.5.1.
> jar:/usr/hbase/bin/../lib/hadoop-mapreduce-client-
> common-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-mapreduce-
> client-core-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-
> mapreduce-client-jobclient-2.5.1.jar:/usr/hbase/bin/../lib/
> hadoop-mapreduce-client-shuffle-2.5.1.jar:/usr/hbase/
> bin/../lib/hadoop-yarn-api-2.5.1.jar:/usr/hbase/bin/../lib/
> hadoop-yarn-client-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-
> yarn-common-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-yarn-
> server-common-2.5.1.jar:/usr/hbase/bin/../lib/hbase-
> annotations-1.1.5.jar:/usr/hbase/bin/../lib/hbase-
> annotations-1.1.5-tests.jar:/usr/hbase/bin/../lib/hbase-
> client-1.1.5.jar:/usr/hbase/bin/../lib/hbase-common-1.1.5.
> jar:/usr/hbase/bin/../lib/hbase-common-1.1.5-tests.jar:/
> usr/hbase/bin/../lib/hbase-examples-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-hadoop2-compat-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-hadoop-compat-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-it-1.1.5.jar:/usr/hbase/bin/../lib/hbase-
> it-1.1.5-tests.jar:/usr/hbase/bin/../lib/hbase-prefix-tree-
> 1.1.5.jar:/usr/hbase/bin/../lib/hbase-procedure-1.1.5.jar:
> /usr/hbase/bin/../lib/hbase-protocol-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-resource-bundle-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-rest-1.1.5.jar:/usr/hbase/bin/../lib/
> hbase-server-1.1.5.jar:/usr/hbase/bin/../lib/hbase-server-
> 1.1.5-tests.jar:/usr/hbase/bin/../lib/hbase-shell-1.1.5.
> jar:/usr/hbase/bin/../lib/hbase-thrift-1.1.5.jar:/usr/
> hbase/bin/../lib/htrace-core-3.1.0-incubating.jar:/usr/
> hbase/bin/../lib/httpclient-4.2.5.jar:/usr/hbase/bin/../lib/
> httpcore-4.1.3.jar:/usr/hbase/bin/../lib/jackson-core-asl-1.
> 9.13.jar:/usr/hbase/bin/../lib/jackson-jaxrs-1.9.13.jar:/
> usr/hbase/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/
> hbase/bin/../lib/jackson-xc-1.9.13.jar:/usr/hbase/bin/../
> lib/jamon-runtime-2.3.1.jar:/usr/hbase/bin/../lib/jasper-
> compiler-5.5.23.jar:/usr/hbase/bin/../lib/jasper-
> runtime-5.5.23.jar:/usr/hbase/bin/../lib/javax.inject-1.jar:
> /usr/hbase/bin/../lib/java-xmlbuilder-0.4.jar:/usr/hbase/
> bin/../lib/jaxb-api-2.2.2.jar:/usr/hbase/bin/../lib/jaxb-
> impl-2.2.3-1.jar:/usr/hbase/bin/../lib/jcodings-1.0.8.jar:
> /usr/hbase/bin/../lib/jersey-client-1.9.jar:/usr/hbase/bin/
> ../lib/jersey-core-1.9.jar:/usr/hbase/bin/../lib/jersey-
> guice-1.9.jar:/usr/hbase/bin/../lib/jersey-json-1.9.jar:/
> usr/hbase/bin/../lib/jersey-server-1.9.jar:/usr/hbase/bin/
> ../lib/jets3t-0.9.0.jar:/usr/hbase/bin/../lib/jettison-1.3.
> 3.jar:/usr/hbase/bin/../lib/jetty-6.1.26.jar:/usr/hbase/
> bin/../lib/jetty-sslengine-6.1.26.jar:/usr/hbase/bin/../
> lib/jetty-util-6.1.26.jar:/usr/hbase/bin/../lib/joni-2.1.
> 2.jar:/usr/hbase/bin/../lib/jruby-complete-1.6.8.jar:/usr/
> hbase/bin/../lib/jsch-0.1.42.jar:/usr/hbase/bin/../lib/jsp-
> 2.1-6.1.14.jar:/usr/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:
> /usr/hbase/bin/../lib/jsr305-1.3.9.jar:/usr/hbase/bin/../
> lib/junit-4.12.jar:/usr/hbase/bin/../lib/leveldbjni-all-1.8.
> jar:/usr/hbase/bin/../lib/libthrift-0.9.0.jar:/usr/
> hbase/bin/../lib/log4j-1.2.17.jar:/usr/hbase/bin/../lib/
> metrics-core-2.2.0.jar:/usr/hbase/bin/../lib/nekohtml-1.9.
> 12.jar:/usr/hbase/bin/../lib/netty-3.2.4.Final.jar:/usr/
> hbase/bin/../lib/netty-all-4.0.23.Final.jar:/usr/hbase/bin/
> ../lib/paranamer-2.3.jar:/usr/hbase/bin/../lib/protobuf-
> java-2.5.0.jar:/usr/hbase/bin/../lib/servlet-api-2.5-6.1.14.
> jar:/usr/hbase/bin/../lib/servlet-api-2.5.jar:/usr/
> hbase/bin/../lib/slf4j-api-1.7.7.jar:/usr/hbase/bin/../lib/
> slf4j-log4j12-1.7.5.jar:/usr/hbase/bin/../lib/snappy-java-
> 1.0.4.1.jar:/usr/hbase/bin/../lib/spymemcached-2.11.6.jar:/
> usr/hbase/bin/../lib/xalan-2.7.0.jar:/usr/hbase/bin/../lib/
> xml-apis-1.3.03.jar:/usr/hbase/bin/../lib/xml-apis-ext-
> 1.3.04.jar:/usr/hbase/bin/../lib/xmlenc-0.52.jar:/usr/
> hbase/bin/../lib/xom-1.2.5.jar:/usr/hbase/bin/../lib/xz-
> 1.0.jar:/usr/hbase/bin/../lib/zookeeper-3.4.6.jar:/usr/
> hadoop/etc/hadoop:/usr/hadoop/share/hadoop/common/lib/*:/
> usr/hadoop/share/hadoop/common/*:/usr/hadoop/share/
> hadoop/hdfs:/usr/hadoop/share/hadoop/hdfs/lib/*:/usr/hadoop/
> share/hadoop/hdfs/*:/usr/hadoop/share/hadoop/yarn/lib/*
> :/usr/hadoop/share/hadoop/yarn/*:/usr/hadoop/share/
> hadoop/mapreduce/lib/*:/usr/hadoop/share/hadoop/mapreduce/
> *:/contrib/capacity-scheduler/*.jar
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:SSH_CONNECTION=192.168.0.132 37633 192.168.0.80 22
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HOSTNAME=nn1
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HADOOP_COMMON_LIB_NATIVE_DIR=/usr/hadoop/lib/native
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:XDG_RUNTIME_DIR=/run/user/1002
> 2016-08-05 13:04:22,250 INFO  [main] util.ServerCommandLine:
> env:HBASE_THRIFT_OPTS=
> 2016-08-05 13:04:22,251 INFO  [main] util.ServerCommandLine:
> env:HBASE_HOME=/usr/hbase/bin/..
> 2016-08-05 13:04:22,251 INFO  [main] util.ServerCommandLine:
> env:HOME=/usr/hadoop
> 2016-08-05 13:04:22,251 INFO  [main] util.ServerCommandLine:
> env:MALLOC_ARENA_MAX=4
> 2016-08-05 13:04:22,252 INFO  [main] util.ServerCommandLine: vmName=Java
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation,
> vmVersion=25.92-b14
> 2016-08-05 13:04:22,252 INFO  [main] util.ServerCommandLine:
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p,
> -XX:+UseConcMarkSweepGC, -XX:PermSize=128m, -XX:MaxPermSize=128m,
> -Dhbase.log.dir=/usr/hbase/bin/../logs,
> -Dhbase.log.file=hbase-hadoop-master-nn1.log,
> -Dhbase.home.dir=/usr/hbase/bin/.., -Dhbase.id.str=hadoop,
> -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/usr/hadoop/lib,
> -Dhbase.security.logger=INFO,RFAS]
> 2016-08-05 13:04:22,803 INFO  [main] regionserver.RSRpcServices:
> master/nn1/
> 192.168.0.80:16000 server-side HConnection retries=350
> 2016-08-05 13:04:23,047 INFO  [main] ipc.SimpleRpcScheduler: Using deadline
> as user call queue, count=3
> 2016-08-05 13:04:23,067 INFO  [main] ipc.RpcServer: master/nn1/
> 192.168.0.80:16000: started 10 reader(s) listening on port=16000
> 2016-08-05 13:04:23,199 INFO  [main] impl.MetricsConfig: loaded properties
> from hadoop-metrics2-hbase.properties
> 2016-08-05 13:04:23,329 INFO  [main] impl.MetricsSystemImpl: Scheduled
> snapshot period at 10 second(s).
> 2016-08-05 13:04:23,329 INFO  [main] impl.MetricsSystemImpl: HBase metrics
> system started
> 2016-08-05 13:04:23,532 WARN  [main] util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes where
> applicable
> 2016-08-05 13:04:24,328 INFO  [main] fs.HFileSystem: Added intercepting
> call to namenode#getBlockLocations so can do block reordering using class
> class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2016-08-05 13:04:24,726 INFO  [main] zookeeper.RecoverableZooKeeper:
> Process identifier=master:16000 connecting to ZooKeeper
> ensemble=nn1:2181,nn2:2181,dn1:2181
> 2016-08-05 13:04:24,747 INFO  [main] zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
> 2016-08-05 13:04:24,747 INFO  [main] zookeeper.ZooKeeper: Client
> environment:host.name=nn1
> 2016-08-05 13:04:24,747 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.version=1.8.0_92
> 2016-08-05 13:04:24,747 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.vendor=Oracle Corporation
> 2016-08-05 13:04:24,747 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.home=/usr/java/jdk1.8.0_92/jre
> 2016-08-05 13:04:24,747 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.class.path=/usr/hbase/bin/../conf:/usr/
> java/default//lib/tools.jar:/usr/hbase/bin/..:/usr/hbase/
> bin/../lib/activation-1.1.jar:/usr/hbase/bin/../lib/
> antisamy-1.4.3.jar:/usr/hbase/bin/../lib/aopalliance-1.0.
> jar:/usr/hbase/bin/../lib/apacheds-i18n-2.0.0-M15.jar:/
> usr/hbase/bin/../lib/apacheds-kerberos-codec-2.0.0-M15.jar:/
> usr/hbase/bin/../lib/api-asn1-api-1.0.0-M20.jar:/usr/hbase/
> bin/../lib/api-util-1.0.0-M20.jar:/usr/hbase/bin/../lib/asm-
> 3.1.jar:/usr/hbase/bin/../lib/avro-1.7.4.jar:/usr/hbase/bin/
> ../lib/batik-css-1.7.jar:/usr/hbase/bin/../lib/batik-ext-1.
> 7.jar:/usr/hbase/bin/../lib/batik-util-1.7.jar:/usr/hbase/
> bin/../lib/bsh-core-2.0b4.jar:/usr/hbase/bin/../lib/commons-
> beanutils-1.7.0.jar:/usr/hbase/bin/../lib/commons-
> beanutils-core-1.7.0.jar:/usr/hbase/bin/../lib/commons-cli-
> 1.2.jar:/usr/hbase/bin/../lib/commons-codec-1.9.jar:/usr/
> hbase/bin/../lib/commons-collections-3.2.2.jar:/usr/
> hbase/bin/../lib/commons-compress-1.4.1.jar:/usr/hbase/bin/../lib/commons-
> configuration-1.6.jar:/usr/hbase/bin/../lib/commons-
> daemon-1.0.13.jar:/usr/hbase/bin/../lib/commons-digester-1.
> 8.jar:/usr/hbase/bin/../lib/commons-el-1.0.jar:/usr/hbase/
> bin/../lib/commons-fileupload-1.2.jar:/usr/hbase/bin/../lib/
> commons-httpclient-3.1.jar:/usr/hbase/bin/../lib/commons-
> io-2.4.jar:/usr/hbase/bin/../lib/commons-lang-2.6.jar:/usr/
> hbase/bin/../lib/commons-logging-1.2.jar:/usr/hbase/
> bin/../lib/commons-math-2.2.jar:/usr/hbase/bin/../lib/
> commons-math3-3.1.1.jar:/usr/hbase/bin/../lib/commons-net-
> 3.1.jar:/usr/hbase/bin/../lib/disruptor-3.3.0.jar:/usr/
> hbase/bin/../lib/esapi-2.1.0.jar:/usr/hbase/bin/../lib/
> findbugs-annotations-1.3.9-1.jar:/usr/hbase/bin/../lib/
> guava-12.0.1.jar:/usr/hbase/bin/../lib/guice-3.0.jar:/usr/
> hbase/bin/../lib/guice-servlet-3.0.jar:/usr/hbase/
> bin/../lib/hadoop-annotations-2.5.1.jar:/usr/hbase/bin/../
> lib/hadoop-auth-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-
> client-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-common-2.5.
> 1.jar:/usr/hbase/bin/../lib/hadoop-hdfs-2.5.1.jar:/usr/
> hbase/bin/../lib/hadoop-mapreduce-client-app-2.5.1.
> jar:/usr/hbase/bin/../lib/hadoop-mapreduce-client-
> common-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-mapreduce-
> client-core-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-
> mapreduce-client-jobclient-2.5.1.jar:/usr/hbase/bin/../lib/
> hadoop-mapreduce-client-shuffle-2.5.1.jar:/usr/hbase/
> bin/../lib/hadoop-yarn-api-2.5.1.jar:/usr/hbase/bin/../lib/
> hadoop-yarn-client-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-
> yarn-common-2.5.1.jar:/usr/hbase/bin/../lib/hadoop-yarn-
> server-common-2.5.1.jar:/usr/hbase/bin/../lib/hbase-
> annotations-1.1.5.jar:/usr/hbase/bin/../lib/hbase-
> annotations-1.1.5-tests.jar:/usr/hbase/bin/../lib/hbase-
> client-1.1.5.jar:/usr/hbase/bin/../lib/hbase-common-1.1.5.
> jar:/usr/hbase/bin/../lib/hbase-common-1.1.5-tests.jar:/
> usr/hbase/bin/../lib/hbase-examples-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-hadoop2-compat-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-hadoop-compat-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-it-1.1.5.jar:/usr/hbase/bin/../lib/hbase-
> it-1.1.5-tests.jar:/usr/hbase/bin/../lib/hbase-prefix-tree-
> 1.1.5.jar:/usr/hbase/bin/../lib/hbase-procedure-1.1.5.jar:
> /usr/hbase/bin/../lib/hbase-protocol-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-resource-bundle-1.1.5.jar:/usr/hbase/
> bin/../lib/hbase-rest-1.1.5.jar:/usr/hbase/bin/../lib/
> hbase-server-1.1.5.jar:/usr/hbase/bin/../lib/hbase-server-
> 1.1.5-tests.jar:/usr/hbase/bin/../lib/hbase-shell-1.1.5.
> jar:/usr/hbase/bin/../lib/hbase-thrift-1.1.5.jar:/usr/
> hbase/bin/../lib/htrace-core-3.1.0-incubating.jar:/usr/
> hbase/bin/../lib/httpclient-4.2.5.jar:/usr/hbase/bin/../lib/
> httpcore-4.1.3.jar:/usr/hbase/bin/../lib/jackson-core-asl-1.
> 9.13.jar:/usr/hbase/bin/../lib/jackson-jaxrs-1.9.13.jar:/
> usr/hbase/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/
> hbase/bin/../lib/jackson-xc-1.9.13.jar:/usr/hbase/bin/../
> lib/jamon-runtime-2.3.1.jar:/usr/hbase/bin/../lib/jasper-
> compiler-5.5.23.jar:/usr/hbase/bin/../lib/jasper-
> runtime-5.5.23.jar:/usr/hbase/bin/../lib/javax.inject-1.jar:
> /usr/hbase/bin/../lib/java-xmlbuilder-0.4.jar:/usr/hbase/
> bin/../lib/jaxb-api-2.2.2.jar:/usr/hbase/bin/../lib/jaxb-
> impl-2.2.3-1.jar:/usr/hbase/bin/../lib/jcodings-1.0.8.jar:
> /usr/hbase/bin/../lib/jersey-client-1.9.jar:/usr/hbase/bin/
> ../lib/jersey-core-1.9.jar:/usr/hbase/bin/../lib/jersey-
> guice-1.9.jar:/usr/hbase/bin/../lib/jersey-json-1.9.jar:/
> usr/hbase/bin/../lib/jersey-server-1.9.jar:/usr/hbase/bin/
> ../lib/jets3t-0.9.0.jar:/usr/hbase/bin/../lib/jettison-1.3.
> 3.jar:/usr/hbase/bin/../lib/jetty-6.1.26.jar:/usr/hbase/
> bin/../lib/jetty-sslengine-6.1.26.jar:/usr/hbase/bin/../
> lib/jetty-util-6.1.26.jar:/usr/hbase/bin/../lib/joni-2.1.
> 2.jar:/usr/hbase/bin/../lib/jruby-complete-1.6.8.jar:/usr/
> hbase/bin/../lib/jsch-0.1.42.jar:/usr/hbase/bin/../lib/jsp-
> 2.1-6.1.14.jar:/usr/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:
> /usr/hbase/bin/../lib/jsr305-1.3.9.jar:/usr/hbase/bin/../
> lib/junit-4.12.jar:/usr/hbase/bin/../lib/leveldbjni-all-1.8.
> jar:/usr/hbase/bin/../lib/libthrift-0.9.0.jar:/usr/
> hbase/bin/../lib/log4j-1.2.17.jar:/usr/hbase/bin/../lib/
> metrics-core-2.2.0.jar:/usr/hbase/bin/../lib/nekohtml-1.9.
> 12.jar:/usr/hbase/bin/../lib/netty-3.2.4.Final.jar:/usr/
> hbase/bin/../lib/netty-all-4.0.23.Final.jar:/usr/hbase/bin/
> ../lib/paranamer-2.3.jar:/usr/hbase/bin/../lib/protobuf-
> java-2.5.0.jar:/usr/hbase/bin/../lib/servlet-api-2.5-6.1.14.
> jar:/usr/hbase/bin/../lib/servlet-api-2.5.jar:/usr/
> hbase/bin/../lib/slf4j-api-1.7.7.jar:/usr/hbase/bin/../lib/
> slf4j-log4j12-1.7.5.jar:/usr/hbase/bin/../lib/snappy-java-
> 1.0.4.1.jar:/usr/hbase/bin/../lib/spymemcached-2.11.6.jar:/
> usr/hbase/bin/../lib/xalan-2.7.0.jar:/usr/hbase/bin/../lib/
> xml-apis-1.3.03.jar:/usr/hbase/bin/../lib/xml-apis-ext-
> 1.3.04.jar:/usr/hbase/bin/../lib/xmlenc-0.52.jar:/usr/
> hbase/bin/../lib/xom-1.2.5.jar:/usr/hbase/bin/../lib/xz-
> 1.0.jar:/usr/hbase/bin/../lib/zookeeper-3.4.6.jar:/usr/
> hadoop/etc/hadoop:/usr/hadoop/share/hadoop/common/lib/
> commons-math3-3.1.1.jar:/usr/hadoop/share/hadoop/common/
> lib/jetty-util-6.1.26.jar:/usr/hadoop/share/hadoop/
> common/lib/slf4j-log4j12-1.7.10.jar:/usr/hadoop/share/
> hadoop/common/lib/netty-3.6.2.Final.jar:/usr/hadoop/share/
> hadoop/common/lib/commons-compress-1.4.1.jar:/usr/
> hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/
> usr/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/
> usr/hadoop/share/hadoop/common/lib/commons-beanutils-
> core-1.8.0.jar:/usr/hadoop/share/hadoop/common/lib/guava-
> 11.0.2.jar:/usr/hadoop/share/hadoop/common/lib/commons-
> digester-1.8.jar:/usr/hadoop/share/hadoop/common/lib/jetty-
> 6.1.26.jar:/usr/hadoop/share/hadoop/common/lib/hamcrest-
> core-1.3.jar:/usr/hadoop/share/hadoop/common/lib/
> hadoop-auth-2.7.2.jar:/usr/hadoop/share/hadoop/common/
> lib/zookeeper-3.4.6.jar:/usr/hadoop/share/hadoop/common/
> lib/mockito-all-1.8.5.jar:/usr/hadoop/share/hadoop/
> common/lib/log4j-1.2.17.jar:/usr/hadoop/share/hadoop/
> common/lib/hadoop-annotations-2.7.2.jar:/usr/hadoop/share/
> hadoop/common/lib/jsr305-3.0.0.jar:/usr/hadoop/share/
> hadoop/common/lib/httpclient-4.2.5.jar:/usr/hadoop/share/
> hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/hadoop/share/hadoop/common/
> lib/asm-3.2.jar:/usr/hadoop/share/hadoop/common/lib/
> curator-framework-2.7.1.jar:/usr/hadoop/share/hadoop/
> common/lib/api-util-1.0.0-M20.jar:/usr/hadoop/share/hadoop/
> common/lib/jettison-1.1.jar:/usr/hadoop/share/hadoop/
> common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/
> hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/
> usr/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.
> jar:/usr/hadoop/share/hadoop/common/lib/curator-recipes-2.
> 7.1.jar:/usr/hadoop/share/hadoop/common/lib/curator-
> client-2.7.1.jar:/usr/hadoop/share/hadoop/common/lib/slf4j-
> api-1.7.10.jar:/usr/hadoop/share/hadoop/common/lib/
> jersey-core-1.9.jar:/usr/hadoop/share/hadoop/common/
> lib/jersey-server-1.9.jar:/usr/hadoop/share/hadoop/
> common/lib/jersey-json-1.9.jar:/usr/hadoop/share/hadoop/
> common/lib/httpcore-4.2.5.jar:/usr/hadoop/share/hadoop/
> common/lib/paranamer-2.3.jar:/usr/hadoop/share/hadoop/
> common/lib/htrace-core-3.1.0-incubating.jar:/usr/hadoop/
> share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/hadoop/
> share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/
> usr/hadoop/share/hadoop/common/lib/commons-lang-2.6.
> jar:/usr/hadoop/share/hadoop/common/lib/commons-net-3.1.
> jar:/usr/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:
> /usr/hadoop/share/hadoop/common/lib/activation-1.1.jar:
> /usr/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.
> 13.jar:/usr/hadoop/share/hadoop/common/lib/stax-api-1.
> 0-2.jar:/usr/hadoop/share/hadoop/common/lib/commons-
> collections-3.2.2.jar:/usr/hadoop/share/hadoop/common/
> lib/xmlenc-0.52.jar:/usr/hadoop/share/hadoop/common/
> lib/commons-io-2.4.jar:/usr/hadoop/share/hadoop/common/
> lib/apacheds-i18n-2.0.0-M15.jar:/usr/hadoop/share/hadoop/
> common/lib/commons-cli-1.2.jar:/usr/hadoop/share/hadoop/
> common/lib/jackson-core-asl-1.9.13.jar:/usr/hadoop/share/
> hadoop/common/lib/commons-configuration-1.6.jar:/usr/
> hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/
> hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:
> /usr/hadoop/share/hadoop/common/lib/jackson-mapper-asl-
> 1.9.13.jar:/usr/hadoop/share/hadoop/common/lib/jaxb-impl-2.
> 2.3-1.jar:/usr/hadoop/share/hadoop/common/lib/avro-1.7.4.
> jar:/usr/hadoop/share/hadoop/common/lib/commons-httpclient-
> 3.1.jar:/usr/hadoop/share/hadoop/common/lib/xz-1.0.jar:/
> usr/hadoop/share/hadoop/common/lib/junit-4.11.jar:/
> usr/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-
> M20.jar:/usr/hadoop/share/hadoop/common/lib/servlet-api-
> 2.5.jar:/usr/hadoop/share/hadoop/common/lib/jsp-api-2.1.
> jar:/usr/hadoop/share/hadoop/common/lib/protobuf-java-2.5.
> 0.jar:/usr/hadoop/share/hadoop/common/hadoop-common-2.
> 7.2-tests.jar:/usr/hadoop/share/hadoop/common/hadoop-
> common-2.7.2.jar:/usr/hadoop/share/hadoop/common/hadoop-
> nfs-2.7.2.jar:/usr/hadoop/share/hadoop/hdfs:/usr/hadoop/
> share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/hadoop/
> share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/hadoop/
> share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/hadoop/share/
> hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/hadoop/share/hadoop/
> hdfs/lib/log4j-1.2.17.jar:/usr/hadoop/share/hadoop/hdfs/
> lib/jsr305-3.0.0.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> asm-3.2.jar:/usr/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.
> 23.Final.jar:/usr/hadoop/share/hadoop/hdfs/lib/commons-
> codec-1.4.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> leveldbjni-all-1.8.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> jersey-core-1.9.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> jersey-server-1.9.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> htrace-core-3.1.0-incubating.jar:/usr/hadoop/share/hadoop/
> hdfs/lib/xml-apis-1.3.04.jar:/usr/hadoop/share/hadoop/hdfs/
> lib/commons-lang-2.6.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> xmlenc-0.52.jar:/usr/hadoop/share/hadoop/hdfs/lib/commons-
> io-2.4.jar:/usr/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.
> 9.1.jar:/usr/hadoop/share/hadoop/hdfs/lib/commons-cli-1.
> 2.jar:/usr/hadoop/share/hadoop/hdfs/lib/jackson-core-
> asl-1.9.13.jar:/usr/hadoop/share/hadoop/hdfs/lib/commons-
> logging-1.1.3.jar:/usr/hadoop/share/hadoop/hdfs/lib/jackson-
> mapper-asl-1.9.13.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> commons-daemon-1.0.13.jar:/usr/hadoop/share/hadoop/hdfs/
> lib/servlet-api-2.5.jar:/usr/hadoop/share/hadoop/hdfs/lib/
> protobuf-java-2.5.0.jar:/usr/hadoop/share/hadoop/hdfs/
> hadoop-hdfs-nfs-2.7.2.jar:/usr/hadoop/share/hadoop/hdfs/
> hadoop-hdfs-2.7.2-tests.jar:/usr/hadoop/share/hadoop/hdfs/
> hadoop-hdfs-2.7.2.jar:/usr/hadoop/share/hadoop/yarn/lib/
> jersey-guice-1.9.jar:/usr/hadoop/share/hadoop/yarn/lib/
> jetty-util-6.1.26.jar:/usr/hadoop/share/hadoop/yarn/lib/
> zookeeper-3.4.6-tests.jar:/usr/hadoop/share/hadoop/yarn/
> lib/netty-3.6.2.Final.jar:/usr/hadoop/share/hadoop/yarn/
> lib/commons-compress-1.4.1.jar:/usr/hadoop/share/hadoop/
> yarn/lib/guava-11.0.2.jar:/usr/hadoop/share/hadoop/yarn/
> lib/jetty-6.1.26.jar:/usr/hadoop/share/hadoop/yarn/lib/
> guice-servlet-3.0.jar:/usr/hadoop/share/hadoop/yarn/lib/
> zookeeper-3.4.6.jar:/usr/hadoop/share/hadoop/yarn/lib/
> log4j-1.2.17.jar:/usr/hadoop/share/hadoop/yarn/lib/javax.
> inject-1.jar:/usr/hadoop/share/hadoop/yarn/lib/jsr305-
> 3.0.0.jar:/usr/hadoop/share/hadoop/yarn/lib/aopalliance-1.
> 0.jar:/usr/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/
> usr/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/
> hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/
> hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/
> hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/
> hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/
> hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/
> hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/
> hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/
> hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/
> hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/
> hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/
> hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/
> hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:
> /usr/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/
> usr/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/
> hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/
> hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/
> usr/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:
> /usr/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.
> jar:/usr/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.
> jar:/usr/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/
> hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/
> hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/hadoop/
> share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/
> hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.
> 2.jar:/usr/hadoop/share/hadoop/yarn/hadoop-yarn-
> server-web-proxy-2.7.2.jar:/usr/hadoop/share/hadoop/yarn/
> hadoop-yarn-registry-2.7.2.jar:/usr/hadoop/share/hadoop/
> yarn/hadoop-yarn-api-2.7.2.jar:/usr/hadoop/share/hadoop/
> yarn/hadoop-yarn-server-common-2.7.2.jar:/usr/hadoop/
> share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-
> launcher-2.7.2.jar:/usr/hadoop/share/hadoop/yarn/
> hadoop-yarn-server-tests-2.7.2.jar:/usr/hadoop/share/
> hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.
> 2.jar:/usr/hadoop/share/hadoop/yarn/hadoop-yarn-
> server-nodemanager-2.7.2.jar:/usr/hadoop/share/hadoop/yarn/
> hadoop-yarn-common-2.7.2.jar:/usr/hadoop/share/hadoop/yarn/
> hadoop-yarn-applications-distributedshell-2.7.2.jar:/
> usr/hadoop/share/hadoop/yarn/hadoop-yarn-server-
> resourcemanager-2.7.2.jar:/usr/hadoop/share/hadoop/yarn/
> hadoop-yarn-client-2.7.2.jar:/usr/hadoop/share/hadoop/
> mapreduce/lib/jersey-guice-1.9.jar:/usr/hadoop/share/
> hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/hadoop/
> share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/
> usr/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.
> 4.1.jar:/usr/hadoop/share/hadoop/mapreduce/lib/hamcrest-
> core-1.3.jar:/usr/hadoop/share/hadoop/mapreduce/lib/
> guice-servlet-3.0.jar:/usr/hadoop/share/hadoop/mapreduce/
> lib/log4j-1.2.17.jar:/usr/hadoop/share/hadoop/mapreduce/
> lib/hadoop-annotations-2.7.2.jar:/usr/hadoop/share/hadoop/
> mapreduce/lib/javax.inject-1.jar:/usr/hadoop/share/hadoop/
> mapreduce/lib/aopalliance-1.0.jar:/usr/hadoop/share/hadoop/
> mapreduce/lib/asm-3.2.jar:/usr/hadoop/share/hadoop/
> mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hadoop/share/
> hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/hadoop/
> share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/
> hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/
> hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/
> hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.
> jar:/usr/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-
> asl-1.9.13.jar:/usr/hadoop/share/hadoop/mapreduce/lib/
> avro-1.7.4.jar:/usr/hadoop/share/hadoop/mapreduce/lib/xz-
> 1.0.jar:/usr/hadoop/share/hadoop/mapreduce/lib/junit-4.
> 11.jar:/usr/hadoop/share/hadoop/mapreduce/lib/guice-3.
> 0.jar:/usr/hadoop/share/hadoop/mapreduce/lib/protobuf-
> java-2.5.0.jar:/usr/hadoop/share/hadoop/mapreduce/hadoop-
> mapreduce-client-app-2.7.2.jar:/usr/hadoop/share/hadoop/
> mapreduce/hadoop-mapreduce-client-hs-2.7.2.jar:/usr/
> hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-
> jobclient-2.7.2-tests.jar:/usr/hadoop/share/hadoop/
> mapreduce/hadoop-mapreduce-client-jobclient-2.7.2.jar:/
> usr/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-
> client-shuffle-2.7.2.jar:/usr/hadoop/share/hadoop/mapreduce/
> hadoop-mapreduce-examples-2.7.2.jar:/usr/hadoop/share/
> hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.2.
> jar:/usr/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-
> client-hs-plugins-2.7.2.jar:/usr/hadoop/share/hadoop/
> mapreduce/hadoop-mapreduce-client-core-2.7.2.jar:/
> contrib/capacity-scheduler/*.jar
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.library.path=/usr/hadoop/lib
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:os.version=3.10.0-327.4.5.el7.x86_64
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:user.name=hadoop
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:user.home=/usr/hadoop
> 2016-08-05 13:04:24,748 INFO  [main] zookeeper.ZooKeeper: Client
> environment:user.dir=/usr/hadoop
> 2016-08-05 13:04:24,749 INFO  [main] zookeeper.ZooKeeper: Initiating client
> connection, connectString=nn1:2181,nn2:2181,dn1:2181 sessionTimeout=90000
> watcher=master:160000x0, quorum=nn1:2181,nn2:2181,dn1:2181,
> baseZNode=/hbase
> 2016-08-05 13:04:24,786 INFO  [main-SendThread(nn2:2181)]
> zookeeper.ClientCnxn: Opening socket connection to server nn2/
> 192.168.0.81:2181. Will not attempt to authenticate using SASL (unknown
> error)
> 2016-08-05 13:04:24,803 INFO  [main-SendThread(nn2:2181)]
> zookeeper.ClientCnxn: Socket connection established to nn2/
> 192.168.0.81:2181,
> initiating session
> 2016-08-05 13:04:25,052 INFO  [main-SendThread(nn2:2181)]
> zookeeper.ClientCnxn: Session establishment complete on server nn2/
> 192.168.0.81:2181, sessionid = 0x2565a26b8a70000, negotiated timeout =
> 40000
> 2016-08-05 13:04:25,423 INFO  [RpcServer.responder] ipc.RpcServer:
> RpcServer.responder: starting
> 2016-08-05 13:04:25,443 INFO  [RpcServer.listener,port=16000]
> ipc.RpcServer: RpcServer.listener,port=16000: starting
> 2016-08-05 13:04:25,608 INFO  [main] mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2016-08-05 13:04:25,611 INFO  [main] http.HttpRequestLog: Http request log
> for http.requests.master is not defined
> 2016-08-05 13:04:25,621 INFO  [main] http.HttpServer: Added global filter
> 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$
> QuotingInputFilter)
> 2016-08-05 13:04:25,622 INFO  [main] http.HttpServer: Added filter
> static_user_filter
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$
> StaticUserFilter)
> to context master
> 2016-08-05 13:04:25,622 INFO  [main] http.HttpServer: Added filter
> static_user_filter
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$
> StaticUserFilter)
> to context static
> 2016-08-05 13:04:25,622 INFO  [main] http.HttpServer: Added filter
> static_user_filter
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$
> StaticUserFilter)
> to context logs
> 2016-08-05 13:04:25,661 INFO  [main] http.HttpServer: Jetty bound to port
> 16010
> 2016-08-05 13:04:25,661 INFO  [main] mortbay.log: jetty-6.1.26
> 2016-08-05 13:04:26,237 INFO  [main] mortbay.log: Started
> SelectChannelConnector@0.0.0.0:16010
> 2016-08-05 13:04:26,240 INFO  [main] master.HMaster:
> hbase.rootdir=hdfs://mycluster/hbase, hbase.cluster.distributed=true
> 2016-08-05 13:04:26,274 INFO  [main] master.HMaster: Adding backup master
> ZNode /hbase/backup-masters/nn1,16000,1470391463361
> 2016-08-05 13:04:26,708 INFO  [master/nn1/192.168.0.80:16000]
> zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x67159c45
> connecting to ZooKeeper ensemble=nn1:2181,nn2:2181,dn1:2181
> 2016-08-05 13:04:26,708 INFO  [master/nn1/192.168.0.80:16000]
> zookeeper.ZooKeeper: Initiating client connection,
> connectString=nn1:2181,nn2:2181,dn1:2181 sessionTimeout=90000
> watcher=hconnection-0x67159c450x0, quorum=nn1:2181,nn2:2181,dn1:2181,
> baseZNode=/hbase
> 2016-08-05 13:04:26,737 INFO
> [master/nn1/192.168.0.80:16000-SendThread(dn1:2181)]
> zookeeper.ClientCnxn: Opening socket connection to server dn1/
> 192.168.0.82:2181. Will not attempt to authenticate using SASL (unknown
> error)
> 2016-08-05 13:04:26,738 INFO  [nn1:16000.activeMasterManager]
> master.ActiveMasterManager: Deleting ZNode for
> /hbase/backup-masters/nn1,16000,1470391463361 from backup master directory
> 2016-08-05 13:04:26,753 INFO
> [master/nn1/192.168.0.80:16000-SendThread(dn1:2181)]
> zookeeper.ClientCnxn: Socket connection established to dn1/
> 192.168.0.82:2181,
> initiating session
> 2016-08-05 13:04:26,756 INFO  [nn1:16000.activeMasterManager]
> master.ActiveMasterManager: Registered Active
> Master=nn1,16000,1470391463361
> 2016-08-05 13:04:26,770 INFO
> [master/nn1/192.168.0.80:16000-SendThread(dn1:2181)]
> zookeeper.ClientCnxn: Session establishment complete on server dn1/
> 192.168.0.82:2181, sessionid = 0x3565a007087000c, negotiated timeout =
> 40000
> 2016-08-05 13:04:26,802 INFO  [master/nn1/192.168.0.80:16000]
> client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
> 2016-08-05 13:04:27,284 WARN  [Thread-68] hdfs.DFSClient: DataStreamer
> Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of
> minReplication (=1).  There are 0 datanode(s) running and no node(s) are
> excluded in this operation.
>         at
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.
> chooseTarget4NewBlock(BlockManager.java:1547)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(
> FSNamesystem.java:3107)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
> FSNamesystem.java:3031)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.
> addBlock(NameNodeRpcServer.java:724)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSi
> deTranslatorPB.addBlock(ClientNamenodeProtocolServerSi
> deTranslatorPB.java:492)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$
> ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.
> java)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(
> ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.
> invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslat
> orPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(
> RetryInvocationHandler.java:187)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(
> RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>         at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(
> DFSOutputStream.java:1449)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(
> DFSOutputStream.java:1270)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> run(DFSOutputStream.java:526)
> 2016-08-05 13:04:37,332 WARN  [Thread-71] hdfs.DFSClient: DataStreamer
> Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of
> minReplication (=1).  There are 0 datanode(s) running and no node(s) are
> excluded in this operation.
>         at
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.
> chooseTarget4NewBlock(BlockManager.java:1547)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(
> FSNamesystem.java:3107)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
> FSNamesystem.java:3031)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.
> addBlock(NameNodeRpcServer.java:724)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSi
> deTranslatorPB.addBlock(ClientNamenodeProtocolServerSi
> deTranslatorPB.java:492)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$
> ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.
> java)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(
> ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.
> invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslat
> orPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(
> RetryInvocationHandler.java:187)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(
> RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>         at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(
> DFSOutputStream.java:1449)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(
> DFSOutputStream.java:1270)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> run(DFSOutputStream.java:526)
> 2016-08-05 13:04:47,370 WARN  [Thread-73] hdfs.DFSClient: DataStreamer
> Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of
> minReplication (=1).  There are 0 datanode(s) running and no node(s) are
> excluded in this operation.
>         at
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.
> chooseTarget4NewBlock(BlockManager.java:1547)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(
> FSNamesystem.java:3107)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
> FSNamesystem.java:3031)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.
> addBlock(NameNodeRpcServer.java:724)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSi
> deTranslatorPB.addBlock(ClientNamenodeProtocolServerSi
> deTranslatorPB.java:492)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$
> ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.
> java)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(
> ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.
> invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslat
> orPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(
> RetryInvocationHandler.java:187)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(
> RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>         at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(
> DFSOutputStream.java:1449)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(
> DFSOutputStream.java:1270)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> run(DFSOutputStream.java:526)
> 2016-08-05 13:04:57,406 WARN  [Thread-74] hdfs.DFSClient: DataStreamer
> Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of
> minReplication (=1).  There are 0 datanode(s) running and no node(s) are
> excluded in this operation.
>         at
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.
> chooseTarget4NewBlock(BlockManager.java:1547)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(
> FSNamesystem.java:3107)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
> FSNamesystem.java:3031)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.
> addBlock(NameNodeRpcServer.java:724)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSi
> deTranslatorPB.addBlock(ClientNamenodeProtocolServerSi
> deTranslatorPB.java:492)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$
> ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.
> java)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(
> ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.
> invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslat
> orPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(
> RetryInvocationHandler.java:187)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(
> RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>         at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(
> DFSOutputStream.java:1449)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(
> DFSOutputStream.java:1270)
>         at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> run(DFSOutputStream.java:526)
> 2016-08-05 13:04:57,412 FATAL [nn1:16000.activeMasterManager] master.HMaster: Failed to become active master
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
>         at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1449)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1270)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
> 2016-08-05 13:04:57,412 FATAL [nn1:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
>         [stack trace identical to the one above]
> 2016-08-05 13:04:57,412 INFO  [nn1:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
> 2016-08-05 13:05:00,070 INFO  [master/nn1/192.168.0.80:16000] ipc.RpcServer: Stopping server on 16000
> 2016-08-05 13:05:00,070 INFO  [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: stopping
> 2016-08-05 13:05:00,071 INFO  [master/nn1/192.168.0.80:16000] regionserver.HRegionServer: Stopping infoServer
> 2016-08-05 13:05:00,084 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
> 2016-08-05 13:05:00,085 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
> 2016-08-05 13:05:00,132 INFO  [master/nn1/192.168.0.80:16000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16010
> 2016-08-05 13:05:00,136 INFO  [master/nn1/192.168.0.80:16000] regionserver.HRegionServer: stopping server nn1,16000,1470391463361
> 2016-08-05 13:05:00,136 INFO  [master/nn1/192.168.0.80:16000] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3565a007087000c
> 2016-08-05 13:05:00,140 INFO  [master/nn1/192.168.0.80:16000] zookeeper.ZooKeeper: Session: 0x3565a007087000c closed
> 2016-08-05 13:05:00,140 INFO  [master/nn1/192.168.0.80:16000-EventThread] zookeeper.ClientCnxn: EventThread shut down
> 2016-08-05 13:05:00,140 INFO  [master/nn1/192.168.0.80:16000] regionserver.HRegionServer: stopping server nn1,16000,1470391463361; all regions closed.
> 2016-08-05 13:05:00,141 INFO  [master/nn1/192.168.0.80:16000] hbase.ChoreService: Chore service for: nn1,16000,1470391463361 had [] on shutdown
> 2016-08-05 13:05:00,148 INFO  [master/nn1/192.168.0.80:16000] ipc.RpcServer: Stopping server on 16000
> 2016-08-05 13:05:00,158 INFO  [master/nn1/192.168.0.80:16000] zookeeper.RecoverableZooKeeper: Node /hbase/rs/nn1,16000,1470391463361 already deleted, retry=false
> 2016-08-05 13:05:00,170 INFO  [master/nn1/192.168.0.80:16000] zookeeper.ZooKeeper: Session: 0x2565a26b8a70000 closed
> 2016-08-05 13:05:00,170 INFO  [master/nn1/192.168.0.80:16000] regionserver.HRegionServer: stopping server nn1,16000,1470391463361; zookeeper connection closed.
> 2016-08-05 13:05:00,170 INFO  [master/nn1/192.168.0.80:16000] regionserver.HRegionServer: master/nn1/192.168.0.80:16000 exiting
> 2016-08-05 13:05:00,172 WARN  [main-EventThread] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=nn1:2181,nn2:2181,dn1:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/master
>
> Best regards,
> Alexandr
>
> On Thu, Aug 4, 2016 at 11:04 PM, Dima Spivak <dspi...@cloudera.com> wrote:
>
> > Hey Alexandr,
> >
> > In that case, you'd use what you have set in your hdfs-site.xml for
> > the dfs.nameservices property (followed by the HBase directory under
> HDFS).
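> >
> > For example, if dfs.nameservices were set to a logical name such as
> > "hdfscluster" (just a placeholder here; use whatever your cluster
> > defines), hbase-site.xml would point at the nameservice instead of a
> > single name node:
> >
> > <property>
> >   <name>hbase.rootdir</name>
> >   <value>hdfs://hdfscluster/hbase</value>
> > </property>
> >
> > Note that there's no host:port in the URI; the HDFS client resolves the
> > logical name to whichever name node is currently active.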
> >
> > -Dima
> >
> > On Thu, Aug 4, 2016 at 12:54 PM, Alexandr Porunov <
> > alexandr.poru...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > > I don't understand one parameter in hbase-site.xml:
> > >
> > > <property>
> > >   <name>hbase.rootdir</name>
> > >   <value>hdfs://hdfsHost:8020/hbase</value>
> > > </property>
> > >
> > > What do we have to put in that parameter if we configured the HDFS
> > > cluster in HA mode? I mean, we have 2 name nodes (nn1, nn2) and 2 data
> > > nodes (dn1, dn2), so which node do we have to use in the
> > > "hbase.rootdir" parameter?
> > >
> > > The most logical answer is the name node which is currently active. But
> > > if we use the active name node and it fails, the HBase cluster becomes
> > > unavailable even if nn2 changes its status to active; the HBase cluster
> > > will not understand that the active NN has changed.
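> > >
> > > As far as I understand, for failover to be transparent the clients need
> > > the HA section of hdfs-site.xml; a sketch of what I mean ("hdfscluster"
> > > is just an example nameservice name):
> > >
> > > <property>
> > >   <name>dfs.nameservices</name>
> > >   <value>hdfscluster</value>
> > > </property>
> > > <property>
> > >   <name>dfs.ha.namenodes.hdfscluster</name>
> > >   <value>nn1,nn2</value>
> > > </property>
> > > <property>
> > >   <name>dfs.namenode.rpc-address.hdfscluster.nn1</name>
> > >   <value>nn1:8020</value>
> > > </property>
> > > <property>
> > >   <name>dfs.namenode.rpc-address.hdfscluster.nn2</name>
> > >   <value>nn2:8020</value>
> > > </property>
> > > <property>
> > >   <name>dfs.client.failover.proxy.provider.hdfscluster</name>
> > >   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> > > </property>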
> > >
> > > Moreover, I have configured the HBase cluster with the following
> > >
> > > <property>
> > >   <name>hbase.rootdir</name>
> > >   <value>hdfs://nn1:8020/hbase</value>
> > > </property>
> > >
> > > It doesn't work:
> > > 1. HMaster starts
> > > 2. I put "http://nn1:16010" into the browser
> > > 3. HMaster disappears
> > >
> > > Here is my logs/hbase-hadoop-master-nn1.log:
> > > http://paste.openstack.org/show/549232/
> > >
> > > Please help me figure out how to configure it.
> > >
> > > Sincerely,
> > >
> > > Alexandr
> > >
> >
> >
> >
> > --
> > -Dima
> >
>
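By the way, the FATAL in your log is an HDFS-side error: "could only be
replicated to 0 nodes ... There are 0 datanode(s) running". HMaster is
shutting down because it can't write /hbase/.tmp/hbase.version to HDFS at
all, so it's probably worth confirming that your datanodes have actually
registered with the active name node before touching the HBase side, e.g.:

  hdfs dfsadmin -report

If that reports 0 live datanodes, the problem is in HDFS rather than HBase.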


-- 
-Dima
