Hi all,

I recently upgraded my Hadoop cluster from hadoop-2.0.0-cdh4.3.0 to hadoop-2.5.0-cdh5.2.0. It works fine, but there is one small problem: when I run hadoop fs -ls in the terminal to list files in HDFS, the response now takes 10+ seconds, compared to 2-3 seconds before the upgrade (the get command is slow too). Can anyone explain what might be causing this, or which configuration might be wrong?

Below is the debug log:
15/04/08 10:51:18 DEBUG util.Shell: setsid exited with exit code 0
15/04/08 10:51:18 DEBUG conf.Configuration: parsing URL jar:file:/data/dbcenter/cdh5/hadoop-2.5.0-cdh5.2.0/share/hadoop/common/hadoop-common-2.5.0-cdh5.2.0.jar!/core-default.xml
15/04/08 10:51:18 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@57316e85
15/04/08 10:51:18 DEBUG conf.Configuration: parsing URL file:/data/dbcenter/cdh5/hadoop-2.5.0-cdh5.2.0/etc/hadoop/core-site.xml
15/04/08 10:51:18 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@31818dbc
15/04/08 10:51:19 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
15/04/08 10:51:19 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
15/04/08 10:51:19 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
15/04/08 10:51:19 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
15/04/08 10:51:19 DEBUG security.Groups: Creating new Groups object
15/04/08 10:51:19 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
15/04/08 10:51:19 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
15/04/08 10:51:19 DEBUG util.NativeCodeLoader: java.library.path=/data/dbcenter/cdh5/hadoop-2.5.0-cdh5.2.0/lib/native
15/04/08 10:51:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/08 10:51:19 DEBUG util.PerformanceAdvisory: Falling back to shell based
15/04/08 10:51:19 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
15/04/08 10:51:19 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
15/04/08 10:51:19 DEBUG security.UserGroupInformation: hadoop login
15/04/08 10:51:19 DEBUG security.UserGroupInformation: hadoop login commit
15/04/08 10:51:19 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: test
15/04/08 10:51:19 DEBUG security.UserGroupInformation: UGI loginUser:test (auth:SIMPLE)
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
15/04/08 10:51:19 DEBUG hdfs.DFSClient: No KeyProvider found.
15/04/08 10:51:19 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://tccluster:8020
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/04/08 10:51:19 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
15/04/08 10:51:19 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
15/04/08 10:51:19 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@5a2611a6
15/04/08 10:51:19 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
15/04/08 10:51:30 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
15/04/08 10:51:30 DEBUG ipc.Client: The ping interval is 60000 ms.
15/04/08 10:51:30 DEBUG ipc.Client: Connecting to master/192.168.1.13:8020
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test: starting, having connections 1
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test sending #0
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test got value #0
15/04/08 10:51:30 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 88ms
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test sending #1
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test got value #1
15/04/08 10:51:30 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 2ms
15/04/08 10:51:30 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@285d4a6a
15/04/08 10:51:30 DEBUG ipc.Client: Stopping client
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test: closed
15/04/08 10:51:30 DEBUG ipc.Client: IPC Client (246890776) connection to master/192.168.1.13:8020 from test: stopped, remaining connections 0