It looks like / is owned by hadoop.supergroup and the perms are 755. You could
precreate /accumulo and chown it appropriately, or set the perms for / to 775.
Init is trying to create /accumulo in HDFS as the accumulo user, and your perms
don't allow it.
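For example, run as the HDFS superuser (a sketch; the accumulo user/group names are assumptions, adjust to your environment):

```shell
# Option 1: pre-create /accumulo and hand ownership to the accumulo user
hdfs dfs -mkdir /accumulo
hdfs dfs -chown accumulo:accumulo /accumulo

# Option 2: loosen the permissions on / instead
hdfs dfs -chmod 775 /
```

Either option lets init create what it needs under /accumulo without running it as the HDFS superuser.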
Do you have instance.volumes set in
I believe there was an issue fixed in 2.5 or 2.6 where the standby NN would not
process block reports from the DNs when it was dealing with the checkpoint
process. The missing blocks will get reported eventually.
-------- Original message --------
From: Chen Song
I'm having an issue in client code where there are multiple clusters with HA
namenodes involved. Example setup using Hadoop 2.3.0:
Cluster A with the following properties defined in core-site.xml, hdfs-site.xml, etc.:
dfs.nameservices=clusterA
dfs.ha.namenodes.clusterA=nn1,nn2
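For reference, a minimal HA client configuration for clusterA might look like the following in hdfs-site.xml (a sketch; the hostnames are placeholders, not from the original message):

```xml
<property>
  <name>dfs.nameservices</name>
  <value>clusterA</value>
</property>
<property>
  <name>dfs.ha.namenodes.clusterA</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.clusterA</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With multiple clusters, each nameservice needs its own set of these properties visible to the client.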
Hi Roger,
I wrote the HDFS provider for Commons VFS. I went back and looked at the
source and tests, and I don't see anything wrong with what you are doing. I did
develop it against Hadoop 1.1.2 at the time, so there might be an issue that is
not accounted for with Hadoop 2. It was also not
Also, make sure that the jars on the classpath actually contain the HDFS file
system. I'm looking at:
No FileSystem for scheme: hdfs
which is an indicator for this condition.
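A quick way to check is to confirm that the HDFS implementation class and its ServiceLoader registration are present in the jars on your classpath (a sketch; the jar name and version are assumptions for your distribution):

```shell
# Verify the HDFS FileSystem implementation class is in the jar
jar tf hadoop-hdfs-2.3.0.jar | grep DistributedFileSystem

# Verify the ServiceLoader registration file that maps schemes to classes
unzip -p hadoop-hdfs-2.3.0.jar META-INF/services/org.apache.hadoop.fs.FileSystem
```

If the registration file is missing (e.g. clobbered when building an uber-jar), setting fs.hdfs.impl to org.apache.hadoop.hdfs.DistributedFileSystem in the Configuration is a common workaround.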
Dave
From: dlmar...@hotmail.com
To: user@hadoop.apache.org
Subject: RE: Which Hadoop 2.x .jars are necessary for
I think I found the issue. The ZKFC on the standby NN server tried, and failed,
to connect to the standby NN when I shut down the network on the Active NN
server. I'm getting an exception from the HealthMonitor in the ZKFC log:
WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception
Found this:
http://grokbase.com/t/cloudera/cdh-user/12anhyr8ht/cdh4-failover-controllers
I then configured dfs.ha.fencing.methods to contain both sshfence and
shell(/bin/true). Note that the docs for core-default.xml say that the value is
a list. I tried a comma-separated list with no luck. Had to look in the
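For reference, a working configuration looks like this (a sketch; the key point is that dfs.ha.fencing.methods takes a newline-separated list, not a comma-separated one):

```xml
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>
```

The methods are tried in order, so shell(/bin/true) acts as a last-resort fallback when sshfence cannot reach the dead node.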