Hi all,
I think there is an issue in how HftpFileSystem (hftp://) cooperates with
HDFS High Availability. A read might fail in the following scenario:
* a cluster is configured in HA mode, with the following configuration:
<property>
  <name>dfs.nameservices</name>
  <value>master</value>
</property>
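For reference, an HA nameservice like this usually also defines its NameNode IDs,
their RPC addresses, and a failover proxy provider; this is only a sketch, and the
host names and ports below are placeholders, not taken from the original report:

<property>
  <name>dfs.ha.namenodes.master</name>
  <value>nn1,nn2</value>
</property>
<property>
  <!-- host names/ports are placeholders -->
  <name>dfs.namenode.rpc-address.master.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.master.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.master</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>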
Hi,
Does the HDFS FI framework still work in the trunk code?
This is the doc about HDFS FI.
http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/FaultInjectFramework.html
But it looks like the doc is quite old.
I grepped the code in trunk and cannot find any AspectJ plugin in the Maven
POMs.
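In case it helps, a quick way to check from a trunk checkout is to grep the POMs
directly; the search terms here are only guesses at what the fault-injection build
would reference:

# run from the top of the trunk checkout; the search terms are only guesses
grep -ril --include=pom.xml "aspectj" .
grep -ril --include=pom.xml "injectfault" .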
Thanks Adaryl,
I’m currently looking at Tom White p298, published May 2012, which
references a 2010 spec. Both Tom's and Eric's books were published in 2012,
so the information in both will no doubt be a tad dated.
What I need to know is the current:
Processor average spec
Memory spec
Disk
Hi,
I have set up a Hadoop 2.4.1 HA cluster using the Quorum Journal Manager and
am verifying automatic failover. After killing the namenode process on the
active node, the namenode did not fail over to the standby node.
Please advise.
Regards
Arthur
2014-08-04 18:54:40,453 WARN
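A quick sanity check for this kind of failover test, sketched with placeholder
NameNode IDs (nn1 and nn2 stand for whatever dfs.ha.namenodes.<nameservice>
defines in your setup):

# which NameNode is active/standby after the kill
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# automatic failover also needs a ZKFC daemon running next to each NameNode
jps | grep DFSZKFailoverController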
This indicates a library version conflict - UnsupportedOperationException:
setXIncludeAware is not supported on this JAXP implementation or earlier: class
gnu.xml.dom.JAXPFactory
That class is in the gnujaxp jar. The chart API probably brought in a
different version of this library from the version
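A hedged way to track down where gnujaxp comes from, assuming a Maven build and
that the hadoop launcher script is on the PATH:

# find which dependency drags in gnujaxp (Maven build assumed)
mvn dependency:tree | grep -B 5 -i gnujaxp
# check whether it is also sitting on the Hadoop classpath
hadoop classpath | tr ':' '\n' | grep -i gnujaxp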
Hi,
Unfortunately, after I set my user's ulimit -n to 65536, I still get the
same bad performance, killed containers and errors as before.
I collected a bunch of logs from around the moment when the containers
are being killed (application master log, killed container log, hadoop-hdfs
logs,
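One thing worth double-checking (a sketch only; <pid> is the id of the
NodeManager or DataNode process) is whether the new limit actually applies to
the running daemons rather than just the login shell:

# limit seen by an already-running daemon
cat /proc/<pid>/limits | grep -i "open files"
# limit of the current shell
ulimit -n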
Hi,
Thanks for your reply.
It was about the standby NameNode not being promoted to active.
Can you please advise what the path of the ZKFC logs is?
Similar to the NameNode status web page, a Cluster Web Console is added in
federation to monitor the federated cluster at
We are at 11 GB for yarn.nodemanager.resource.memory-mb.
It seems that the problem is due to the number of CPUs.
Each Spark executor needed too many CPUs in comparison to the available CPUs.
As a consequence, the Fair Scheduler didn't allocate all the available memory
because all the CPUs were already in use.
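For what it's worth, these are the standard YARN properties involved; the memory
value matches the 11 GB mentioned above, while the vcore count is only an
illustrative example, not the poster's actual setting:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>11264</value>
</property>
<property>
  <!-- example value only: raise this, or lower the cores each executor requests,
       so the scheduler is not CPU-bound before the memory is used up -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>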
ZKFC LOG:
By default, it will be under HADOOP_HOME/logs/hadoop_**zkfc.log
The same can be confirmed with the following commands (to get the log location):
jinfo 7370 | grep -i hadoop.log.dir
ps -eaf | grep -i DFSZKFailoverController | grep -i hadoop.log.dir
WEB Console:
And Default port
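For reference, in Hadoop 2.x the NameNode web UI listens on port 50070 unless
dfs.namenode.http-address is overridden:

<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>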
The mapper and reducer numbers really depend on what your program is trying to
do. Without your actual query it’s really difficult to tell why you are having
this problem.
For example, if you try to perform a global sum or count, Cascalog will only
use one reducer, since this is the only way
The contents are
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
On Sun, Aug 3, 2014 at 11:21 PM, Ritesh Kumar Singh
riteshoneinamill...@gmail.com wrote:
check the contents of
Thanks a lot for your explanation, Felix.
My query is not using a global sort/count. But I am still unable to understand:
even when I set mapred.reduce.tasks=4,
when the hadoop job runs I still see
14/08/03 15:01:48 INFO mapred.MapTask: numReduceTasks: 1
14/08/03 15:01:48 INFO mapred.MapTask:
You have not given the namenode URI in the /etc/hosts file, so it can't be
resolved to an IP address and your namenode would not have started either.
The preferable practice is to start your cluster through the start-dfs.sh
command; it implicitly starts the namenode first and then all its datanodes.
Also make sure you
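For illustration, the missing entry would look something like the line below;
the IP address and host names are placeholders for your actual namenode:

# /etc/hosts (placeholder IP and host names)
192.168.1.10   namenode.example.com namenode

# then bring HDFS up so the namenode starts before the datanodes
start-dfs.sh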