If you plan to monitor with Nagios, I'm working on some scripts (early
development stage) at https://github.com/jucaf/nagios-hadoop. For Ganglia,
most Hadoop components can send metrics to Ganglia via configuration, but
some pieces don't; I have some scripts to send metrics from Oozie
and Storm.
Hi Charley,
in hdfs-site.xml you can set the property dfs.ha.namenodes; with this
property every client knows which NameNodes are eligible to be active.
Nothing else is required on the client.
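For reference, the client-side HA settings in hdfs-site.xml look roughly like this (the nameservice name `mycluster`, the NameNode IDs `nn1`/`nn2`, and the hostnames are placeholders):

```xml
<!-- hdfs-site.xml: HA client-side settings (all names are placeholders) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place, clients address HDFS as `hdfs://mycluster` and the failover proxy provider tries each listed NameNode to find the active one.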
Regards.
2014-06-26 21:30 GMT+02:00 Charley Newtonne cnewto...@gmail.com:
I have Hadoop 2.4.
I'm writing some scripts to check HDFS status in Nagios, but I'm not able
to check the journal nodes.
I'd like to check not only that the journalnode service is running, which
is simple; I'd also like to check latency or sync status. Is there any API
or command to check it?
Regards
Juan Carlos Fernandez
Thanks Lohit, I'll extract useful information from there.
2014-05-27 17:01 GMT+02:00 lohit lohit.vijayar...@gmail.com:
JournalNodes expose latency and txn metrics via JMX. The easiest option
might be to look at the journal node's host:8480/jmx endpoint.
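To turn that /jmx endpoint into a Nagios-style check, you can fetch the JSON it serves and pull out the Journal bean. A minimal parsing sketch follows; the sample payload and the exact metric names (`LastWrittenTxId`, `Syncs60s75thPercentileLatencyMicros`) are illustrative, so check the actual output of `http://<journalnode>:8480/jmx` on your cluster for the names your version exposes:

```python
import json

# Illustrative sample of what a JournalNode's /jmx servlet might return;
# bean and metric names below are assumptions, not guaranteed by any
# particular Hadoop version.
SAMPLE_JMX = """
{
  "beans": [
    {
      "name": "Hadoop:service=JournalNode,name=Journal-mycluster",
      "LastWrittenTxId": 12345,
      "Syncs60sNumOps": 42,
      "Syncs60s75thPercentileLatencyMicros": 870
    }
  ]
}
"""

def journal_metrics(jmx_json,
                    journal_prefix="Hadoop:service=JournalNode,name=Journal-"):
    """Return the first Journal-* bean from a /jmx payload, or None."""
    for bean in json.loads(jmx_json).get("beans", []):
        if bean.get("name", "").startswith(journal_prefix):
            return bean
    return None

metrics = journal_metrics(SAMPLE_JMX)
if metrics is not None:
    print("last txid:", metrics["LastWrittenTxId"])
    print("sync p75 (us):", metrics["Syncs60s75thPercentileLatencyMicros"])
```

In a real check you would replace `SAMPLE_JMX` with the body fetched from the live endpoint (for example `urllib.request.urlopen("http://jn-host:8480/jmx").read()`) and exit with Nagios status codes based on thresholds over the latency values.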
2014-05-27 7:54 GMT-07:00 Juan Carlos juc...@gmail.com:
I'm
Hi Dave,
How many ZooKeeper servers do you have, and where are they?
Juan Carlos Fernández Rodríguez
El 15/03/2014, a las 01:21, dlmarion dlmar...@hotmail.com escribió:
I was doing some testing with HA NN today. I set up two NNs with automatic
failover (ZKFC) using sshfence. I tested that its
Hi Edward,
maybe you are sending your request from the slave to the master. I'm not
sure, but I think the secondary never answers any request, not even read
requests, and you have to modify your config files by hand to promote your
slave to master.
I haven't tested much with master/slave
Yes, I know they have different implementations; what I wanted to point out
was features. Is there any feature in CapacityScheduler that is missing
from FairScheduler? AFAIK it's possible to configure FairScheduler to do
exactly the same as CapacityScheduler and more; in that case I would see
CapacityScheduler as
I'm reading about them, and it looks like CapacityScheduler is a particular
configuration of FairScheduler (setting FIFO as the scheduling policy in
each defined queue). Can I understand CapacityScheduler that way, or am I
missing something?
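To make the comparison concrete, a FairScheduler allocation file that mimics capacity-style queues would look roughly like this (queue names and weights are made up for illustration):

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml sketch: capacity-like weighted queues,
     with FIFO ordering inside each queue -->
<allocations>
  <queue name="prod">
    <weight>70</weight>
    <schedulingPolicy>fifo</schedulingPolicy>
  </queue>
  <queue name="dev">
    <weight>30</weight>
    <schedulingPolicy>fifo</schedulingPolicy>
  </queue>
</allocations>
```

Here the weights play the role of CapacityScheduler's per-queue capacity percentages, and `fifo` inside each queue reproduces its within-queue ordering.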
Regards.
of that group can submit jobs to and administer all
queues.
On Thu, Feb 20, 2014 at 11:28 AM, Juan Carlos juc...@gmail.com wrote:
Yes, that is what I'm looking for, but I couldn't find this information
for Hadoop 2.2.0. I saw that mapreduce.cluster.acls.enabled is now the
parameter to use.
Where could I find some information about ACLs? I could only find what's
available at
http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html,
which isn't very detailed.
Regards
Juan Carlos Fernández Rodríguez
Consultor Tecnológico
Telf: +34918105294
Móvil
are looking for client-level ACLs, something like the MapReduce
ACLs?
https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html#Job+Authorization
Alex.
2014-02-20 4:58 GMT-05:00 Juan Carlos jcfernan...@cediant.es:
Where could I find some information about ACLs? I could only find what's
available
Hi Bruno,
ha.zookeeper.quorum is a core-site property, and you have it in
hdfs-site; maybe that's your problem.
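For reference, the property belongs in core-site.xml, along these lines (hostnames are placeholders):

```xml
<!-- core-site.xml, not hdfs-site.xml (hostnames are placeholders) -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```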
2014/1/24 Bruno Andrade b...@eurotux.com
Begin forwarded message:
Date: Tue, 21 Jan 2014 09:35:23 +
From: Bruno Andrade b...@eurotux.com
To: user@hadoop.apache.org
As far as I know, the only authentication method available in HDFS 2.2.0 is
Kerberos, so it's not possible to authenticate with a URL.
Regards
2014/1/10 Pinak Pani nishant.has.a.quest...@gmail.com
Does HDFS provide any built-in authentication out of the box? I wanted to
make explicit access
I'm trying to configure an HDFS cluster with HA, Kerberos, and encryption.
For HA I have used QJM with automatic failover.
Until now I have had HA and Kerberos running properly, but I'm having
problems when I try to add encryption, specifically when I set the
core-site.xml property hadoop.rpc.protection to
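For context, that setting lives in core-site.xml; to encrypt RPC traffic it would be set roughly like this:

```xml
<!-- core-site.xml: "privacy" enables RPC encryption;
     "authentication" and "integrity" are the other valid values -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
```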