Any thoughts?
Thanks,
On Thu, Feb 26, 2015 at 7:30 PM, Manoj Samel manojsamelt...@gmail.com
wrote:
On a Kerberos-based Hadoop cluster, a kinit is done and then an oozie command
is executed. This works every time (thus no setup issues), except once it
failed with the following error.
Error: AUTHENTICATION : Could not authenticate, GSSException: No valid
credentials provided (Mechanism level: Generic
Cloudera also has good documentation on setting up a Kerberos-based cluster -
this can be used even if you are not using Cloudera Manager to set up your
cluster.
On Wed, Feb 18, 2015 at 4:51 PM, Alexander Pivovarov apivova...@gmail.com
wrote:
Have you added all host-specific principals in the Kerberos database?
Thanks,
On Tue, Feb 3, 2015 at 7:59 AM, 郝东 donhof...@163.com wrote:
I am converting a secure non-HA cluster into a secure HA cluster. After
finishing the configuration and starting all the JournalNodes, I executed the
following commands
Environment is Hadoop 2.3.0, CDH 5.0, RM and NN in HA, Kerberos Security
A rolling reboot of the cluster was done. Services on each node were not
stopped beforehand; the machines were just shut down, rebooted, and services
started on each after reboot. Nodes were shut down in a rolling manner such that one
x-posting to Hadoop
See the following error. Hadoop version is 2.3 (CDH 5.0). Name Node and
Resource Manager are in an HA configuration.
Any thoughts?
Thanks,
-- Forwarded message --
From: Manoj Samel manojsamelt...@gmail.com
Date: Thu, Jan 8, 2015 at 7:33 PM
Subject: Re: Running spark
Hadoop 2.4.0 mentions that the FSImage is stored using protobuf. So an upgrade
from 2.3.0 to 2.4 would work, since 2.4 can read the old (2.3) binary format and
write the new 2.4 protobuf format.
After moving to 2.4, if there is a need to downgrade back to 2.3, how would
that work?
Thanks,
The HA NameNode should also run hadoop-hdfs-zkfc, which is the ZooKeeper
failover controller for HA.
On Thu, Oct 9, 2014 at 11:42 PM, oc tsdb oc.t...@gmail.com wrote:
One more query we have -
Should the standby NameNode be running with all the services that are running
on the active NameNode? Or
Reposting ...
One option is to do hdfs dfsadmin -report, see DFS Used% on each
data node, and then compute the extent of imbalance across nodes. Is there
any other way?
Thanks,
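Lacking a built-in imbalance report, the estimate can be scripted from the `hdfs dfsadmin -report` output mentioned above. A minimal sketch in Python; the sample report text below is made up so the sketch is self-contained, and in a real run you would capture the command's stdout instead:

```python
# Sketch: estimate DataNode imbalance from `hdfs dfsadmin -report` output.
# In practice the report would be captured with something like:
#   report = subprocess.check_output(["hdfs", "dfsadmin", "-report"], text=True)
import re

sample_report = """\
Name: 10.0.0.1:50010 (dn1)
DFS Used%: 71.20%
Name: 10.0.0.2:50010 (dn2)
DFS Used%: 35.80%
Name: 10.0.0.3:50010 (dn3)
DFS Used%: 54.10%
"""

def imbalance(report: str) -> float:
    # Collect the per-node DFS Used% values ...
    used = [float(m) for m in re.findall(r"DFS Used%:\s*([\d.]+)%", report)]
    # ... and return the spread between the most and least utilized nodes.
    return max(used) - min(used)

print(round(imbalance(sample_report), 2))  # spread in percentage points
```

Note that the balancer itself compares each node against the cluster average, so deviation from the mean may be the closer measure; the max-min spread above is just the simplest summary.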
On Wed, Oct 8, 2014 at 3:33 PM, Manoj Samel manojsamelt...@gmail.com
wrote:
Hi,
Not clear how this computation is done.
For the sake of discussion, say the machine with the data node has two disks,
/disk1 and /disk2. Each of these disks has a directory for DataNode use and a
directory for non-DataNode usage.
/disk1/datanode
/disk1/non-datanode
/disk2/datanode
/disk2/non-datanode
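For a layout like this, HDFS typically reports the non-DataNode usage as "Non DFS Used", derived from the other numbers rather than measured directly. A hedged sketch of the usual relationship (the byte values are made up, and the exact accounting can vary by Hadoop version):

```python
# Sketch of how per-DataNode capacity numbers commonly relate in HDFS reports.
# All values below are illustrative, not from a real cluster.
disk_capacity = 2 * 1000 * 10**9   # raw capacity of /disk1 + /disk2
reserved = 2 * 50 * 10**9          # dfs.datanode.du.reserved, per volume
configured_capacity = disk_capacity - reserved
dfs_used = 600 * 10**9             # blocks under /disk1/datanode, /disk2/datanode
remaining = 900 * 10**9            # free space HDFS could still use
# Everything else on the volumes (the /disk*/non-datanode directories,
# OS files, etc.) shows up as "Non DFS Used":
non_dfs_used = configured_capacity - dfs_used - remaining
print(non_dfs_used)
```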
So, in that case, the ResourceManager will allocate containers of
different capacity based on node capacity?
Thanks,
On Wed, Oct 8, 2014 at 9:42 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
you can have different values on different nodes
On Thu, Oct 9, 2014 at 4:15 AM, Manoj Samel
Quorum services like the JournalNode (and ZooKeeper) need to have at least 3
instances running.
On Thu, Oct 9, 2014 at 4:19 AM, oc tsdb oc.t...@gmail.com wrote:
Hi,
We have a cluster with 3 nodes (1 namenode + 2 datanodes).
The cluster is running Hadoop version 2.4.0.
We would like to add High
Hi,
Before running the Hadoop rebalancer, is it possible to find the extent to
which the DataNodes are unbalanced?
Thanks,
In a Hadoop cluster where different machines have different memory capacity
and/or different numbers of cores etc., is it required that memory/core-related
parameters be set to the SAME values for all nodes? Or is it possible to set
different values for different nodes?
E.g. can
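As noted in the reply above, the `yarn.nodemanager.resource.*` settings are read by each NodeManager from its own yarn-site.xml, so per-node values work. A hedged illustration (the property names are the standard YARN ones; the numbers are made up):

```xml
<!-- yarn-site.xml on a larger node, e.g. 64 GB / 16 cores -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>49152</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value>
</property>
<!-- a smaller node would advertise smaller values in its own yarn-site.xml -->
```

The ResourceManager then allocates containers within whatever capacity each node advertises.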
(reposting since no reply first time) ...
Hi,
For yarn.resourcemanager.zk-state-store.root-node.acl, the yarn-default.xml
says For fencing to work, the ACLs should be carefully set differently on
each ResourceManager such that all the ResourceManagers have shared admin
access and the Active
Any info on this will be appreciated.
Thanks,
On Wed, May 7, 2014 at 3:01 PM, Manoj Samel manojsamelt...@gmail.com wrote:
Hi,
There are some JIRAs for supporting symbolic links in HDFS (e.g. HDFS-245,
HADOOP-6421) but that feature does not seem to be available, at least from
HDFS commands
Hi,
For yarn.resourcemanager.zk-state-store.root-node.acl, the yarn-default.xml
says For fencing to work, the ACLs should be carefully set differently on
each ResourceManager such that all the ResourceManagers have shared admin
access and the Active ResourceManager takes over (exclusively) the
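Reading that description: the intent is that every RM shares create/delete/read/write on the state store's root znode, while each RM holds the admin (a) permission exclusively, so the active RM can fence a failed one by changing ACLs. Purely as an illustration of the shape (the digest values are placeholders, and the exact value format should be checked against your Hadoop version's documentation):

```xml
<!-- yarn-site.xml on rm1; on rm2 the roles would be swapped so that
     rm2's digest is the one holding the exclusive admin (a) permission.
     ZooKeeper ACL entries are scheme:id:perms, comma separated. -->
<property>
  <name>yarn.resourcemanager.zk-state-store.root-node.acl</name>
  <value>digest:rm1:&lt;base64-digest&gt;:rwcda,digest:rm2:&lt;base64-digest&gt;:rwcd</value>
</property>
```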
Second attempt.
On Thu, Apr 17, 2014 at 6:44 PM, Manoj Samel manojsamelt...@gmail.com wrote:
Hi,
The following sequence is done
hdfs dfs -mkdir /a
take snapshot s_0
hdfs dfs -mkdir -p /a/b/c
hdfs dfs -put foo /a/b/c
take snapshot s_1
Now the command-line snapshotDiff between s_0 and s_1 shows
, Apr 15, 2014 at 9:29 AM, Manoj Samel manojsamelt...@gmail.com wrote:
Hi,
Is it correct to say that the offline image viewer does not account
for any edits that are not yet merged into the fsimage?
Thanks,
--
Cheers
-MJ
Any thoughts?
On Wed, Apr 9, 2014 at 10:08 AM, Manoj Samel manojsamelt...@gmail.com wrote:
Hi,
If I take an HDFS snapshot and then restore it to some other directory using
hdfs dfs -cp /xxx/.snapshot/nnn /aaa/bbb
I want to confirm that there is a copy of data from files under the snapshot
Hi,
It seems the only way to restore from an HDFS snapshot using the hdfs command
line is to copy snapshot files to a target path.
If the use cases are
0. stuff ...
1. Take snapshot s_N
2. Add some files, delete other files
3. Take snapshot s_N+1
then copying s_N+1 to target just copies the newly
snapshot rollback/restore functionality in HDFS. Thus users have to manually
copy/delete files according to the snapshot diff report. There's an open
JIRA, HDFS-4167, for it. We plan to provide this support soon.
Thanks,
-Jing
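Until HDFS-4167 lands, the manual copy/delete restore described above can be scripted from the diff report. A sketch, assuming the diff lines come from `hdfs snapshotDiff <dir> s_N s_N+1` (the paths and directory names below are made up for illustration; RENAME entries are omitted for brevity):

```python
# Sketch: turn snapshot-diff entries into the hdfs commands that would
# roll a directory back from its live state to snapshot s_N.
# Entries are (type, path): + created, - deleted, M modified,
# with paths relative to the snapshottable directory.
SNAP_DIR = "/a"   # hypothetical snapshottable directory
SNAP = "s_N"      # snapshot to roll back to

diff_entries = [
    ("+", "/b/new_file"),   # created after s_N -> remove it
    ("-", "/b/old_file"),   # deleted after s_N -> copy it back
    ("M", "/b/changed"),    # modified after s_N -> copy snapshot version back
]

def rollback_commands(entries):
    cmds = []
    for dtype, path in entries:
        live = SNAP_DIR + path
        snap = f"{SNAP_DIR}/.snapshot/{SNAP}{path}"
        if dtype == "+":
            cmds.append(f"hdfs dfs -rm -r {live}")
        elif dtype in ("-", "M"):
            cmds.append(f"hdfs dfs -cp -f {snap} {live}")
    return cmds

for cmd in rollback_commands(diff_entries):
    print(cmd)
```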
On Mon, Apr 14, 2014 at 2:14 PM, Manoj Samel manojsamelt
Hi,
In the SnapshotDiffReport class
public enum DiffType {
  CREATE("+"),
  MODIFY("M"),
  DELETE("-"),
  RENAME("R");
  ...
If I do a mv on a file, in the snapshot diff it shows as a delete of the old
name and a creation of the new name. What constitutes a RENAME?
Thanks,
Manoj
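The one-character codes in that enum are what the diff report (and the `hdfs snapshotDiff` CLI) print per entry. A small sketch decoding them; the `"M\t/a/b/c"` line format is an assumption based on the tab-separated report output, so verify against your version:

```python
# Sketch: decode the one-character change types used by SnapshotDiffReport,
# mirroring the enum quoted above.
DIFF_TYPES = {
    "+": "CREATE",   # path exists only in the later snapshot
    "-": "DELETE",   # path exists only in the earlier snapshot
    "M": "MODIFY",   # path exists in both; contents/metadata changed
    "R": "RENAME",   # same inode under a new name, when the rename is tracked
}

def decode(line: str):
    # Diff report lines look like "M\t/a/b/c": split the type from the path.
    symbol, path = line.split("\t", 1)
    return DIFF_TYPES[symbol], path

print(decode("M\t/a/b/c"))
```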
Hi,
Is it correct to say that the offline image viewer does not account for
any edits that are not yet merged into the fsimage?
Thanks,
Hi,
If I take an HDFS snapshot and then restore it to some other directory using
hdfs dfs -cp /xxx/.snapshot/nnn /aaa/bbb
I want to confirm that there is a copy of data from files under the snapshot to
the target directory. I.e. there is no linking of the new directory's files to
the existing directory or other
Hi,
Hadoop version is CDH5 Beta1
Name node and Resource managers have been configured in HA mode.
After Kerberos is enabled, the ResourceManager log shows the following error
2014-03-25 22:21:06,854 WARN org.apache.hadoop.ipc.Client: Exception
encountered while connecting to the server :
Hi,
From the documentation + code, when Kerberos is enabled, all tasks are
run as the end user (e.g. as user joe and not as the hadoop user mapred)
using the task-controller (which is setuid root and, when it runs, does a
setuid/setgid etc. to joe and his groups). For this to work, user joe
linux
* Name node and secondary name nodes on different machines
* Kerberos was just enabled
* Cloudera CDH 4.5 on Centos
Secondary name node log (HOST2) shows the following:
2013-12-31 22:00:11,728 ERROR
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:hdfs/2NN-host@REALM