Hi, Chris.
I have traced the source code, and I found that this issue comes from
sbin/start-dfs.sh:
SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey
dfs.namenode.shared.edits.dir 2>&-)
If I set the key with the suffix, i.e. dfs.namenode.shared.edits.dir.[namespace
id].[nn id], this lookup returns null. So please
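(For reference, a minimal sketch of the kind of suffix-aware lookup the script
would need; the fallback logic here is an illustration, not Hadoop's actual
resolution code, and the "hbasecluster"/"hnn1" ids are taken from the
configuration discussed later in this thread:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class SharedEditsLookup {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration(); // loads hdfs-site.xml from the classpath
    String key = "dfs.namenode.shared.edits.dir";
    String value = conf.get(key);
    if (value == null) {
      // Fall back to the federated/HA suffixed form of the key.
      value = conf.get(key + ".hbasecluster.hnn1");
    }
    System.out.println(value); // prints the resolved shared edits dir, or null
  }
}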
Hi Bing,
HA does not conflict with HDFS federation.
For example, if you have two name services, cluster1 and cluster2,
then:
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://n1.com:8485;n2.com:8485/cluster1</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://n1.com:8485;n2.com:8485/cluster2</value>
</property>
Hi, all
We are trying to use hadoop-2.0.5-alpha with two namespaces: one is for the
HBase cluster, and the other one is for common use. At the same time, we use
the Quorum Journal policy for HA.
GS-CIX-SEV0001, GS-CIX-SEV0002, namenodes in hbasecluster namespace
GS-CIX-SEV0003, GS-CIX-SEV0004, namenodes in
This is because you don't use the same clusterID. All datanodes and
namenodes should use the same clusterID.
On Thu, Jul 4, 2013 at 3:12 PM, Bing Jiang jiangbinglo...@gmail.com wrote:
Hi, all
We are trying to use hadoop-2.0.5-alpha with two namespaces: one is for the
HBase cluster, and the other
Additionally,
If these are two new clusters, then on each namenode run: hdfs namenode
-format -clusterID yourID
But if you want to upgrade these two clusters from non-HA to HA, then run:
bin/start-dfs.sh -upgrade -clusterID yourID
On Thu, Jul 4, 2013 at 3:14 PM, Azuryy Yu azury...@gmail.com
Hi ,
Please check whether you have permission on the output directory. The error
says that you don't have permission.
Regards
Syed
Thanks and Regards,
S SYED ABDUL KATHER
On Thu, Jul 4, 2013 at 10:25 AM, devara...@huawei.com [via Lucene]
Thanks for Azuryy's reply.
I used this configuration:
<property>
  <name>dfs.namenode.shared.edits.dir.hbasecluster.hnn1</name>
  <value>qjournal://GS-CIX-SEV0001:8485;GS-CIX-SEV0002:8485;GS-CIX-SEV0003:8485/hbasecluster</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir.hbasecluster.hnn2</name>
  <value>qjournal://GS-CIX-SEV0001:8485;GS-CIX-SEV0002:8485;GS-CIX-SEV0003:8485/hbasecluster</value>
</property>
If the cluster id is not set when formatting the NameNode, is there a policy
in HDFS to guarantee an even distribution of DataNodes across the different
namespaces, or is it just random?
2013/7/4 Azuryy Yu azury...@gmail.com
Additionally,
If these are two new clusters, then on each namenode run: hdfs namenode
Hi, all
Does anyone know when the status of a datanode switches from live to dead
inside the namenode?
The scenario:
When I stopped a datanode with a command, the status of that datanode in the
web UI of the namenode displayed 'live' and 'In Service' for almost 5 minutes.
I know the
It's random.
On Jul 4, 2013 3:33 PM, Bing Jiang jiangbinglo...@gmail.com wrote:
If the cluster id is not set when formatting the NameNode, is there a policy
in HDFS to guarantee an even distribution of DataNodes across the different
namespaces, or is it just random?
2013/7/4 Azuryy Yu azury...@gmail.com
Hi,
It's 10 minutes and 30s.
See the stale mode described in HDFS-3703 if you need something shorter.
Cheers,
Nicolas
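(For reference, the 10 minutes 30 seconds figure comes from the NameNode's
heartbeat arithmetic with the default settings:
2 * dfs.namenode.heartbeat.recheck-interval (2 * 5 min)
+ 10 * dfs.heartbeat.interval (10 * 3 s) = 10 min 30 s.)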
On Thu, Jul 4, 2013 at 10:05 AM, Francis.Hu francis...@reachjunction.com wrote:
Hi, all
Does anyone know when the status of a datanode switches from live to dead
Hey Azuryy,
During the import, dfsadmin -report shows:
DFS Used%: 17.72%
Moreover, it succeeds from time to time with the same data load. It seems
that the Datanode appears to be down from the Namenode's point of view, but why?
On Thu, Jul 4, 2013 at 3:31 AM, Azuryy Yu azury...@gmail.com wrote:
Hi Manuel,
Nicolas,
Thanks for your help. I had a look at HDFS-3703.
So I need to turn on dfs.namenode.check.stale.datanode
and set a shorter time for dfs.namenode.stale.datanode.interval
Thanks,
Francis.Hu
From: Nicolas Liochon [mailto:nkey...@gmail.com]
Sent: Thursday, July 04, 2013 16:28
Yes. The default is 30s.
On Thu, Jul 4, 2013 at 11:10 AM, Francis.Hu francis...@reachjunction.com wrote:
Nicolas,
Thanks for your help. I had a look at HDFS-3703.
So I need to turn on dfs.namenode.check.stale.datanode
and set a shorter time for
I could get containers on specific nodes using addContainerRequest() on
AMRMClient, but there are issues with it. I have two nodes, node1 and node2,
in my cluster, and my Application Master is trying to get 3 containers on
node1 and 3 containers on node2, in that order.
While trying to request on
Hi Guys,
How do I tell Hadoop to wait longer before fixing under-replication when
a node dies? Currently I have a node down which crashed, and Hadoop
detected it as down and is busy fixing under-replication. The problem is
that the extra load required to fix replication is causing our
Hello Manickam,
Append is currently not possible.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jul 4, 2013 at 4:40 PM, Manickam P manicka...@outlook.com wrote:
Hi,
I have moved my input file into the HDFS location in the cluster setup.
Now I got a new set of files which has
To guarantee nodes on a specific container you need to use the whitelist
feature we added recently:
https://issues.apache.org/jira/browse/YARN-398
Arun
On Jul 4, 2013, at 3:14 AM, Krishna Kishore Bonagiri write2kish...@gmail.com
wrote:
I could get containers on specific nodes using
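(A sketch of what the whitelist request looks like against the 2.1.0-beta
AMRMClient API; the node name is hypothetical, and the exact ContainerRequest
constructor shape may differ between builds:)

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class WhitelistRequest {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> amrm = AMRMClient.createAMRMClient();
    amrm.init(new YarnConfiguration());
    amrm.start();
    // registerApplicationMaster(...) would happen here in a real AM.
    Resource capability = Records.newRecord(Resource.class);
    capability.setMemory(1024);
    capability.setVirtualCores(1);
    Priority priority = Records.newRecord(Priority.class);
    priority.setPriority(0);
    // Whitelist node1 only: relaxLocality=false stops the scheduler from
    // falling back to the rack or to an arbitrary node.
    amrm.addContainerRequest(new ContainerRequest(
        capability, new String[] {"node1"}, null, priority, false));
  }
}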
Hi all,
I'm trying to copy files from a source HDFS cluster to another, but I have
numerous files open for writing, and DistCp fails on those. I've
found a reference to that in JIRA:
https://issues.apache.org/jira/browse/MAPREDUCE-2160
Any workaround? Has anyone faced that too?
Thanks
You can append using WebHDFS. The following link may help you:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Append_to_a_File
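(A minimal sketch of doing this from Java through the webhdfs:// filesystem,
which maps to the REST APPEND op linked above; the host, port, and path are
hypothetical, and the cluster must have append enabled:)

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsAppend {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Connect via the NameNode's HTTP port (50070 by default).
    FileSystem fs = FileSystem.get(URI.create("webhdfs://namenode:50070"), conf);
    FSDataOutputStream out = fs.append(new Path("/user/manickam/data.txt"));
    out.write("appended records\n".getBytes("UTF-8"));
    out.close();
    fs.close();
  }
}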
On Thu, Jul 4, 2013 at 5:17 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Manickam,
Append is currently not possible.
Thanks Arun, it seems to be available with 2.1.0-beta; when will that be
released? Or if I want it now, could I get it from the trunk?
-Kishore
On Thu, Jul 4, 2013 at 5:58 PM, Arun C Murthy a...@hortonworks.com wrote:
To guarantee nodes on a specific container you need to use the whitelist
Hi John,
I have just started pulling Twitter conversations using Apache Flume, but I
have not started processing the pulled data yet. My answers are below:
1) How large is each JSON document?
They average from 100 KB to 2 MB. Flume rolls a new file every 1 minute (which
is configurable). So the
Hi, I'm developing a new set of InputFormats that are used for a project I'm
doing. I found that there are two ways to create a new InputFormat:
1- Extend the abstract class org.apache.hadoop.mapreduce.InputFormat
2- Implement the interface org.apache.hadoop.mapred.InputFormat
I don't know why
Manickam,
HDFS supports append; it is the command-line client that does not.
You can write a Java application that opens an HDFS-based file for append, and
use that instead of the hadoop command line.
However, this doesn't completely answer your original question: How do I move
only the delta
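(A minimal sketch of such a client, assuming a Hadoop 2.x cluster with append
enabled; the path is hypothetical:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppend {
  public static void main(String[] args) throws Exception {
    // Reads core-site.xml/hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Open an existing HDFS file for append and write the new records.
    FSDataOutputStream out = fs.append(new Path("/data/input.txt"));
    out.write("delta records\n".getBytes("UTF-8"));
    out.close();
    fs.close();
  }
}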
Arun,
I don't know how to interpret the release schedule from the JIRA. It says
that the patch targets 2.1.0 and is checked into the trunk; does that mean
it is likely to be rolled into the first Hadoop 2 GA, or will it have to wait
for another cycle?
Thanks,
John
From: Arun C Murthy
John,
This feature will be available in the upcoming 2.1.0-beta release. The first
release candidate (RC) has been cut, but it seems a new RC will be needed.
The exact release date is still not known, but it should be soon.
Thanks.
On Thu, Jul 4, 2013 at 2:43 PM, John Lilley
The current stable release doesn't support append, not even through the
API. If you really want this you have to switch to hadoop 2.x.
See this JIRA https://issues.apache.org/jira/browse/HADOOP-8230.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Fri, Jul 5, 2013 at 3:05 AM, John Lilley
Hi,
I am using hadoop-2.0.5-alpha, and I added 5 datanodes into dfs_exclude.
hdfs-site.xml:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/usr/local/hadoop/conf/dfs_exclude</value>
</property>
then:
hdfs dfsadmin -refreshNodes
but there are no decommissioning nodes shown on the web UI, and not any
Do you see any log related to this in Name Node logs when you issue
refreshNodes dfsadmin command?
Thanks
Devaraj k
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 05 July 2013 08:12
To: user@hadoop.apache.org
Subject: Decommission datanode - no response
Hi,
I am using hadoop-2.0.5-alpha, and
Thanks Devaraj,
There aren't any related logs in the NN log or the DN log.
On Fri, Jul 5, 2013 at 11:14 AM, Devaraj k devara...@huawei.com wrote:
Do you see any log related to this in Name Node logs when you issue
refreshNodes dfsadmin command?
Thanks
Devaraj k
I know the default value is 10 minutes and 30 seconds for switching
datanodes from live to dead.
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: Friday, July 05, 2013 10:42
To: user@hadoop.apache.org
Subject: Decommission datanode - no response
Hi,
I am using hadoop-2.0.5-alpha, and I added
Hi Kishore,
The hadoop-2.1.0-beta release is in the voting process now.
You can try it out from the hadoop-2.1.0-beta RC at
http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc0/ or you could check
the same with a trunk build.
Thanks
Devaraj k
From: Krishna Kishore Bonagiri [mailto:write2kish...@gmail.com]
It's been 20 minutes since I ran -refreshNodes, but there are no
decommissioning nodes shown on the UI, and I cannot find any hints in the NN
and DN logs.
On Fri, Jul 5, 2013 at 11:16 AM, Francis.Hu francis...@reachjunction.com wrote:
I know the default value is 10 minutes and 30 seconds for
A trainer at Hortonworks told me that org.apache.hadoop.mapred is the old
package.
So for all intents and purposes, use the new one: org.apache.hadoop.mapreduce.
Otto out!
From: Ahmed Eldawy [mailto:aseld...@gmail.com]
Sent: July-04-13 2:30 PM
To: user@hadoop.apache.org
Subject: Which
When did you add this configuration in the NN conf?
<property>
  <name>dfs.hosts.exclude</name>
  <value>/usr/local/hadoop/conf/dfs_exclude</value>
</property>
If you added this configuration after starting the NN, it won't take effect,
and you need to restart the NN.
If you added this config with the
I added dfs.hosts.exclude before the NN started,
and I updated /usr/local/hadoop/conf/dfs_exclude with new hosts, but it
doesn't decommission.
On Fri, Jul 5, 2013 at 11:39 AM, Devaraj k devara...@huawei.com wrote:
When did you add this configuration in the NN conf?
<property>
Also, could you check whether the client is connecting to the NameNode, or
whether there is any failure in connecting to the NN?
Thanks
Devaraj k
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 05 July 2013 09:15
To: user@hadoop.apache.org
Subject: Re: Decommission datanode - no response
I added
The client doesn't have any connection problem.
On Fri, Jul 5, 2013 at 12:46 PM, Devaraj k devara...@huawei.com wrote:
Also, could you check whether the client is connecting to the NameNode, or
whether there is any failure in connecting to the NN?
Thanks
Devaraj k
From: Azuryy Yu
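(For checking decommission progress programmatically rather than through the
web UI, a minimal sketch using the HDFS client API; it assumes hdfs-site.xml
is on the classpath and fs.defaultFS points at this cluster:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DecommissionCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    // Print each datanode's admin state as the NameNode currently sees it.
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
      System.out.println(dn.getHostName()
          + " decommissionInProgress=" + dn.isDecommissionInProgress()
          + " decommissioned=" + dn.isDecommissioned());
    }
    dfs.close();
  }
}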
Hi Ahmed,
Hadoop 0.20.0 included the new MapReduce API, sometimes referred to as the
"context objects" API. It is designed to make the API easier to evolve in
the future. There are some differences between the new and old APIs:
the new API favours abstract classes rather than interfaces, since
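(To illustrate the abstract-class route from option 1 above, a minimal sketch
of a custom InputFormat against the new mapreduce API; it simply delegates to
the stock LineRecordReader:)

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Extend the abstract FileInputFormat and supply a RecordReader;
// the splitting logic is inherited from FileInputFormat.
public class MyInputFormat extends FileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    return new LineRecordReader();
  }
}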