Hello everyone,
I've just installed Hadoop 2.5.1 from source code, and I have problems
changing the default block size. In my hdfs-site.xml file I've set the property

<property>
  <name>dfs.blocksize</name>
  <value>67108864</value>
</property>

to have blocks of 64 MB, but it seems that the system
Make it final and bounce the NameNode.
On Nov 20, 2014 3:42 PM, Tomás Fernández Pena tf.p...@usc.es wrote:
> Hello everyone,
> I've just installed Hadoop 2.5.1 from source code, and I have problems
> changing the default block size. In my hdfs-site.xml file I've set the
> property
> <property>
It seems HADOOP_CONF_DIR is pointing to a different location!
Maybe you can check that hdfs-site.xml is on the classpath when you execute the hdfs
command.
Thanks & Regards,
Rohith Sharma K S
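One quick way to make that check (the paths below are hypothetical; on a real client you would pipe the output of `hdfs classpath` instead of the sample string):

```shell
# List each classpath entry on its own line so the conf dir is easy to spot.
# On a real client you would run:  hdfs classpath | tr ':' '\n'
# Here a sample classpath string stands in (paths are assumptions):
cp="/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/guava.jar"
echo "$cp" | tr ':' '\n'
# Then confirm hdfs-site.xml actually lives in the conf dir that appears, e.g.:
# ls /opt/hadoop/etc/hadoop/hdfs-site.xml
```

If the directory printed first is not the one you edited, the client is loading a different configuration than you expect.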
-----Original Message-----
From: Tomás Fernández Pena [mailto:tf.p...@gmail.com] On Behalf Of Tomás
Fernández Pena
Hi,
Thanks for your kind answers. I've found the problem.
The point is that I'd only specified the dfs.blocksize parameter in the
hdfs-site.xml of the NameNode and DataNodes, but not in the client's.
My question now is, how can I prevent the client from changing the
blocksize value? I've tried to
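The earlier suggestion to "make it final" would look like the sketch below. Note that `<final>true</final>` stops later-loaded resources and user code from overriding the value on whichever node reads that file, so to pin the client's blocksize it needs to be marked final in the client's own hdfs-site.xml:

```xml
<property>
  <name>dfs.blocksize</name>
  <value>67108864</value>
  <final>true</final>
</property>
```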
Hi,
As I said before, I wrote a TableInputFormat and RecordReader extension that
reads input data from an HBase table;
in my case every single row is associated with a single InputSplit.
For example, if I have 30 rows to process, my custom TableInputFormat
will generate 30 input splits.
-- Forwarded message --
From: Mike Rapuano mrapu...@choicestream.com
Date: Thu, Nov 20, 2014 at 10:05 AM
Subject: Upgrade from 4.6 to 5.2
To: user-h...@hadoop.apache.org
Trying to upgrade a cluster running CDH 4.6 to CDH 5.2; it hangs upgrading
HDFS metadata at "NameNode RPC Wait".
Hi all,
We are testing the ZKFC for NameNode HA. Looking at the design of the ZKFC,
there is a monitor thread which monitors the health of the NameNode, and the
design mentions an RPC timeout option. I want to know how to configure this
option, and
if the option is used just when the
Hi there,
I am writing a Hadoop streaming job where certain columns contain natural
language text. In that case, using '\t' as the default delimiter is not an
option for me.
Does anyone know how to pass a non-printing character, like SOH (start of
heading), as the key/value separator?
I tried to pass
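One way to do this (a sketch; the streaming jar path is an assumption for your distribution) is to capture the SOH byte (ASCII 0x01) in a shell variable with printf and hand it to the streaming `-D` options:

```shell
# Capture ASCII SOH (0x01) in a shell variable.
SEP="$(printf '\001')"
# Sanity check: dump the byte in hex (should show 01).
printf '%s' "$SEP" | od -An -tx1
# With a real cluster you would then pass it to the streaming job,
# e.g. (jar path is an assumption):
#   hadoop jar hadoop-streaming.jar \
#     -D stream.map.output.field.separator="$SEP" \
#     -input in -output out -mapper cat -reducer cat
```

`stream.map.output.field.separator` is the documented streaming property for the map output key/value separator; the same variable can be reused for the other separator properties if your job needs them.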
We have just started the work, so it's not available even in trunk.
Regards,
Yi Liu
From: Vincent,Wei [mailto:weikun0...@gmail.com]
Sent: Wednesday, November 19, 2014 3:08 PM
To: user@hadoop.apache.org
Subject: Re: About HDFS-RAID
Thanks, Liu.
But I found that HDFS-7285 is still open, so it
Hi all,
I am using the NameNode HA feature (ZKFC), and there is a configuration:
ha.health-monitor.rpc-timeout.ms.
There are two situations:
1. The active NameNode is down:
will the ZKFC wait ha.health-monitor.rpc-timeout.ms and then call the
failover, or not wait?
2. The active NameNode is too busy
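For reference, this timeout is set in core-site.xml like any other Hadoop property; the sketch below uses 45000 ms, which I believe is the shipped default (check core-default.xml for your version):

```xml
<property>
  <name>ha.health-monitor.rpc-timeout.ms</name>
  <value>45000</value>
</property>
```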