Are you trying to serve blocks from a shared directory, e.g. NFS?
The storageID for a node is recorded in a file named "VERSION" in
${dfs.data.dir}/current. If one node claims that the storage directory is
already locked, and another node is reporting the first node's storageID, it
makes me think t
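For reference, the VERSION file is a small Java-properties style file under
${dfs.data.dir}/current; an illustrative example (all values made up) looks
roughly like this:

    #Tue Aug 25 10:14:12 CDT 2009
    namespaceID=1234567890
    storageID=DS-123456789-192.168.1.10-50010-1250000000000
    cTime=0
    storageType=DATA_NODE
    layoutVersion=-18

If two datanodes share the same storage directory (e.g. over NFS), they will
also share this storageID, which produces exactly the kind of confusion
described above.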
Brian Bockelman wrote:
On Aug 24, 2009, at 5:42 AM, Steve Loughran wrote:
Raghu Angadi wrote:
Suresh had made a spreadsheet for memory consumption... will check.
A large portion of NN memory is taken by references. I would expect
memory savings to be very substantial (same as going from 64bi
Hi,
Today I have a problem running a job on Hadoop, with the log message "Task
process exit with nonzero status of 126. Error reading task output.", but I
can put files in HDFS or get some files from HDFS successfully. I formatted
the namenode and started Hadoop without any error log message, when I r
fs.default.name should be the machine that runs the namenode; is it 192.168.1.103?
zjffdu wrote:
> Hi all,
>
> I have two computers, and in hadoop-site.xml I define fs.default.name as
> localhost:9000; then I cannot access the cluster with the Java API from
> another machine.
>
> But if I chang
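A minimal sketch of the client side (the host 192.168.1.103 and port 9000 are
taken from this thread and may not match your setup): point fs.default.name at
the namenode's real hostname or IP, not localhost, before opening the
FileSystem.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RemoteHdfsClient {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // "localhost:9000" only resolves correctly on the namenode machine
        // itself; a remote client has to use the namenode's address.
        conf.set("fs.default.name", "hdfs://192.168.1.103:9000");
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
          System.out.println(status.getPath());
        }
      }
    }

Note the namenode itself should also be started with fs.default.name set to the
real hostname rather than localhost, otherwise it may only listen on the
loopback interface.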
On Tue, Aug 25, 2009 at 2:57 AM, Steve Loughran wrote:
> On petascale-level computers, the application codes' CPU instructions are
>> about 10% floating point (that is, in scientific applications, there are
>> less floating point instructions than in most floating point benchmarks).
>> Of the re
Is there any way to set the replication to a specific number while
FTP-ing to the HDFS?
I have dfs.replication set to 2 on the nodes, but when I transfer
something via FTP to HDFS it replicates 3 times.
Thanks.
--
-Turner Kunkel
Hey Turner,
Replication level is requested by the client. You need to set the
replication factor in the Hadoop configuration of the client, not on the
nodes.
Brian
On Aug 25, 2009, at 11:28 AM, Turner Kunkel wrote:
Is there any way to set the replication to a specific number while
FTP-ing
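A minimal sketch of what Brian describes (the paths here are hypothetical): the
replication factor that matters is the one in the configuration of the process
writing the file, and existing files can be adjusted afterwards with
FileSystem.setReplication.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 2);  // client-side setting is what counts
        FileSystem fs = FileSystem.get(conf);

        // New files written through this client get 2 replicas.
        fs.copyFromLocalFile(new Path("/tmp/input.txt"),
                             new Path("/data/input.txt"));

        // Files that already landed with 3 replicas can be changed after the fact.
        fs.setReplication(new Path("/data/input.txt"), (short) 2);
      }
    }

If the FTP gateway is the process doing the writing, its Hadoop configuration
(or an after-the-fact setReplication / "hadoop fs -setrep") is where the value
has to go.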
Gotcha, thanks.
Any idea how to do that on a Windows machine? My Hadoop cluster is on 3
Ubuntu nodes, but I'm transferring data from a bunch of Windows servers. Or
can I configure replication in the FTP client itself?
-Turner
Brian Bockelman wrote:
>
> Hey Turner,
>
> Replication level is
It is fairly straightforward: on completion of a successful fetch, the
total number of bytes fetched is divided by the total time taken till then.
Please look at the fetchOutputs method in ReduceTask.java, the portion of
code that handles successful copies.
Jothi
On 8/25/09 8:23 AM, "bharath vissa
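A toy illustration of that arithmetic (numbers made up, not actual ReduceTask
code): the reported copy rate is simply bytes fetched so far over elapsed time.

    public class CopyRateExample {
      public static void main(String[] args) {
        long bytesFetched = 256L * 1024 * 1024;  // 256 MB copied so far
        long elapsedMillis = 64000;              // 64 s since the shuffle began
        double mbPerSec = (bytesFetched / (1024.0 * 1024.0))
                        / (elapsedMillis / 1000.0);
        System.out.println(mbPerSec + " MB/s");  // prints 4.0 MB/s
      }
    }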
On Tue, Aug 25, 2009 at 11:59 AM, Ted Dunning wrote:
> On Tue, Aug 25, 2009 at 2:57 AM, Steve Loughran wrote:
>
>> On petascale-level computers, the application codes' CPU instructions are
>>> about 10% floating point (that is, in scientific applications, there are
>>> less floating point instruct
Hi all,
I know this has been filed as a JIRA improvement already
http://issues.apache.org/jira/browse/HDFS-343, but is there any good
workaround at the moment? What's happening is I have added a few new EBS
volumes to half of the cluster, but Hadoop doesn't want to write to them.
When I try to
Change the ordering of the volumes in the config files.
On Tue, Aug 25, 2009 at 12:51 PM, Kris Jirapinyo wrote:
> Hi all,
> I know this has been filed as a JIRA improvement already
> http://issues.apache.org/jira/browse/HDFS-343, but is there any good
> workaround at the moment? What's happen
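For reference, the setting Ted is referring to is dfs.data.dir in
hadoop-site.xml, a comma-separated list of local directories; the paths below
are placeholders:

    <property>
      <name>dfs.data.dir</name>
      <value>/mnt/disk1/hdfs/data,/mnt/ebs1/hdfs/data,/mnt/ebs2/hdfs/data</value>
    </property>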
The order matters?
On Tue, Aug 25, 2009 at 1:16 PM, Ted Dunning wrote:
> Change the ordering of the volumes in the config files.
>
> On Tue, Aug 25, 2009 at 12:51 PM, Kris Jirapinyo wrote:
>
> > Hi all,
> > I know this has been filed as a JIRA improvement already
> > http://issues.apache.or
It used to matter quite a lot.
On Tue, Aug 25, 2009 at 1:25 PM, Kris Jirapinyo wrote:
> The order matters?
>
>
Changing the ordering of dfs.data.dir won't change anything, because
dfs.data.dir is written to in a round-robin fashion.
Kris, I think you're stuck with the hack you're performing :(. Sorry I
don't have better news.
Alex
On Tue, Aug 25, 2009 at 1:16 PM, Ted Dunning wrote:
> Change the orderi
Hey there,
Apologies for this not going out sooner -- apparently it was sitting
as a draft in my inbox. A few of you have pinged me, so thanks for
your vigilance.
It's time for another Hadoop/Lucene/Apache Stack meetup! We've had
great attendance in the past few months, let's keep it up! I'm alwa
For JMX you can also look at the JMXGet.java class. You can use it to
get the data through JMX.
Boris
On 8/24/09 1:27 AM, "Stas Oskin" wrote:
> Hi.
>
> One way you can do this is through JMX.
>>
>> http://www.jointhegrid.com/svn/hadoop-cacti-jtg/trunk/src/com/jointhegrid/hadoopjmx/
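A minimal sketch of pulling these numbers over plain JMX (the host, port, and
the need to enable remote JMX via the daemon's HADOOP_*_OPTS in hadoop-env.sh
are assumptions; the MBean names vary between Hadoop versions, so this just
lists whatever the daemon exposes):

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class HadoopJmxPoller {
      public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://namenode-host:8004/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
          MBeanServerConnection mbsc = connector.getMBeanServerConnection();
          // Dump every registered MBean; pick the Hadoop ones you care about
          // from this output and read their attributes the same way.
          Set<ObjectName> names = mbsc.queryNames(null, null);
          for (ObjectName name : names) {
            System.out.println(name);
          }
        } finally {
          connector.close();
        }
      }
    }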
I'm also suspecting that this error can be triggered by using IPMP on
Solaris and other tricks that cause multiple NICs on the host to be
represented.
[It doesn't appear that one can force traffic outbound on a specific NIC, so
when the data node does the block report to the namenode it can be
For now you are stuck with the hack. Sooner or later Hadoop has to
handle heterogeneous nodes better.
In general it tries to write to all the disks irrespective of % full,
since that gives the best performance (assuming each partition's
capabilities are the same). But it is lame at handling skews
How does copying the subdir work? What if that partition already has the
same subdir (in the case that our partition is not new but relatively
new...with maybe 10% used)?
Thanks for the suggestions so far guys.
Kris.
On Tue, Aug 25, 2009 at 5:01 PM, Raghu Angadi wrote:
>
> For now you are stu
Kris Jirapinyo wrote:
How does copying the subdir work? What if that partition already has the
same subdir (in the case that our partition is not new but relatively
new...with maybe 10% used)?
You can copy the files. There isn't really any requirement on the number of
files in a directory. somethi
I am compiling the Hadoop 0.20 Yahoo version using the following command and
got an exception.
Where should I put fop.jar (I got fop from
http://www.hightechimpact.com/Apache/xmlgraphics/fop/binaries/fop-0.95-bin.tar.gz
) to make it work?
Zheng
ant clean -Dversion=0.20.1 -Dcompile.native=true -Dc
Hi
I am facing an issue while starting my Hadoop cluster.
When I run the command:
$ bin/hadoop namenode -format
I get this error:
/home/HadoopAdmin/hadoop-0.18.3/bin/../conf/hadoop-env.sh: line 2:
$'\r': command not found
/home/HadoopAdmin/hadoop-0.18.3/bin/../conf/hadoop
Can you run "dos2unix /home/HadoopAdmin/hadoop-0.18.3/bin/../conf/*.sh" and
then try again?
Thanks,
-Vikas.
On Wed, Aug 26, 2009 at 11:56 AM, Puri, Aseem wrote:
> Hi
>
> I am facing an issue while starting my hadoop cluster.
>
> When I run the command:
>
> $ bin/hadoop namenode -form