On Mon, May 17, 2010 at 5:10 PM, jiang licht wrote:
> I am considering using a machine to save a
> redundant copy of the HDFS metadata by setting dfs.name.dir in
> hdfs-site.xml like this (as in YDN):
>
>
> <property>
>   <name>dfs.name.dir</name>
>   <value>/home/hadoop/dfs/name,/mnt/namenode-backup</value>
>   <final>true</final>
> </property>
>
> where the two folders are on different machines, so that /mnt/namenode-backup
> holds an off-machine copy of the metadata.
I followed the steps mentioned here:
http://developer.yahoo.com/hadoop/tutorial/module2.html#decommission to
decommission a data node. What I see from the namenode is that the hostname of
the machine I decommissioned shows up in both the list of dead nodes and the
list of live nodes, where its admin status
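For context, the decommission procedure in that tutorial boils down to pointing the namenode at an exclude file and then refreshing the node list. A minimal sketch (the file path here is an example, not anyone's actual config):

```xml
<!-- hdfs-site.xml: point the namenode at an exclude file (path is an example) -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/hadoop/excludes</value>
</property>
```

After adding the hostname of the node to that exclude file, running "hadoop dfsadmin -refreshNodes" tells the namenode to start decommissioning it; the node should move through "Decommission In Progress" to "Decommissioned" in the web UI.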
There is a new feature called concat(), which concatenates files consisting of
full blocks.
So the idea is to copy the individual blocks in parallel, then concatenate them
back into the original files once they are copied.
You will have to write some code to do this or modify distcp.
This is in 0.21/0.22.
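Outside Hadoop, the idea can be sketched in a few lines of plain Python (purely illustrative, not distcp or HDFS code; the chunk size and thread count are arbitrary): split the source into block-sized chunks, copy the chunks in parallel, then "concat" the parts back into a single file in order.

```python
# Illustrative sketch only -- not Hadoop code. Mimics the proposal:
# copy a large file as independent block-sized chunks in parallel,
# then concatenate the chunks back into one file (the concat() step).
import os
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1024 * 1024  # stand-in for the HDFS block size

def copy_chunk(src, dst_dir, index):
    """Copy one chunk of src into its own part file (one 'map task')."""
    part = os.path.join(dst_dir, "part-%05d" % index)
    with open(src, "rb") as f, open(part, "wb") as out:
        f.seek(index * CHUNK_SIZE)
        out.write(f.read(CHUNK_SIZE))
    return part

def parallel_copy(src, dst):
    size = os.path.getsize(src)
    nchunks = max(1, (size + CHUNK_SIZE - 1) // CHUNK_SIZE)
    tmp = tempfile.mkdtemp()
    # Copy all chunks concurrently; map() preserves chunk order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        parts = list(pool.map(lambda i: copy_chunk(src, tmp, i),
                              range(nchunks)))
    # The "concat" step: stitch the part files back into one file, in order.
    with open(dst, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)
    shutil.rmtree(tmp)
```

In real HDFS the stitching would be done with the new concat() call on full blocks rather than by rewriting bytes, which is what makes the approach cheap there.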
Hi,
Is there a way to parallelize the copy of really large files?
From my understanding, each map in distcp currently copies one file.
So for really large files this would be pretty slow, since each file gets
only one map no matter how big it is.
Thanks,
Mridul
Sorry for bothering everyone, I accidentally configured my dfs.data.dir and
mapred.local.dir to the same directory... Bad copy/paste job.
Thanks for everyone's help!
So I pulled everything off NFS and I'm still getting the original error: a
FileNotFoundException for current/VERSION.
I only have 4 slaves and scp'ed the Hadoop directory to all 4 slaves.
Any other ideas?
On May 14, 2010, at 7:41 PM, Hemanth Yamijala wrote:
> Andrew,
>
>> Just to be clear,
http://lucene.apache.org/java/3_0_1/api/all/org/apache/lucene/search/ParallelMultiSearcher.html
Ian
Aécio writes:
> Thanks for the replies.
>
> I'm already investigating how katta works and how I can extend it.
> What do you mean by distributed search capability? Does Lucene provide any
> way to "
On May 17, 2010, at 5:25 AM, Steve Loughran wrote:
> Brian Bockelman wrote:
>> On May 14, 2010, at 8:27 PM, Todd Lipcon wrote:
>>> Hey Brian,
>>>
>>> Yep, excessive GC definitely sounds like a likely culprit. I'm surprised you
>>> didn't see OOMEs in the log, though.
>>>
>> We didn't until the third restart today. I have no clue why we haven't seen
>> this in the past