I saw a similar post 
(http://www.mail-archive.com/[email protected]/msg01112.html)
but the answer was not very satisfactory.

Imagine I used Hadoop as fault-tolerant storage. I had 10 nodes, each loaded with 200 GB.
I found the nodes were overloaded and decided to add 2 new boxes with bigger disks.
How do I redistribute the existing data? I don't want to bump up the replication factor, since the old nodes are already overloaded. It'd be very helpful if this function were implemented at the system level.
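
To make the request concrete: what I have in mind is a single admin command that shuffles existing blocks from the full nodes onto the new, emptier ones, without touching the replication factor. Something along these lines (a sketch of the invocation I'd hope for; the command name and flag are my guesses, not something I've confirmed in a release):

    # Hypothetical rebalance command: move blocks around until every
    # datanode's disk utilization is within 10 percentage points of
    # the cluster-wide average.
    bin/hadoop balancer -threshold 10

Ideally this could be run from any node with the cluster configuration, with the framework deciding which blocks to move and throttling the transfers so normal traffic isn't swamped.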

Thanks.
