sions like 0.12.x or earlier.
Has anybody done an upgrade from 0.12.x? If so, would you mind sharing your
joyful/painful experience with me?
Any tips or advice would be appreciated.
Thank you in advance,
Regards,
--
Taeho Kang [tkang.blogspot.com]
Software Engineer, NHN Corporation, Korea
ates... woo.. here we go again. Hadoop is not designed to handle this
> > need. Basically, its HDFS is designed for large files that rarely change
> -
> >
>
> Yes, understood. I could think of replacing whole slabs, or deleting slabs
> once all the contained files are obsolete.
>
> > Let us know how your situation goes.
> >
>
> Will do.
>
> Lars
>
>
--
Taeho Kang [tkang.blogspot.com]
Software Engineer, NHN Corporation, Korea
to handle this
need. Basically, its HDFS is designed for large files that rarely change -
no appending, no updates in the middle. HBase needs update/append features to
do what it wants to do, but I haven't had enough experience with it to
comment on how well it works.
Let us know ho
to be made without the provision
> >>> of an upgrade utility?
> >>>
> >>> If not, are you willing to accept the risk that the upgrade
> >>> may fail if you have corruption in your root or meta regions?
> >>>
> >>> After HADOOP-2478, we will be able to build a fault tolerant
> >>> upgrade utility, should HBase's file structure change again.
> >>> Additionally, we will be able to provide the equivalent of
> >>> fsck for HBase after HADOOP-2478.
> >>>
> >>> ---
> >>> Jim Kellerman, Senior Engineer; Powerset
> >>>
> >>>
> >>>
> >>>
> >>>
> >
> >
>
--
Taeho Kang [tkang.blogspot.com]
Software Engineer, NHN Corporation, Korea
other way that I can get an instance of JobTracker?
Also, what was the reason for removing the getTracker() method? (I am just
curious...)
Thank you in advance for all your help,
/taeho
--
Taeho Kang [tkang.blogspot.com]
Yes, you're right, Avinash.
Thank you.
p.s. Can someone please fix the API comments in the hdfs.h file in the next
release?
On 10/8/07, Avinash Lakshman <[EMAIL PROTECTED]> wrote:
>
> I believe it is hdfsFreeFileInfo().
>
> Avinash
>
> -----Original Message-----
>
NULL on error.
*/
hdfsFileInfo *hdfsListDirectory(hdfsFS fs, const char* path,
                                int *numEntries);
--
Taeho Kang [tkang.blogspot.com]
, Raghu Angadi <[EMAIL PROTECTED]> wrote:
>
>
> (I am not sure if I replied already...)
>
> Taeho Kang wrote:
> > Thanks for your quick reply, Raghu.
> >
> > The problem I am faced with is...
> > - I need to move my machines to a new location
>
> Assum
info) once the cluster starts up in
the new location.
Is it going to be a problem?
On 10/4/07, Raghu Angadi <[EMAIL PROTECTED]> wrote:
>
> Taeho Kang wrote:
> > Hello all.
> >
> > Due to limited space in current datacenter, I am trying to move my
> Hadoop
> &
an
IP address?
Also, what was the motivation behind using an IP address instead of a
hostname to identify datanodes?
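For what it's worth, one common argument is that a hostname is only an
indirection over an address: the name-to-address mapping is resolved at
lookup time and can change between lookups (DHCP leases, DNS edits), while
the address a node actually bound to is unambiguous. A minimal sketch of
that indirection (localhost is just a stand-in here, not a datanode):

```python
import socket

# A hostname is resolved to an address only at lookup time; what it maps
# to is controlled by DNS/hosts configuration, which can change under you.
addr = socket.gethostbyname("localhost")
print(addr)  # an IPv4 dotted-quad string, typically 127.0.0.1
```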
--
Taeho Kang [tkang.blogspot.com]
Software Engineer, NHN Corporation, Korea
Thanks for your answers and clarifications.
I will try to do some more benchmark testing with more nodes and keep you
guys posted.
On 9/14/07, Owen O'Malley <[EMAIL PROTECTED]> wrote:
>
>
> On Sep 13, 2007, at 2:20 AM, Taeho Kang wrote:
>
> > I did run WordCou
                                            1 : 4
Longest Time Taken for Map      41     83   1 : 2
Longest Time Taken for Reduce   58    264   1 : 4.5
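The quoted ratios follow from the raw timings. (Which column is Java and
which is C++ is my inference from the question below about C++ performance,
not something the fragment states; the units are assumed to be seconds.)

```python
# Rough check of the reported timing ratios. Column labels (Java vs C++)
# are inferred from context, not stated in the table above.
timings = {
    "map":    (41, 83),    # (java_time, cpp_time) -- units assumed
    "reduce": (58, 264),
}

for phase, (java_t, cpp_t) in timings.items():
    ratio = cpp_t / java_t
    print(f"{phase}: 1 : {ratio:.1f}")
# map comes out near 1 : 2, reduce near 1 : 4.6
# (the thread rounds the latter to 1 : 4.5)
```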
Any guesses or ideas on how to improve the performance of C++ MapReduce?
Taeho
--
Taeho Kang [tkang.blogspot.com]
mode?
Any comments or ideas would be appreciated. Thank you in advance.
Regards,
Taeho
--
Taeho Kang [tkang.blogspot.com]
vy requirement.
>
> Is anybody using HDFS as a long term storage solution? Interested in any
> info. Thanks
>
> - ds
>
>
--
Taeho Kang [tkang.blogspot.com]
Software Engineer, NHN Corporation, Korea
e metadata management
using a DB? (maybe as a subproject?)
Do you think it would make the system more scalable, or is the additional
complexity of using a DB not worth considering?
/Taeho
On 8/29/07, Sameer Paranjpye <[EMAIL PROTECTED]> wrote:
>
>
>
> Taeho Kang wrote:
>
your
> installation. All these numbers are available via NamenodeFsck.Result
>
> HADOOP-1687 ( http://issues.apache.org/jira/browse/HADOOP-1687) has a
> detailed discussion of the amount of memory used by Namenode data
> structures.
>
> Sameer
>
> Taeho Kang wrote:
> &g
ss in
the org.apache.hadoop.dfs package. Am I correct here in using NamenodeFsck?
Or, has anybody done similar experiments?
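As a rough illustration of the kind of estimate discussed above: with a
per-object heap cost, the file/directory/block counts from NamenodeFsck
translate directly into a memory figure. (The ~150-bytes-per-namespace-object
number below is a commonly quoted rule of thumb, not a figure from this
thread; HADOOP-1687 has the detailed accounting.)

```python
# Back-of-envelope namenode heap estimate.
# ASSUMPTION: ~150 bytes of heap per namespace object (file, directory,
# or block) -- a rough rule of thumb, not an exact per-release figure.
BYTES_PER_OBJECT = 150

def estimate_heap_bytes(num_files, num_dirs, num_blocks):
    """Crude estimate of namenode heap for the given namespace size."""
    return (num_files + num_dirs + num_blocks) * BYTES_PER_OBJECT

# e.g. 1M files in 100K directories, averaging 2 blocks per file:
est = estimate_heap_bytes(1_000_000, 100_000, 2_000_000)
print(f"~{est / 2**20:.0f} MiB")  # roughly 443 MiB under this assumption
```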
Any comments/suggestions would be appreciated.
Thanks in advance.
Best Regards,
--
Taeho Kang
Software Engineer, NHN Corporation, Seoul, South Korea
Homepage : tkang.blogspot.com