Hi Tariq, I am trying to get started with the WordCount MapReduce example, but I am not sure how or where to start. I am very new to Java. Can you help me work through this? Any help will be appreciated.
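For reference, the core map/reduce logic of WordCount can be sketched in plain Java, without the Hadoop API (class and method names here are illustrative, not Hadoop's):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of WordCount's core logic in plain Java, without the Hadoop API.
// In a real MapReduce job, the map step emits (word, 1) pairs, the framework
// groups them by key during the shuffle, and the reduce step sums the 1s.
public class WordCountSketch {
    // Map and reduce collapsed into one in-memory pass for illustration.
    public static Map<String, Integer> countWords(String text) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String token : text.toLowerCase().split("\\s+")) {
            if (token.isEmpty()) continue;        // skip blanks from leading whitespace
            counts.merge(token, 1, Integer::sum); // "reduce": sum the 1s per word
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords("the quick fox the fox"));
    }
}
```

In a Hadoop job the same logic is split into a Mapper and a Reducer class and submitted with a driver, but the counting idea is exactly this.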
Hi All, please help me get started with Hadoop on CDH; I have installed it on my local PC. Any help will be appreciated.

On Thu, Aug 9, 2012 at 9:10 PM, Ellis H. Wilson III <[email protected]> wrote:

> Hi all!
>
> Can someone please briefly explain the difference? I do not see
> deprecation warnings for fs.local.block.size when I run with them set, and I
> see two copies of RawLocalFileSystem.java (the other is
> local/RawLocalFs.java).
>
> The things I really need answers to are:
> 1. Is the default boosted to 64MB from Hadoop 1.0 to Hadoop 2.0? I
> believe it is, but want validation on that.
> 2. Which one controls the shuffle block size?
> 3. If I have a single-machine, non-distributed instance and point it at
> file://, do both of these control the persistent data's block size, or just
> one of them?
> 4. Is there any way to run with, say, a 512MB block size for the persistent
> data and the default 64MB block size for the shuffled data?
>
> Thanks!
>
> ellis
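A config sketch of where these two properties usually live, assuming the standard property names (dfs.block.size in Hadoop 1.x, renamed dfs.blocksize in 2.x, and fs.local.block.size for the file:// filesystem); the byte values below are illustrative, not defaults I can vouch for:

```xml
<!-- hdfs-site.xml: block size for persistent HDFS data (illustrative value) -->
<property>
  <name>dfs.block.size</name>  <!-- dfs.blocksize in Hadoop 2.x -->
  <value>536870912</value>     <!-- 512 MB, in bytes -->
</property>

<!-- core-site.xml: block size used by the local (file://) filesystem -->
<property>
  <name>fs.local.block.size</name>
  <value>67108864</value>      <!-- 64 MB, in bytes -->
</property>
```

If that is right, setting the two properties independently would be one way to approach question 4, but I would want someone who knows the shuffle path to confirm which property it actually honors.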
