Re: merging into MapFile

2008-12-10 Thread yoav.morag

let me rephrase my question:
are all the parts of a MapFile necessarily affected by a merge? if so, it's
not scalable, no matter what the block size is.
however, since MapFile is essentially a directory and not a file, I don't
see a reason why all parts should be affected. can anyone comment on the
actual implementation of the merge algorithm?
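to make the question concrete, the naive rewrite I have in mind looks
roughly like this (just a sketch, assuming Text keys/values and that the
new keys sort after the existing ones; not whatever built-in merge may
exist):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class NaiveMapFileRewrite {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        MapFile.Reader existing = new MapFile.Reader(fs, "big-mapfile", conf);
        MapFile.Writer rewritten =
            new MapFile.Writer(conf, fs, "big-mapfile.merged", Text.class, Text.class);

        // copy every entry of the existing MapFile; this is the part
        // whose cost grows with the size of the large MapFile
        Text key = new Text();
        Text value = new Text();
        while (existing.next(key, value)) {
            rewritten.append(key, value);
        }
        // ...then append the new entries, which must arrive in sorted order
        rewritten.append(new Text("zzz-new-key"), new Text("new-value"));

        existing.close();
        rewritten.close();
    }
}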


Elia Mazzawi-2 wrote:
 
 it has to do with the data block size,
 
 I had many small files and the performance became much better when I
 merged them,
 
 the default block size is 64MB so redo your files to <= 64MB (what I did
 and recommend)
 or reconfigure your hadoop.
 
 <property>
   <name>dfs.block.size</name>
   <value>67108864</value>
   <description>The default block size for new files.</description>
 </property>
 
 do something like
 cat * | rotatelogs ./merged/m 64M
 it will merge and chop the data for you.
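 
 or, if you write the merged file from java, you can give just that one
 file a bigger block size instead of changing the cluster default. a rough
 sketch (paths and sizes here are made up, and I'm going from memory on the
 create() overload):
 
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
 public class CreateWithBlockSize {
     public static void main(String[] args) throws IOException {
         Configuration conf = new Configuration();
         FileSystem fs = FileSystem.get(conf);
         // 4 KB io buffer, replication 3, 64 MB blocks for this one file
         FSDataOutputStream out = fs.create(new Path("/merged/m0001"),
                 true, 4096, (short) 3, 64L * 1024 * 1024);
         out.writeBytes("merged content goes here\n");
         out.close();
     }
 }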
 
 yoav.morag wrote:
 hi all -
 can anyone comment on the performance cost of merging many small files
 into
 an increasingly large MapFile ? will that cost be dependent on the size
 of
 the larger MapFile (since I have to rewrite it) or is there a built-in
 strategy to split it into smaller parts, affecting only those which were
 touched ? 
 thanks -
 Yoav.
   
 
 
 

-- 
View this message in context: 
http://www.nabble.com/merging-into-MapFile-tp20914388p20930594.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Re: File loss at Nebraska

2008-12-10 Thread Steve Loughran

Doug Cutting wrote:

Steve Loughran wrote:
Alternatively, why we should be exploring the configuration space 
more widely


Are you volunteering?

Doug


Not yet. I think we have a fair bit of what is needed, and it would make 
for some interesting uses of the Yahoo!-HP-Intel cirrus testbed.


There was some good work done at U Maryland on Skoll, a tool for 
efficiently exploring the configuration space of an application so as to 
find defects:


http://www.cs.umd.edu/~aporter/Docs/skollJournal.pdf
http://www.cs.umd.edu/~aporter/MemonPorterSkoll.ppt

What you need is something like that applied to the hadoop config space 
and a set of tests that will show up problems in a timely manner.


-steve


Copy-rate of reducers decreases over time

2008-12-10 Thread patek tek
Hello,
I have been running experiments with Hadoop and noticed that
the copy-rate of reducers decreases over time (even though there is the
same number of mappers generating intermediate data).

Is there any obvious answer to this?

Looking forward to your reply!

patektek


question: NameNode hanging on startup as it intends to leave safe mode

2008-12-10 Thread Karl Kleinpaste
We have a cluster comprised of 21 nodes holding a total capacity of
about 55T where we have had a problem twice in the last couple weeks on
startup of NameNode.  We are running 0.18.1.  DFS space is currently
just below the halfway point of actual occupation, about 25T.

Symptom is that there is normal startup logging on NameNode's part,
where it self-analyzes its expected DFS content, reports #files known,
and begins to accept reports from slaves' DataNodes about blocks they
hold.  During this time, NameNode is in safe mode pending adequate block
discovery from slaves.  As the fraction of reported blocks rises,
eventually it hits the required 0.9990 threshold and announces that it
will leave safe mode in 30 seconds.
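
As far as I understand, that threshold and the 30-second wait correspond to
these settings (defaults shown; we have not overridden them):

<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999</value>
</property>
<property>
  <name>dfs.safemode.extension</name>
  <value>30000</value>
</property>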

The problem occurs when, at the point of logging 0 seconds to leave
safe mode, NameNode hangs: It uses no more CPU; it logs nothing
further; it stops responding on its port 50070 web interface; hadoop
fs commands report no contact with NameNode; netstat -atp shows a
number of open connections on 9000 and 50070, indicating the connections
are being accepted, but NameNode never processes them.

This has happened twice in the last 2 weeks and it has us fairly
concerned.  Both times, it has been adequate simply to start over again,
and NameNode successfully comes to life the 2nd time around.  Is anyone
else familiar with this sort of hang, and do you know of any solutions?



File Splits in Hadoop

2008-12-10 Thread amitsingh

Hi,

I am stuck with some questions based on following scenario.

1) Hadoop normally splits the input file and distributes the splits 
across slaves (referred to as Psplits from now on), into chunks of 64 MB.
a) Is there any way to specify split criteria, so that for example a huge 4 
GB file is split into 40-odd files (Psplits) respecting record boundaries?
b) Is it even required that these physical splits (Psplits) obey record 
boundaries?


2) We can get the locations of these Psplits on HDFS as follows:
BlockLocation[] blkLocations = fs.getFileBlockLocations(file, 0,  
length); //FileInputFormat line 273
In FileInputFormat, for each blkLocation (Psplit) multiple logical 
splits (referred to as Lsplits from now on) are created based on a heuristic 
for the number of mappers.


Q) How is the following situation handled in TextInputFormat, which reads 
line by line:

   i) The input file is split as described in step 1 into more than 2 parts.
   ii) Suppose there is a line of text which starts near the end of 
Psplit-i and ends in Psplit-i+1 (say Psplit2 and Psplit3).
   iii) Which mapper gets this line spanning multiple Psplits (mapper_i 
or mapper_i+1)?
   iv) I went through the FileInputFormat code; Lsplits are created only 
for a particular Psplit, not across Psplits. Why so?


Q) In short, if one has to read arbitrary objects (not lines), how does one 
handle records which are partially in one Psplit and partially in another?


--Amit





Re: question: NameNode hanging on startup as it intends to leave safe mode

2008-12-10 Thread Konstantin Shvachko

This is probably related to HADOOP-4795.
http://issues.apache.org/jira/browse/HADOOP-4795

We are testing it on 0.18 now. Should be committed soon.
Please let us know if it is something else.

Thanks,
--Konstantin

Karl Kleinpaste wrote:

We have a cluster comprised of 21 nodes holding a total capacity of
about 55T where we have had a problem twice in the last couple weeks on
startup of NameNode.  We are running 0.18.1.  DFS space is currently
just below the halfway point of actual occupation, about 25T.

Symptom is that there is normal startup logging on NameNode's part,
where it self-analyzes its expected DFS content, reports #files known,
and begins to accept reports from slaves' DataNodes about blocks they
hold.  During this time, NameNode is in safe mode pending adequate block
discovery from slaves.  As the fraction of reported blocks rises,
eventually it hits the required 0.9990 threshold and announces that it
will leave safe mode in 30 seconds.

The problem occurs when, at the point of logging 0 seconds to leave
safe mode, NameNode hangs: It uses no more CPU; it logs nothing
further; it stops responding on its port 50070 web interface; hadoop
fs commands report no contact with NameNode; netstat -atp shows a
number of open connections on 9000 and 50070, indicating the connections
are being accepted, but NameNode never processes them.

This has happened twice in the last 2 weeks and it has us fairly
concerned.  Both times, it has been adequate simply to start over again,
and NameNode successfully comes to life the 2nd time around.  Is anyone
else familiar with this sort of hang, and do you know of any solutions?




Libhdfs / fuse_dfs crashing

2008-12-10 Thread Brian Bockelman

Hey,

In Hadoop 0.19.0, we've been getting crashes, deadlocks, and other  
badness from libhdfs (I think; I'm using it through fuse-dfs).


https://issues.apache.org/jira/browse/HADOOP-4775

However, I've been at a complete loss to make any progress in  
debugging.  The problem happens consistently in our workflows, even  
though it's been problematic to find a simple test case (I suspect the  
issues are triggered by threading, while debug mode eliminates any  
threading!).  It doesn't appear there are any nice ways to debug or  
follow along with actions in fuse_dfs or libhdfs: I don't even know  
how to make DFSClient spill its guts to a log.
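
The only idea I have had so far (untested, and I am not sure the embedded
JVM even picks up a log4j.properties from the classpath fuse_dfs passes in)
is to turn the client logger up to DEBUG, something like:

log4j.logger.org.apache.hadoop.hdfs.DFSClient=DEBUG

(or org.apache.hadoop.dfs.DFSClient, depending on which release the package
rename landed in).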


Help!

Brian


When I system.out.println() in a map or reduce, where does it go?

2008-12-10 Thread David Coe
I've noticed that if I put a System.out.println() in the run() method I
see the result on my console.  If I put it in the map or reduce class, I
never see the result.  Where does it go?  Is there a way to get this
result easily (e.g. dump it to a log file)?

David


Re: When I system.out.println() in a map or reduce, where does it go?

2008-12-10 Thread Tarandeep Singh
you can see the output in the hadoop log directory (if you have used default
settings, it would be $HADOOP_HOME/logs/userlogs)
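
each task gets its own directory there, with separate stdout, stderr and
syslog files; if I remember the layout correctly it is something like

$HADOOP_HOME/logs/userlogs/<task-id>/stdout

so your System.out.println() output ends up in the stdout file of the task
that ran that map or reduce.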

On Wed, Dec 10, 2008 at 1:31 PM, David Coe [EMAIL PROTECTED] wrote:

 I've noticed that if I put a system.out.println in the run() method I
 see the result on my console.  If I put it in the map or reduce class, I
 never see the result.  Where does it go?  Is there a way to get this
 result easily (eg dump it in a log file)?

 David



Re: File Splits in Hadoop

2008-12-10 Thread Tarandeep Singh
On Wed, Dec 10, 2008 at 11:12 AM, amitsingh [EMAIL PROTECTED]wrote:

 Hi,

 I am stuck with some questions based on following scenario.

 1) Hadoop normally splits the input file and distributes the splits across
 slaves(referred to as Psplits from now), in to chunks of 64 MB.
 a) Is there Any way to specify split criteria  so for example a huge 4 GB
 file is split in to 40 odd files(Psplits) respecting record boundaries ?


you can set mapred.min.split.size in the JobConf.
You can set its value greater than the block size and hence force a split to
be larger than a block. However, this might result in splits containing
data blocks that are not local.
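
a minimal example of setting it from code (everything here except the
config key name is made up):

import org.apache.hadoop.mapred.JobConf;

public class MinSplitSizeExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // ask for splits of at least 256 MB, even with a 64 MB dfs.block.size;
        // as noted above, oversized splits may lose some data locality
        conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);
        System.out.println(conf.get("mapred.min.split.size"));
    }
}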



 b) Is it even required that these physical splits(Psplits) obey record
 boundaries ?

 2) We can get locations of these Psplits on HDFS as follows
 BlockLocation[] blkLocations = fs.getFileBlockLocations(file, 0,  length);
 //FileInputFormat line 273
 In FileInputFormat, for each blkLocation (Psplit) multiple logical
 splits (referred to as Lsplits from now on) are created based on a heuristic for
 the number of mappers.

 Q) How is following situation handled in TextInputFormat which reads line
 by line,
   i) Input File is split as described in step 1 in more than 2 parts
   ii) Suppose there is a line of text which starts near end of Psplit-i and
 end in Psplit-i+1 (say Psplit2 and Psplit3)
   iii) Which mapper gets this line spanning multiple Psplits(mapper_i or
 mapper_i+1)
   iv) I went through the FileInputFormat code, Lsplits are done only for a
 particular pSplit not across pSplit. Why so ?

 Q) In short, if one has to read arbitrary objects (not lines), how does one
 handle records which are partially in one Psplit and partially in another?


I am working on this as well and have not found an exact answer, but in my
view mapper_i should handle the line / record which is partially in one split
and partially in the other. mapper_i+1 should first seek to the beginning of a
new record (a line in this case) and start processing from there.
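
A rough stand-alone sketch of the convention I mean (not actual Hadoop code;
plain newline-delimited records in a local file, split as [start, end)):

import java.io.IOException;
import java.io.RandomAccessFile;

public class SplitBoundarySketch {
    static void readSplit(String path, long start, long end) throws IOException {
        RandomAccessFile in = new RandomAccessFile(path, "r");
        try {
            if (start != 0) {
                // back up one byte and discard up to the next newline, so a
                // record that begins exactly at 'start' is not thrown away
                in.seek(start - 1);
                in.readLine();
            } else {
                in.seek(0);
            }
            // every record that *begins* before 'end' belongs to this split,
            // even if it runs on into the next one
            while (in.getFilePointer() < end) {
                String record = in.readLine();
                if (record == null) {
                    break;
                }
                System.out.println("record: " + record);
            }
        } finally {
            in.close();
        }
    }
}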

Someone from the Hadoop core team, please correct me if this is wrong and
fill in the details.

Thanks,
Taran


 --Amit






Re: When I system.out.println() in a map or reduce, where does it go?

2008-12-10 Thread Ravion

Please check the userlogs directory.
- Original Message - 
From: David Coe [EMAIL PROTECTED]

To: core-user@hadoop.apache.org
Sent: Thursday, December 11, 2008 5:31 AM
Subject: When I system.out.println() in a map or reduce, where does it go?



I've noticed that if I put a system.out.println in the run() method I
see the result on my console.  If I put it in the map or reduce class, I
never see the result.  Where does it go?  Is there a way to get this
result easily (eg dump it in a log file)?

David


Re: When I system.out.println() in a map or reduce, where does it go?

2008-12-10 Thread Edward Capriolo
Also be careful when you do this. If you are running map/reduce on a
large file, the map and reduce operations will be called many times.
You can end up with a lot of output. Use log4j instead.
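
Something like this (a hypothetical mapper using the old
org.apache.hadoop.mapred API; the output goes to the task's syslog file
under logs/userlogs rather than to your console):

import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class LoggingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

    private static final Log LOG = LogFactory.getLog(LoggingMapper.class);

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        // logged per record, so keep it cheap and infrequent in real jobs
        LOG.info("processing record at offset " + key);
        output.collect(value, new LongWritable(1));
    }
}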


hadoop mapper 100% but cannot complete?

2008-12-10 Thread hc busy
Guys, I've just configured a hadoop cluster for the first time, and I'm
running a null map-reduction over the streaming interface (/bin/cat for
both map and reduce). I noticed that the mapper and reducer complete
100% in the web UI within a reasonable amount of time, but the job does not
complete. On the command line it displays

...INFO streaming.StreamJob: map 100% reduce 100%

In the web UI, it shows the map completion graph at 100%, but does not display a
reduce completion graph. The four machines are well equipped to handle the
size of the data (30gb). Looking at the task tracker on each of the machines, I
noticed that it is ticking through the percentages very, very slowly:

2008-12-10 16:18:55,265 INFO org.apache.hadoop.mapred.TaskTracker:
task_200812101532_0001_r_02_0 46.684883% Records R/W=149326846/149326834
 reduce
2008-12-10 16:18:57,055 INFO org.apache.hadoop.mapred.TaskTracker:
task_200812101532_0001_r_06_0 47.566963% Records R/W=151739348/151739342
 reduce
2008-12-10 16:18:58,268 INFO org.apache.hadoop.mapred.TaskTracker:
task_200812101532_0001_r_02_0 46.826576% Records R/W=149326846/149326834
 reduce
2008-12-10 16:19:00,058 INFO org.apache.hadoop.mapred.TaskTracker:
task_200812101532_0001_r_06_0 47.741756% Records R/W=153377016/153376990
 reduce
2008-12-10 16:19:01,271 INFO org.apache.hadoop.mapred.TaskTracker:
task_200812101532_0001_r_02_0 46.9636% Records R/W=149326846/149326834 
reduce
2008-12-10 16:19:03,061 INFO org.apache.hadoop.mapred.TaskTracker:
task_200812101532_0001_r_06_0 47.94259% Records R/W=153377016/153376990
 reduce
2008-12-10 16:19:04,274 INFO org.apache.hadoop.mapred.TaskTracker:
task_200812101532_0001_r_02_0 47.110992% Records R/W=150960648/150960644
 reduce

so it would continue like this for hours and hours. What buffer am I setting
too small, or what could possibly make it go so slow? I've worked on
hadoop clusters before and it has always performed great on similar sized or
larger data sets, so I suspect it's just a configuration somewhere that is
making it do this?
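
Is it one of these? (the values below are just what I was planning to try
next, not recommendations):

<property>
  <name>io.sort.factor</name>
  <value>100</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>200</value>
</property>
<property>
  <name>mapred.reduce.parallel.copies</name>
  <value>20</value>
</property>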

thanks in advance.


reply: When I system.out.println() in a map or reduce, where does it go?

2008-12-10 Thread koven
The best method is to find them in the web UI.
In hadoop-default.xml, find this:
<property>
  <name>mapred.job.tracker.http.address</name>
  <value>0.0.0.0:50030</value>
  <description>
    The job tracker http server address and port the server will listen on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
If you open port 50030, then you can check the System.out.println() output
in the web UI.




DistributedCache staleness

2008-12-10 Thread Anthony Urso
I have been having problems with changes to DistributedCache files on
HDFS not being reflected in subsequently run jobs.  I can change the
filename to work around this, but I would prefer a way to invalidate
the cache when necessary.

Is there a way to lower the timeout or flush the Cache?
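
Right now the filename workaround looks roughly like this (sketch; the
-v2 suffix is just something I bump by hand, and the #dict fragment keeps
the symlink name stable for the job code):

import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

public class CacheSetup {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // renaming dict-v1.txt to dict-v2.txt forces tasks to fetch the new copy
        DistributedCache.addCacheFile(
                new URI("/user/anthony/dict-v2.txt#dict"), conf);
        DistributedCache.createSymlink(conf);
    }
}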

Cheers,
Anthony


Re: Re: File Splits in Hadoop

2008-12-10 Thread amitsingh

Thanks for the discussion Taran,

The problem still persists.
What should be done if I have a record which spans multiple Psplits
(physical splits on HDFS)?

What happens if we try to read beyond a Psplit?
Is the next read transparently done from the next block of the same file
(which might not be on the same machine), or

is the next block on the local disk (which may not belong to the same file) read?

If it's the former, I guess things should have worked fine (surprisingly they
aren't!! I must be goofing it up somewhere).
If it's the latter, then I have no idea how to tackle this. (Any help would be
highly appreciated.)




**

I tried running a simple program in which I created a sample GZip file
by serializing records:

    // serialize the objects sarah and sam
    FileOutputStream fos = new FileOutputStream("/home/amitsingh/OUTPUT/out.bin");

    GZIPOutputStream gz = new GZIPOutputStream(fos);
    ObjectOutputStream oos = new ObjectOutputStream(gz);

    for (int i = 0; i < 50; i++) {
        Employee sam = new Employee(i + "name", i, i + 5);  // 3 fields, 2 int, 1 string
        oos.writeObject(sam);
    }
    oos.flush();
    oos.close();

Now if I just run a simple map reduce on this binary file, it gives an
exception: java.io.EOFException: Unexpected end of ZLIB input stream

It creates 2 splits:
Split 1: hdfs://localhost:54310/user/amitsingh/out1: start: 0
length: 1555001 hosts: sandpiper, bytesRemaining: 1555001
Split 2: hdfs://localhost:54310/user/amitsingh/out1: start: 1555001
length: 1555001 hosts: sandpiper,


For Map1 -- Split1 I get java.io.EOFException: Unexpected end of ZLIB
input stream [for startLens[0]: start: 0, len: 1556480]

For Map2 -- no valid GZip stream is found, as startLens is empty

I am not sure why in Map1 the length is 1556480 and not 3110002 (the entire
file), as there is ONLY one GZip stream and that is the entire file.

Any guidance would be of great help.

Any guidance would be of great help ??







**
Source code
**

package org.apache.hadoop.mapred;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.ReflectionUtils;
import org.apache.hadoop.util.StringUtils;

public class CustomGzipRecordReader implements
        RecordReader<Text, BytesWritable> {

   public static final Log LOG = LogFactory
   .getLog(CustomGzipRecordReader.class);

   protected Configuration conf;
   protected long splitStart = 0;
   protected long pos = 0;
   protected long splitEnd = 0;
   protected long splitLen = 0;
   protected long fileLen = 0;
   protected FSDataInputStream in;
   protected int recordIndex = 0;
   protected long[][] startLens;
   protected byte[] buffer = new byte[4096];

   private static byte[] MAGIC = { (byte) 0x1F, (byte) 0x8B };

    // check the split and populate startLens, indicating at which
    // offsets a GZip (ZLIB) stream starts in this split

    private void parseArcBytes() throws IOException {

        long totalRead = in.getPos();
        byte[] buffer = new byte[4096];
        List<Long> starts = new ArrayList<Long>();

        int read = -1;
        while ((read = in.read(buffer)) > 0) {

            for (int i = 0; i < (read - 1); i++) {

                if ((buffer[i] == (byte) 0x1F)
                        && (buffer[i + 1] == (byte) 0x8B)) {
                    long curStart = totalRead + i;
                    in.seek(curStart);
                    byte[] zipbytes = null;
                    try {
                        zipbytes = new byte[32];
                        in.read(zipbytes);
                        ByteArrayInputStream zipin = new ByteArrayInputStream(
                                zipbytes);
                        GZIPInputStream zin = new GZIPInputStream(zipin);
                        zin.close();
                        zipin.close();
                        starts.add(curStart);
                        LOG.info("curStart: " + curStart);
                    } catch (Exception e) {
                        LOG.info("Ignoring position: " + curStart);
                        continue;
                    }
                }
            }

            totalRead += read;
            in.seek(totalRead);
            if (totalRead > splitEnd) {
                break;
            }
        }