If your file is bigger than the block size (typically 64 MB or 128 MB), it
will be split into more than one block. The blocks may or may not be stored on
different datanodes. If you're using the default InputFormat, the input will
be split between two tasks. Since you said you need the whole [...]

Does that make sense?

-Mike
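A minimal sketch of the usual fix, assuming the org.apache.hadoop.mapreduce
API (the class name NonSplittableTextInputFormat is illustrative, not from
this thread): overriding isSplitable() to return false keeps each file in a
single split, so one mapper sees the whole XML document.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Keep each file in one split so a single mapper reads the whole document.
public class NonSplittableTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // One split per file, regardless of how many HDFS blocks it spans.
        return false;
    }
}

HDFS still stores the file as multiple blocks; the single map task simply
reads across block boundaries, with non-local blocks streamed over the
network.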
Date: Tue, 22 Nov 2011 03:08:20 +
From: mahesw...@huawei.com
Subject: RE: Regarding loading a big XML file to HDFS
To: common-user@hadoop.apache.org; core-u...@hadoop.apache.org
Also, I am surprised: how are you writing a MapReduce application here? Map
and reduce work with key/value pairs.
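For reference, the key/value contract looks like this on the map side; a
minimal sketch, assuming the org.apache.hadoop.mapreduce API (the class name
XmlLineMapper and the emitted counts are illustrative):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Input key: byte offset of the line; input value: the line itself.
public class XmlLineMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private static final LongWritable ONE = new LongWritable(1);

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Each map() call consumes one key/value pair and may emit
        // zero or more pairs; here we emit (line, 1).
        context.write(line, ONE);
    }
}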
________________________________
From: hari708 [hari...@gmail.com]
Sent: Tuesday, November 22, 2011 6:50 AM
To: core-u...@hadoop.apache.org
Subject: Regarding loading a big XML file to HDFS
Hi,
I have a big file consisting of XML data. The XML [...]