Hi,
You may want to have a look at the Flume project from Cloudera. I use it
for writing data into HDFS.
https://ccp.cloudera.com/display/SUPPORT/Downloads
dave
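(For the newer Apache Flume "NG" releases, the HDFS sink is wired up through a properties file rather than the older Cloudera Flume configuration language. The sketch below is illustrative only: the agent/source/sink names a1/r1/k1/c1, the tailed log path, and the HDFS path are all placeholders, not values from this thread.)

```properties
# Agent 'a1' tails a log file and writes events into HDFS.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events/%Y-%m-%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.rollInterval = 300
```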
2012/2/6 Xiaobin She
> hi Bejoy ,
>
> thank you for your reply.
>
> actually I have set up a test cluster which has one namenode/jo
Can you give a little more details about your problem and hadoop setup? Is
this a single machine running NN, DN, etc., or is this a cluster?
On Tue, Apr 19, 2011 at 9:42 AM, endho wrote:
>
> hi ,
> Am having the same problem with the bin/hadoop dfs command. And I havent
> figured out the proble
> methods .. if I have a bunch of strings in my writable, then what
> should be the read method implementation ..
>
> I really appreciate the help from all you guys ..
>
> On Wed, Feb 2, 2011 at 12:52 PM, David Sinclair <
> dsincl...@chariotsolutions.com> wrote:
>
> > So
> .. instead of sending a
> bunch of delimited text I want to send an actual object to my reducer
>
> On Wed, Feb 2, 2011 at 12:33 PM, David Sinclair <
> dsincl...@chariotsolutions.com> wrote:
>
Are you storing your data as text or binary?
If you are storing as text, your mapper is going to get keys of
type LongWritable and values of type Text. Inside your mapper you would
parse out the strings and wouldn't be using your custom writable; that is
unless you wanted your mapper/reducer to pr
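(The question above was how readFields should look for a custom writable holding several strings. A real implementation would implement org.apache.hadoop.io.Writable; the sketch below uses plain java.io.DataInput/DataOutput so the round trip can be run without Hadoop on the classpath. The class and field names are illustrative, not from this thread. The key rules are: write a count, then each string; read them back in the same order; and reset all state in readFields, because Hadoop reuses Writable instances between records.)

```java
import java.io.*;

// Sketch of the Writable serialization pattern for a record holding
// several strings. In a Hadoop job this class would additionally
// declare "implements org.apache.hadoop.io.Writable".
public class StringRecord {
    private String[] fields = new String[0];

    public StringRecord() {}                       // Writable needs a no-arg constructor
    public StringRecord(String... fields) { this.fields = fields; }

    // write(): emit a count, then each string.
    public void write(DataOutput out) throws IOException {
        out.writeInt(fields.length);
        for (String s : fields) out.writeUTF(s);
    }

    // readFields(): read back in exactly the same order, replacing all
    // previous state (Hadoop reuses the same instance for many records).
    public void readFields(DataInput in) throws IOException {
        int n = in.readInt();
        fields = new String[n];
        for (int i = 0; i < n; i++) fields[i] = in.readUTF();
    }

    public String[] getFields() { return fields; }

    // Round-trip demo: serialize, deserialize, print the recovered fields.
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new StringRecord("a", "b", "c").write(new DataOutputStream(buf));

        StringRecord copy = new StringRecord();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(String.join(",", copy.getFields()));  // a,b,c
    }
}
```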
and it turned out I was neglecting to close/flush my output.
>
>
> On 01/21/2011 01:04 PM, David Sinclair wrote:
>
Hi, I am seeing an odd problem when writing block compressed sequence files.
If I write 400,000 records into a sequence file w/o compression, all 400K
end up in the file. If I write with block compression, regardless of whether it is bz2 or
deflate, I start losing records. Not a ton, but a couple hundred.
Here are th
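(The record loss described above is consistent with the resolution mentioned earlier in the thread: an unclosed/unflushed writer. With block compression, SequenceFile.Writer buffers records in memory until a block fills or close()/sync() is called, so abandoning the writer silently drops the tail of the data. The sketch below demonstrates the same failure mode with java.util.zip so it runs without Hadoop; the class and method names are illustrative.)

```java
import java.io.*;
import java.util.zip.*;

// Demonstrates why an unclosed compressed writer loses trailing records:
// the compressor buffers data internally, and only close()/finish()
// pushes the final block (and stream trailer) out to the destination.
public class FlushDemo {
    // Write 'records' lines through gzip; optionally skip the close.
    static byte[] write(int records, boolean close) throws IOException {
        ByteArrayOutputStream file = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(file);
        PrintWriter w = new PrintWriter(gz);
        for (int i = 0; i < records; i++) w.println("record-" + i);
        if (close) w.close();   // flushes buffers and finishes the gzip stream
        else w.flush();         // flushes PrintWriter only; gzip keeps buffering
        return file.toByteArray();
    }

    // Count how many complete lines are actually recoverable.
    static int count(byte[] data) throws IOException {
        int n = 0;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(data))))) {
            while (r.readLine() != null) n++;
        } catch (IOException e) {
            // truncated gzip stream: keep whatever was readable
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("closed: " + count(write(1000, true)));
        System.out.println("unclosed: " + count(write(1000, false)));
    }
}
```

Running it shows all 1000 records when the writer is closed and strictly fewer when it is not, mirroring the "couple hundred" missing records reported above.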