urself to include it.
>
> Wei-Chiu Chuang
>
> On Oct 4, 2016, at 12:59 PM, Uthayan Suthakar
> wrote:
>
> Hello guys,
>
> I have a job that reads compressed (Snappy) data, but when I run the job,
> it throws the error "native snappy library not available: this
> version of libhadoop was built without snappy support".
Hello guys,

I have a job that reads compressed (Snappy) data, but when I run the job, it
throws the error "native snappy library not available: this version of
libhadoop was built without snappy support".
I followed these instructions, but they did not resolve the issue:
https://community.hortonwo

The job counters show:
Map-Reduce Framework:
Map input records=11
Any idea what's going on?
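As a first diagnostic (a suggestion, not something from the thread), `hadoop checknative` reports which native codecs the loaded libhadoop was actually built with; the paths in the second half are illustrative only:

```shell
# List the native libraries this Hadoop build can load.
# A line like "snappy: false" confirms the error message: libhadoop
# was compiled without Snappy support.
hadoop checknative -a

# If snappy shows false, point Hadoop at a libhadoop.so built with
# Snappy (e.g. one compiled with -Drequire.snappy). The path below is
# hypothetical; use the native-lib directory of your distribution.
export JAVA_LIBRARY_PATH=/usr/lib/hadoop/lib/native
```

Running `checknative` on the cluster nodes (not just the edge node) matters, since the map tasks load libhadoop from the worker machines.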
On 27 January 2015 at 08:30, Azuryy Yu wrote:
> Are you sure you can 'cat' the latest batch of the data on HDFS?
> For Flume, the data is available only after the file is rolled, because Flume
> only c
I have a Flume agent that streams data into an HDFS sink (appending to the
same file), which I can "hdfs dfs -cat" and see in HDFS. However, when I run
a MapReduce job on the folder that contains the appended data, it only picks
up the first batch that was flushed (batchSize = 100) into HDFS. The rest are
not
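The rolling behavior Azuryy describes is controlled by the HDFS sink's roll properties. A minimal flume.conf sketch (the agent and sink names `a1`/`k1` and the path are made up; the property keys are the standard HDFS-sink settings):

```
# Hypothetical agent "a1" with HDFS sink "k1"; adjust to your topology.
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events
# Flush every 100 events to HDFS (matches the batchSize in the question).
a1.sinks.k1.hdfs.batchSize = 100
# Roll (close) the file every 60 seconds so MapReduce sees finalized files.
a1.sinks.k1.hdfs.rollInterval = 60
# Disable size- and count-based rolling so only the timer applies.
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
```

Until a file is rolled, its length in the NameNode metadata may lag behind what `-cat` shows, which is why a MapReduce job can read less than the console does.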