Karl,

 Since you did ask for alternatives: people using MapR prefer to use NFS
access to deposit data directly (or read it back). It works seamlessly from
all Linux variants, Solaris, Windows, AIX, and a myriad of other legacy
systems without having to install any agents on those machines, and the NFS
layer is fully HA with automatic failover.
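
If it helps, here is a rough Python sketch of what that looks like in
practice: plain file I/O against the NFS mount, no MapR-specific API and no
agent on the source machine. The mount point (/mapr/my.cluster.com) and the
paths are placeholders for illustration, not anything from your setup.

import shutil
from pathlib import Path

# Hypothetical source file on the web tier and an assumed NFS mount point;
# adjust both to match wherever your cluster's NFS export is mounted.
LOCAL_LOG = Path("/var/log/webapp/access.log")
MAPR_NFS_DIR = Path("/mapr/my.cluster.com/data/incoming")

def deposit(local_file: Path, target_dir: Path) -> Path:
    """Copy a file onto the cluster through the NFS mount; an ordinary copy is all it takes."""
    target_dir.mkdir(parents=True, exist_ok=True)
    dest = target_dir / local_file.name
    shutil.copy2(local_file, dest)  # standard-library copy; the cluster handles the rest
    return dest

if __name__ == "__main__":
    print("Deposited", deposit(LOCAL_LOG, MAPR_NFS_DIR))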

Since compression is built into MapR, the data gets compressed coming in
over NFS automatically, without much fuss.

With regard to performance, you can get about 870 MB/s per node if you have
10GigE attached (of course, with compression, the effective throughput will
exceed that, depending on how well the data compresses).
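
To make the arithmetic behind that parenthetical concrete: the effective
(uncompressed) ingest rate is roughly the raw per-node figure multiplied by
the compression ratio your data achieves. The ratios below are purely
hypothetical examples, not measurements.

# Back-of-the-envelope effective throughput; 870 MB/s is the per-node figure
# quoted above, and the compression ratios are made-up examples.
RAW_MB_PER_S = 870

for ratio in (1.0, 2.0, 3.5):  # 1.0 = incompressible data, 3.5 = compresses very well
    effective = RAW_MB_PER_S * ratio
    print(f"{ratio:>3}x compression -> ~{effective:.0f} MB/s of logical data per node")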


On Fri, Apr 20, 2012 at 3:14 PM, Karl Hennig <khen...@baynote.com> wrote:

> I am investigating automated methods of moving our data from the web tier
> into HDFS for processing, a process that's performed periodically.
>
> I am looking for feedback from anyone who has actually used Flume in a
> production setup (redundant, failover) successfully.  I understand it is
> now being largely rearchitected during its incubation as Apache Flume-NG,
> so I don't have full confidence in the old, stable releases.
>
> The other option would be to write our own tools.  What methods are you
> using for these kinds of tasks?  Did you write your own or does Flume (or
> something else) work for you?
>
> I'm also on the Flume mailing list, but I wanted to ask these questions
> here because I'm interested in Flume _and_ alternatives.
>
> Thank you!
>
>
