Hi All,

Here's my current Flume setup for collecting service logs on a Hadoop cluster:

- Run a Flume agent on each node
- Configure the Flume sink to write to HDFS; the files end up like this (a simplified sketch of the per-node config follows the paths below):

..flume/events/node0logfile
..flume/events/node1logfile
...
..flume/events/nodeNlogfile
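
For context, each agent's config looks roughly like this (the source type, paths, and component names are simplified stand-ins for what I actually run):

# Per-node agent: tails the service log and writes to HDFS
agent.sources = svcLog
agent.channels = memCh
agent.sinks = hdfsSink

# Source: tail the service log (placeholder command)
agent.sources.svcLog.type = exec
agent.sources.svcLog.command = tail -F /var/log/service.log
agent.sources.svcLog.channels = memCh

# Channel: in-memory buffer between source and sink
agent.channels.memCh.type = memory
agent.channels.memCh.capacity = 10000

# Sink: write events to HDFS; each node's agent produces its own file
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = memCh
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/flume/events
agent.sinks.hdfsSink.hdfs.filePrefix = node0logfile
agent.sinks.hdfsSink.hdfs.fileType = DataStream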

But I want all the logs from the multiple agents to be written to a single file in HDFS. How can I achieve this, and what would the topology look like?
Can this be done via a collector tier? If so, where should I run the collector, and how would this scale for a 1000+ node cluster?
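
For reference, here's roughly what I imagine the two-tier setup would be; the hostnames, ports, and component names below are just placeholders, not a tested config. On each node, the HDFS sink would be replaced with an Avro sink pointing at the collector:

# Per-node agent: forward events to the collector instead of HDFS
agent.sinks.avroSink.type = avro
agent.sinks.avroSink.channel = memCh
agent.sinks.avroSink.hostname = collector.example.com
agent.sinks.avroSink.port = 4545

And the collector would fan in all the agents through an Avro source and write through a single HDFS sink:

# Collector: receive from all agents, write one stream to HDFS
collector.sources = avroIn
collector.channels = memCh
collector.sinks = hdfsOut

# Source: Avro RPC endpoint the per-node agents send to
collector.sources.avroIn.type = avro
collector.sources.avroIn.bind = 0.0.0.0
collector.sources.avroIn.port = 4545
collector.sources.avroIn.channels = memCh

# Channel: in-memory buffer
collector.channels.memCh.type = memory
collector.channels.memCh.capacity = 100000

# Sink: single writer into HDFS for all nodes' events
collector.sinks.hdfsOut.type = hdfs
collector.sinks.hdfsOut.channel = memCh
collector.sinks.hdfsOut.hdfs.path = hdfs://namenode:8020/flume/events
collector.sinks.hdfsOut.hdfs.filePrefix = all-nodes
collector.sinks.hdfsOut.hdfs.fileType = DataStream

Is this the right idea, and would a single collector like this become a bottleneck at 1000+ nodes?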

Thanks,
Yogendra
