Typical best practice for a Hadoop setup like that is to use Flume. You could
also have a central log-aggregation server that your nodes log to (e.g., using
the GELF layout for Graylog, or just a TCP server accepting JSON/XML log
messages), or you could log via the Kafka appender or similar for distributed
collection.
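As a minimal sketch of the Kafka-appender option, a log4j2.xml along these lines would publish each event to a Kafka topic; the topic name, broker address, and log level here are placeholder assumptions, not values from this thread:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <!-- Hypothetical topic/broker names; substitute your own. -->
    <Kafka name="Kafka" topic="app-logs">
      <JsonLayout compact="true" eventEol="true"/>
      <Property name="bootstrap.servers">kafka-broker:9092</Property>
    </Kafka>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Kafka"/>
    </Root>
    <!-- The Kafka client's own logging must not be routed through the
         Kafka appender, or the appender can deadlock on startup. -->
    <Logger name="org.apache.kafka" level="info"/>
  </Loggers>
</Configuration>
```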
Hello,
We have a TCPSocketServer running on the edge node of a cluster, and all the
other data nodes send their log events to it. We are using standard routing to
redirect the log events to individual log files.
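For reference, a per-node routing setup like the one described is usually done with Log4j2's Routing appender keyed on a lookup; this sketch assumes a thread-context key named "node" that the senders populate, which is an assumption on my part, not something stated above:

```xml
<!-- Sketch only: the "node" context key and file paths are hypothetical. -->
<Routing name="Routing">
  <Routes pattern="$${ctx:node}">
    <Route>
      <!-- One log file per originating node, named after the context key. -->
      <File name="file-${ctx:node}" fileName="logs/${ctx:node}.log">
        <PatternLayout pattern="%d %p %c %m%n"/>
      </File>
    </Route>
  </Routes>
</Routing>
```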
We are planning to make our system highly available by adding multiple
TCPSocketServer instances.
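One possible approach on the sender side is Log4j2's Failover appender wrapping two Socket appenders, so data nodes fall back to a secondary edge node if the primary is down. The host names here are placeholders, and this is only a sketch of one option, not a recommendation from the thread:

```xml
<!-- Hypothetical hosts; "retryIntervalSeconds" controls how often the
     primary is retried after a failure. -->
<Appenders>
  <Socket name="PrimaryEdge" host="edge1.example.com" port="4560">
    <JsonLayout compact="true" eventEol="true"/>
  </Socket>
  <Socket name="SecondaryEdge" host="edge2.example.com" port="4560">
    <JsonLayout compact="true" eventEol="true"/>
  </Socket>
  <Failover name="Failover" primary="PrimaryEdge" retryIntervalSeconds="60">
    <Failovers>
      <AppenderRef ref="SecondaryEdge"/>
    </Failovers>
  </Failover>
</Appenders>
```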