Or you could use a log sink like Elasticsearch.
Regards,
Noorul
On Mon, Mar 6, 2017 at 10:52 AM, devjyoti patra wrote:
Timothy, why are you writing application logs to HDFS? If you want to
analyze these logs later, you can write to local storage on your slave
nodes and later rotate those files to a suitable location. If they are only
going to be useful for debugging the application, you can always remove them.
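For local logging with rotation, one option is a rolling file appender in the application's log4j configuration. A minimal sketch (the file path and size limits here are assumptions, not EMR defaults; adjust to your cluster):

```properties
# Sketch of a log4j.properties routing logs to a local rolling file
# instead of HDFS. Old files are rotated out automatically.
log4j.rootCategory=INFO, rolling
log4j.appender.rolling=org.apache.log4j.RollingFileAppender
log4j.appender.rolling.File=/var/log/spark/myapp.log
log4j.appender.rolling.MaxFileSize=100MB
log4j.appender.rolling.MaxBackupIndex=10
log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.rolling.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

With MaxBackupIndex=10 and MaxFileSize=100MB, local disk usage for these logs is capped at roughly 1 GB per node, and a cron job or agent can ship rotated files elsewhere.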
I'm running a single worker EMR cluster for a Structured Streaming job. How
do I deal with my application log filling up HDFS?
/var/log/spark/apps/application_1487823545416_0021_1.inprogress
is currently 21.8 GB