Hi,

 

After a Spark program completes, three temporary directories remain in
the temp directory.

The directory names look like this: spark-2e389487-40cc-4a82-a5c7-353c0feefbb7

 

And when the Spark program runs on Windows, a snappy DLL file also remains
in the temp directory.

The file name looks like this:
snappy-1.0.4.1-6e117df4-97b6-4d69-bf9d-71c4a627940c-snappyjava

 

They are created every time the Spark program runs, so the number of files
and directories keeps growing.

 

How can I get them deleted automatically?

 

The Spark version is 1.3.1, with Hadoop 2.6.
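For now I clean them up by hand. A minimal sketch of what I run (assuming the leftovers live under /tmp or $TMPDIR; the name patterns are taken from the examples above, and the one-day age cutoff is my own choice):

```shell
#!/bin/sh
# Purge leftover Spark temp artifacts older than one day.
# TMP defaults to /tmp; on Windows (e.g. under Git Bash) point it at %TEMP%.
TMP="${TMPDIR:-/tmp}"

# Leftover per-run directories, e.g. spark-2e389487-40cc-...
find "$TMP" -maxdepth 1 -type d -name 'spark-*' -mtime +0 -exec rm -rf {} +

# Leftover extracted snappy native library, e.g. snappy-1.0.4.1-...-snappyjava
find "$TMP" -maxdepth 1 -type f -name 'snappy-*-snappyjava*' -mtime +0 -exec rm -f {} +
```

But this is only a workaround; I would rather have Spark remove its own temp files on shutdown.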

 

Thanks.

 
