Hello,
We occasionally see that, for some reason (we are using a Scribe 
client), some files stay open for write even after the writing process has long 
since died. Is there anything we can do on the HDFS side to flush and close 
these files without having to restart the namenode?
Is this a problem in 0.20 that has been fixed in 0.21?
 -Ayon