Hi.

I have an HDFS client and an HDFS datanode running on the same machine.

When I try to access a dozen files at once from the client, several times
in a row, I start receiving the following errors from the client and from
the HDFS browse function.

HDFS Client: "Could not get block locations. Aborting..."
HDFS browse: "Too many open files"

I can increase the maximum number of open files, which is currently set to
the default of 1024, but I would like to solve the underlying problem first,
since a larger value just means it would run out of files again later on.
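
For reference, here is a minimal sketch of how the descriptor usage of the
client JVM could be checked against that limit from inside the process
(assuming a Unix JVM; this uses only the JDK's
com.sun.management.UnixOperatingSystemMXBean, nothing Hadoop-specific):

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdUsage {
        public static void main(String[] args) {
            // The cast works on Unix-like JVMs and fails elsewhere.
            UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            System.out.println("open file descriptors: " + os.getOpenFileDescriptorCount());
            System.out.println("max file descriptors:  " + os.getMaxFileDescriptorCount());
        }
    }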

So my questions are:

1) Does the HDFS datanode keep any files open, even after the HDFS client
has already closed them?

2) Is it possible to find out which process is holding the open files - the
datanode or the client - so I can pinpoint the source of the problem?
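
In case it helps with (2), a minimal sketch of what I could run to compare
the two processes, assuming Linux and that the client and datanode PIDs are
passed on the command line (the PIDs are just placeholders; reading another
process's /proc/<pid>/fd requires running as the same user or as root):

    import java.io.File;

    public class FdCount {
        // Count the entries in /proc/<pid>/fd (Linux-specific).
        static long countFds(int pid) {
            File[] fds = new File("/proc/" + pid + "/fd").listFiles();
            return fds == null ? -1 : fds.length;
        }

        public static void main(String[] args) {
            for (String arg : args) {
                int pid = Integer.parseInt(arg);
                System.out.println("pid " + pid + ": " + countFds(pid) + " open descriptors");
            }
        }
    }

Running it repeatedly while the client reads the files should show which
process's descriptor count keeps climbing.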

Thanks in advance!
