What exception would I expect to get if this limit was exceeded?
john

From: Harsh J [mailto:[email protected]]
Sent: Monday, January 27, 2014 8:12 AM
To: <[email protected]>
Subject: Re: HDFS open file limit


Hi John,

There is a concurrent connection limit on the DNs, set by default to a
maximum of 4k parallel threaded connections for reading or writing blocks.
This is expandable via configuration, but the default usually suffices even
for fairly large workloads, since replicas help spread the read load across
the cluster.

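For reference, the DataNode property behind this limit (naming it here as an assumption; the message above doesn't identify it) is dfs.datanode.max.transfer.threads, formerly known as dfs.datanode.max.xcievers, set in hdfs-site.xml:

```xml
<!-- hdfs-site.xml sketch: raise the DataNode's cap on concurrent
     block read/write transfer threads above the 4096 default.
     The value 8192 below is just an illustrative choice. -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>
```

DataNodes need a restart to pick up the new value.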
Beyond this you will mostly just run into configurable OS limitations.
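The OS limitation most commonly hit here is the per-process open file descriptor cap. A minimal sketch of inspecting and raising it for the current shell session (persistent settings would instead go in /etc/security/limits.conf on Linux):

```shell
# Show the current soft limit on open file descriptors for this shell.
ulimit -S -n

# Show the hard limit -- the ceiling a non-root user may raise the soft limit to.
ulimit -H -n

# Raise the soft limit up to the hard limit, for this session only.
ulimit -S -n "$(ulimit -H -n)"
```

An HDFS client process holding many streams open at once is subject to this limit like any other process, so it is worth checking before opening a large set of files.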
On Jan 26, 2014 11:03 PM, "John Lilley" <[email protected]> wrote:
I have an application that wants to open a large set of files in HDFS 
simultaneously.  Are there hard or practical limits to what can be opened at 
once by a single process?  By the entire cluster in aggregate?
Thanks
John

