What exception would I expect to get if this limit were exceeded?
john
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Monday, January 27, 2014 8:12 AM
To:
Subject: Re: HDFS open file limit
Hi John,
There is a concurrent-connections limit on the DNs that defaults to 4k max
parallel threaded connections for reading or writing blocks. This is also
expandable via configuration, but usually the default value suffices even
for pretty large operations, given that the replicas help spread the load
across the cluster.
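The setting being described appears to be dfs.datanode.max.transfer.threads
(formerly dfs.datanode.max.xcievers), which defaults to 4096. A sketch of
raising it in hdfs-site.xml on each DataNode; 8192 is a purely illustrative
value, not a recommendation:

    <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <value>8192</value>
    </property>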
There is no open-file limitation in HDFS itself. The 'Too many open files'
error comes from the OS. Increase the *system-wide maximum number of open
files* and the *per-user/group/process file descriptor limits*.
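On Linux, for example, those knobs can be inspected and raised roughly as
follows (the 'hdfs' user name and the numbers are placeholders):

    # System-wide ceiling on open file handles
    cat /proc/sys/fs/file-max
    sysctl -w fs.file-max=2097152

    # Per-user descriptor limits, e.g. in /etc/security/limits.conf,
    # assuming the daemons run as the 'hdfs' user
    hdfs  soft  nofile  64000
    hdfs  hard  nofile  64000

    # Per-process limit for the current shell
    ulimit -n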
On Mon, Jan 27, 2014 at 1:52 AM, Bertrand Dechoux wrote:
At least for each machine, there is the *ulimit* that needs to be verified.
Regards
Bertrand
Bertrand Dechoux
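Note that the limit that matters is the one in effect for the running
daemon, not for an interactive shell. One way to check it on Linux (using
the DataNode's standard main class to find the process):

    # Effective limits of the running DataNode process
    cat /proc/$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode)/limits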
On Sun, Jan 26, 2014 at 6:32 PM, John Lilley wrote:
I have an application that wants to open a large set of files in HDFS
simultaneously. Are there hard or practical limits to what can be opened at
once by a single process? By the entire cluster in aggregate?
Thanks
John
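For concreteness, the access pattern in question might look like the sketch
below, using the standard FileSystem API (the paths and the count are made
up; each open stream holds a client-side file descriptor, and a block being
actively read also occupies a DataNode transfer thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class ManyOpenFiles {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        List<FSDataInputStream> streams = new ArrayList<FSDataInputStream>();
        try {
          // Hypothetical file set: open many HDFS files at once.
          for (int i = 0; i < 1000; i++) {
            streams.add(fs.open(new Path("/data/part-" + i)));
          }
          // ... read from the streams as needed ...
        } finally {
          for (FSDataInputStream in : streams) {
            in.close(); // release descriptors and DN resources promptly
          }
        }
      }
    }

Closing streams as soon as they are no longer needed keeps both the
client-side descriptor count and the DataNodes' thread usage down.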