Yes, that may be the problem. The user max is set to 100,000 open files.
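A minimal sketch to confirm what limit the TaskManager JVM actually sees (this assumes a HotSpot JVM on a Unix-like OS; the com.sun.management cast is not available on every JVM):

import java.lang.management.ManagementFactory
import com.sun.management.UnixOperatingSystemMXBean

object FdCheck {
  def main(args: Array[String]): Unit = {
    ManagementFactory.getOperatingSystemMXBean match {
      case os: UnixOperatingSystemMXBean =>
        // current vs. maximum file descriptors for this process
        println(s"open fds: ${os.getOpenFileDescriptorCount}")
        println(s"max fds:  ${os.getMaxFileDescriptorCount}")
      case _ =>
        println("fd counts not exposed by this JVM")
    }
  }
}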
2015-07-06 15:55 GMT+02:00 Stephan Ewen:
4 million file handles should be enough ;-)
Is that the system global max, or the user's max? If the user's max is
lower, this may be the issue...
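Both numbers can be read straight from /proc; a minimal sketch (Linux-specific paths, run it on the affected machine):

import scala.io.Source

object LimitCheck {
  def main(args: Array[String]): Unit = {
    def read(path: String): String = {
      val src = Source.fromFile(path)
      try src.mkString finally src.close()
    }
    // system-wide ceiling on open files, i.e. fs.file-max
    println(s"system file-max: ${read("/proc/sys/fs/file-max").trim}")
    // per-process limit for this JVM, i.e. what `ulimit -n` reports
    read("/proc/self/limits")
      .split("\n")
      .filter(_.startsWith("Max open files"))
      .foreach(println)
  }
}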
On Mon, Jul 6, 2015 at 3:50 PM, Felix Neutatz wrote:
So do you know how to solve this issue apart from increasing the current
file-max (4748198)?
2015-07-06 15:35 GMT+02:00 Stephan Ewen:
I think the error is pretty much exactly in the stack trace:

Caused by: java.io.FileNotFoundException:
/data/4/hadoop/tmp/flink-io-0e2460bf-964b-4883-8eee-12869b9476ab/995a38a2c92536383d0057e3482999a9.000329.channel
(Too many open files in system)
On Mon, Jul 6, 2015 at 3:31 PM, Felix Neutatz wrote:
Hi,
I want to do some simple aggregations on 727 gz files (68 GB total) from
HDFS. See code here:
https://github.com/FelixNeutatz/wikiTrends/blob/master/extraction/src/main/scala/io/sanfran/wikiTrends/extraction/flink/Stats.scala
We are using a Flink 0.9-SNAPSHOT.
I get the following error:
Caused by: java.io.FileNotFoundException:
/data/4/hadoop/tmp/flink-io-0e2460bf-964b-4883-8eee-12869b9476ab/995a38a2c92536383d0057e3482999a9.000329.channel
(Too many open files in system)
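For reference, a minimal sketch of a job of this shape against the Flink 0.9 Scala DataSet API; the input path, key extraction, and aggregation below are placeholders, not the actual Stats.scala:

import org.apache.flink.api.scala._

object StatsSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // One logical input over hundreds of files: each parallel task can
    // hold several input streams plus sort/spill .channel files open at
    // once, which is where the per-user fd limit starts to matter.
    val lines = env.readTextFile("hdfs:///path/to/pagecounts/") // placeholder path
    val counts = lines
      .map { line => (line.split(" ")(0), 1L) } // placeholder key extraction
      .groupBy(0)
      .sum(1)
    counts.writeAsCsv("hdfs:///path/to/output") // placeholder path
    env.execute("stats sketch")
  }
}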