Weird, because after a restart, having just deleted the
work/db/node-xxxxxx/cache-my-cache folder on the node that shut down, it
started up fine and I have the same number of caches...
And sudo lsof -a -p PID returns only 3600 files.
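
For reference, this is roughly how I counted (PID being the Ignite
process id; the procfs line is just a cross-check):

    sudo lsof -a -p PID | wc -l     # files open by that process only
    sudo ls /proc/PID/fd | wc -l    # same count straight from procfs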

On Fri, Dec 16, 2022 at 12:10 AM Gianluca Bonetti <
[email protected]> wrote:

> Hello
>
> I had the same problem, with a far larger total number of caches in use
> (but each cache was very small).
>
> 32768 files is definitely too low.
> In my case, I had to raise it to a 262144 hard limit and a 131072 soft
> limit. Please update the /etc/security/limits.conf records for the user
> you run your app as.
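>
> For example, records along these lines (the user name here is only a
> placeholder for whatever account your app runs as):
>
>     appuser  soft  nofile  131072
>     appuser  hard  nofile  262144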
>
> I also raised fs.file-max to 2097152, which may be excessive, but I
> don't see a problem with setting it that high.
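>
> That one goes in /etc/sysctl.conf (or a drop-in file under
> /etc/sysctl.d/) and can be applied without a reboot, roughly:
>
>     fs.file-max = 2097152
>     sudo sysctl -p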
>
> Cheers
> Gianluca
>
> On Fri, 16 Dec 2022 at 01:39, John Smith <[email protected]> wrote:
>
>> Hi, it seems the JVM was forcefully shut down when I tried to create a
>> new partitioned cache.
>>
>> The error seems to indicate that the cause was "too many files". Can
>> someone from Ignite confirm this?
>>
>> I have checked with lsof, and Ignite only has about 3600 files open.
>> It's the only service running on that server, so I don't see how this
>> could happen. I have a total of 10 caches, mixed between replicated and
>> partitioned (1 backup), across 3 nodes.
>>
>> I have
>>
>> fs.file-max = 300000
>> and
>> - soft    nofile          32768
>> - hard    nofile          32768
>> in sysctl and /etc/security/limits.conf respectively, on each node.
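>>
>> (For what it's worth, the limit the running JVM actually got can be
>> double-checked against procfs, e.g.:
>>
>>     cat /proc/PID/limits | grep 'open files'
>>
>> in case the limits.conf values were not picked up by the session that
>> started Ignite.)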
>>
>> What I did was delete the db folder for that specific cache on that
>> node, and when I restarted it, it worked and recreated the folder for
>> that cache.
>>
>> https://www.dropbox.com/s/zwf28akser9p4dt/ignite-XXXXXX.0.log?dl=0
>>
>
