I would not expect any of the things that you mention. A cache is not supposed 
to slow down writing; that does not make sense from my point of view. Splitting 
a block into several smaller ones is not feasible either: the data has to go 
somewhere before it can be split. 

I think what you are actually after is a cache eviction strategy.
1 GB sounds small for an HDFS cache.
I suggest enabling the default Ignite-on-HDFS configuration first and then 
changing it step by step toward your envisioned configuration.
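
For reference, a minimal IGFS setup with HDFS as the secondary file system, 
close to the documented defaults (a sketch based on the Ignite 1.x Hadoop 
accelerator docs; the NameNode URI is a placeholder for your cluster):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="fileSystemConfiguration">
    <list>
      <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
        <property name="name" value="igfs"/>
        <!-- Write through to HDFS asynchronously, as in your config. -->
        <property name="defaultMode" value="DUAL_ASYNC"/>
        <property name="secondaryFileSystem">
          <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
            <!-- Placeholder: point this at your NameNode. -->
            <constructor-arg value="hdfs://namenode:9000/"/>
          </bean>
        </property>
      </bean>
    </list>
  </property>
</bean>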

That being said, a Hadoop platform with many ecosystem components can be 
complex. In particular, you need to account for the fact that each component 
(Hive, Spark, etc.) has a certain amount of memory assigned, or uses it while 
jobs are running. So even if you have configured 1 GB, another component might 
already have taken it. Less probable, but possible, is that your JDK has a bug 
leading to the OOME; you may also try upgrading it.

> On 14. Apr 2017, at 08:12, <zhangshuai.u...@gmail.com> wrote:
> 
> I think it's a kind of misconfiguration. The Ignite documentation only 
> mentions how to configure HDFS as a secondary file system, but says nothing 
> about how to restrict memory usage to avoid OOME. 
> https://apacheignite.readme.io/v1.0/docs/igfs-secondary-file-system
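> 
> From digging around, the closest thing I found to a memory cap is an eviction 
> policy on the IGFS data cache (a sketch only; the 512 MB cap is illustrative, 
> and on older releases the data cache is referenced by dataCacheName instead 
> of being nested like this):
> 
> <property name="dataCacheConfiguration">
>   <bean class="org.apache.ignite.configuration.CacheConfiguration">
>     <property name="evictionPolicy">
>       <!-- Evict least-recently-used file blocks once cached data exceeds 512 MB. -->
>       <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
>         <constructor-arg value="536870912"/> <!-- max size, bytes -->
>         <constructor-arg value="0"/>         <!-- max block count, 0 = unlimited -->
>       </bean>
>     </property>
>   </bean>
> </property>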
> 
> Assume I configured the max JVM heap size to 1 GB.
> 1. What would happen if I write faster than Ignite can asynchronously flush 
> data to HDFS?
> 2. What would happen if I want to write a 2 GB file block to Ignite?
> 
> I expected:
> 1. Ignite would slow down writes to avoid OOME.
> 2. Ignite would break the 2 GB file block into 512 MB blocks and write them to 
> HDFS to avoid OOME.
> 
> Are there configuration options for the behaviors above? I dug up some items 
> from the source code & the Ignite Web Console, but they do not seem to work. 
> 
> <property name="fragmentizerConcurrentFiles" value="3"/>
> <property name="dualModeMaxPendingPutsSize" value="10"/>
> <property name="blockSize" value="536870912"/>
> <property name="streamBufferSize" value="131072"/>
> <property name="maxSpaceSize" value="6442450944"/>
> <property name="maximumTaskRangeLength" value="536870912"/>
> <property name="prefetchBlocks" value="2"/>
> <property name="sequentialReadsBeforePrefetch" value="5"/>
> <property name="defaultMode" value="DUAL_ASYNC" />
> 
> I also noticed that the Ignite write-through file block size is set to 64 MB: 
> I write a file to Ignite with a block size of 4 GB, but I then find it on 
> HDFS with a 64 MB block size. Is there any configuration for this?
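> 
> Maybe that 64 MB simply comes from the HDFS side rather than from Ignite: 
> dfs.blocksize defaults to 64 MB in older Hadoop. If so, something like this 
> in hdfs-site.xml should change it (a sketch, value illustrative):
> 
> <property>
>   <name>dfs.blocksize</name>
>   <!-- 512 MB; Hadoop 1.x uses the key dfs.block.size instead. -->
>   <value>536870912</value>
> </property>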
> 
> -----Original Message-----
> From: dkarachentsev [mailto:dkarachent...@gridgain.com] 
> Sent: Thursday, April 13, 2017 11:21 PM
> To: user@ignite.apache.org
> Subject: Re: OOM when using Ignite as HDFS Cache
> 
> Hi Shuai,
> 
> Could you please take a heap dump on OOME and find which objects consume the 
> memory? There will be a lot of byte[] objects; please find the nearest GC 
> root for them.
> 
> Thanks!
> 
> -Dmitry.
> 
