Hi all,
   I'm using kylin-2.6.1-cdh57 and the source row count is 500 million. I can
build the cube successfully, but when I use the cube planner, the OPTIMIZE CUBE
job runs a step called "Build Cube In-Mem".
   The configuration in kylin_job_conf_inmem.xml is:

   <property>
        <name>mapreduce.map.memory.mb</name>
        <value>9216</value>
        <description></description>
    </property>

    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx8192m -XX:OnOutOfMemoryError='kill -9 %p'</value>
        <description></description>
    </property>

    <property>
        <name>mapreduce.job.is-mem-hungry</name>
        <value>true</value>
    </property>
 
    <property>
        <name>mapreduce.job.split.metainfo.maxsize</name>
        <value>-1</value>
        <description>The maximum permissible size of the split metainfo file.
            The JobTracker won't attempt to read split metainfo files bigger than
            the configured value. No limits if set to -1.
        </description>
    </property>

    <property>
        <name>mapreduce.job.max.split.locations</name>
        <value>2000</value>
        <description>No description</description>
    </property>

    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>200</value>
        <description></description>
    </property>
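
    (For reference, as I understand it the -Xmx heap has to stay somewhat below
mapreduce.map.memory.mb so the YARN container has headroom. Giving the map job
more memory would mean raising both values together, for example something like
the snippet below; these numbers are just an illustration, not what I actually used:

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>12288</value>
    </property>

    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx10240m -XX:OnOutOfMemoryError='kill -9 %p'</value>
    </property>
    )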


    In the end the map task is killed by the OnOutOfMemoryError handler, but when I
give the map task more memory, I get a different error instead:
java.nio.BufferOverflowException
      
    Why does Kylin run this job in-mem? How can I avoid it?
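
    The only related settings I could find in kylin.properties are the cubing
algorithm ones below. I have not verified whether the OPTIMIZE CUBE job (which is
where I see the Build Cube In-Mem step) respects them, so please treat this as a
guess on my part:

    # guess: force the layered MR algorithm instead of in-mem cubing
    kylin.cube.algorithm=layer
    # guess: or raise the threshold so "auto" is less likely to pick in-mem
    kylin.cube.algorithm.layer-or-inmem-threshold=8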
       


2019-04-08


lk_hadoop 
