[ https://issues.apache.org/jira/browse/CARBONDATA-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15569219#comment-15569219 ]

ASF GitHub Bot commented on CARBONDATA-297:
-------------------------------------------

Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/229#discussion_r83049479
  
    --- Diff: processing/src/main/java/org/apache/carbondata/processing/newflow/DataLoadProcessorStep.java ---
    @@ -0,0 +1,40 @@
    +package org.apache.carbondata.processing.newflow;
    +
    +import java.util.Iterator;
    +
    +import org.apache.carbondata.processing.newflow.exception.CarbonDataLoadingException;
    +
    +/**
    + * This is the base interface for data loading. It can do transformation jobs as
    + * per the implementation.
    + */
    +public interface DataLoadProcessorStep {
    +
    +  /**
    +   * The output meta for this step. The data returned from this step conforms to this meta.
    +   * @return the output data fields
    +   */
    +  DataField[] getOutput();
    +
    +  /**
    +   * Initialization process for this step.
    +   * @param configuration the data load configuration
    +   * @param child the child step that supplies input rows to this step
    +   * @throws CarbonDataLoadingException
    +   */
    +  void initialize(CarbonDataLoadConfiguration configuration, DataLoadProcessorStep child)
    +      throws CarbonDataLoadingException;
    +
    +  /**
    +   * Transform the data as per the implementation.
    +   * @return Iterator of data
    +   * @throws CarbonDataLoadingException
    +   */
    +  Iterator<Object[]> execute() throws CarbonDataLoadingException;
    --- End diff --
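    
    For context, a minimal sketch (not part of the PR) of how an implementing
    step might wrap its child's iterator, assuming the interface above; the
    class name and the no-op transformation are made up for illustration:
    
        package org.apache.carbondata.processing.newflow;
    
        import java.util.Iterator;
    
        import org.apache.carbondata.processing.newflow.exception.CarbonDataLoadingException;
    
        public class NoOpProcessorStep implements DataLoadProcessorStep {
    
          private DataLoadProcessorStep child;
    
          @Override
          public DataField[] getOutput() {
            // This step does not change the schema, so pass the child's through.
            return child.getOutput();
          }
    
          @Override
          public void initialize(CarbonDataLoadConfiguration configuration,
              DataLoadProcessorStep child) throws CarbonDataLoadingException {
            this.child = child;
          }
    
          @Override
          public Iterator<Object[]> execute() throws CarbonDataLoadingException {
            final Iterator<Object[]> input = child.execute();
            return new Iterator<Object[]>() {
              public boolean hasNext() {
                return input.hasNext();
              }
              public Object[] next() {
                return input.next(); // a real step would transform the row here
              }
              public void remove() {
                throw new UnsupportedOperationException();
              }
            };
          }
        }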
    
    Suppose we are loading 50GB of csv files and each HDFS block is 256MB; the
    total number of partitions is then 200. If we allow one task per partition,
    that means 200 tasks. In carbondata one btree is created per task, so
    allowing all 200 tasks would create 200 btrees, which is not effective in
    terms of either performance or memory.
    That is why we pool multiple blocks per task in the current kettle
    implementation, and those blocks are processed in parallel. We can take the
    same approach here: use one iterator per thread and return an array of
    iterators, as sketched below.
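    
    A rough sketch of how that could look (hypothetical helper class; the only
    interface change assumed is execute() returning Iterator<Object[]>[]
    instead of Iterator<Object[]>):
    
        import java.util.Iterator;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
    
        public class ParallelStepRunner {
    
          // Drives the iterators returned by a step's execute(): one iterator
          // per thread, so a single task can pool many HDFS blocks and still
          // consume them in parallel.
          public static void runAll(Iterator<Object[]>[] iterators) {
            ExecutorService pool = Executors.newFixedThreadPool(iterators.length);
            for (final Iterator<Object[]> it : iterators) {
              pool.submit(new Runnable() {
                public void run() {
                  while (it.hasNext()) {
                    Object[] row = it.next();
                    // hand the row to the next step or the writer here
                  }
                }
              });
            }
            pool.shutdown();
          }
        }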
    
    What do you mean by datanode-scope sorting? How would we synchronize
    between multiple tasks?


> 2. Add interfaces for data loading.
> -----------------------------------
>
>                 Key: CARBONDATA-297
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-297
>             Project: CarbonData
>          Issue Type: Sub-task
>            Reporter: Ravindra Pesala
>            Assignee: Ravindra Pesala
>             Fix For: 0.2.0-incubating
>
>
> Add the major interface classes for data loading so that the subsequent jiras
> can use these interfaces in their implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
