GitHub user xuchuanyin opened a pull request:

    https://github.com/apache/carbondata/pull/2443

    [CARBONDATA-2685][DataMap] Parallelize datamap rebuild processing for segments

    Currently in carbondata, while rebuilding a datamap, one spark job is
    started for each segment and all the jobs are executed serially. If
    there are many historical segments, the rebuild takes a lot of time.
    
    Here we optimize the datamap rebuild procedure to start one task for
    each segment, so all the tasks can run in parallel within a single
    spark job.
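    
    The intended change can be illustrated with a minimal Spark sketch in
    Scala (the segment list and the rebuildDataMapForSegment helper below
    are hypothetical stand-ins, not CarbonData's actual API):
    
        import org.apache.spark.sql.SparkSession
        
        object RebuildSketch {
          // hypothetical stand-in for the per-segment rebuild work
          def rebuildDataMapForSegment(segmentId: String): String =
            s"rebuilt segment $segmentId"
        
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .master("local[*]").appName("rebuild-sketch").getOrCreate()
            val segments = Seq("0", "1", "2", "3")
        
            // Before: one Spark job per segment, launched serially in a
            // driver-side loop.
            segments.foreach { seg =>
              spark.sparkContext.parallelize(Seq(seg), 1)
                .map(rebuildDataMapForSegment).collect()
            }
        
            // After: a single Spark job with one task (partition) per
            // segment, so all segments are rebuilt in parallel.
            spark.sparkContext.parallelize(segments, segments.length)
              .map(rebuildDataMapForSegment).collect()
        
            spark.stop()
          }
        }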
    
    Be sure to complete all of the following checklist items to help us
    incorporate your contribution quickly and easily:
    
     - [ ] Any interfaces changed?
     
     - [ ] Any backward compatibility impacted?
     
     - [ ] Document update required?
    
     - [ ] Testing done
            Please provide details on
            - Whether new unit test cases have been added or why no new tests are required?
            - How it is tested? Please attach the test report.
            - Is it a performance related change? Please attach the performance test report.
            - Any additional information to help reviewers in testing this change.
           
     - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata CARBONDATA-2685_parallelize_datamap_rebuild

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2443.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2443
    
----
commit 1e2c068d9539b4b749d62392462f719c02295cd8
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-06-29T14:23:55Z

    Fix bugs in bloomfilter for dictionary/sort/date index columns
    
    For a dictionary column, carbon converts the literal value to a dict
    value, then converts the dict value to an mdk value, and finally
    stores the mdk value as the internal value in the carbonfile.
    For sort columns and date columns, the value is also encoded.
    
    Here in the bloomfilter datamap, we index on the encoded data, that is
    to say:
    For dictionary/date columns, we use the surrogate key as the bloom
    index key;
    For sort columns and ordinary dimensions, we use the plain bytes as
    the bloom index key;
    For measures, we convert the value to bytes and use that as the bloom
    index key.
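    
    A hedged Scala sketch of the key selection described above (ColumnKind
    and the conversion helpers are illustrative only, not the actual
    CarbonData types):
    
        import java.nio.ByteBuffer
        import java.nio.charset.StandardCharsets
        
        object BloomKeySketch {
          sealed trait ColumnKind
          case object DictionaryOrDate extends ColumnKind      // stores a surrogate key
          case object SortOrPlainDimension extends ColumnKind  // stores plain bytes
          case object Measure extends ColumnKind               // stores a typed value
        
          def bloomIndexKey(kind: ColumnKind, storedValue: Any): Array[Byte] =
            kind match {
              case DictionaryOrDate =>
                // index the surrogate key (an int), not the literal value
                ByteBuffer.allocate(4).putInt(storedValue.asInstanceOf[Int]).array()
              case SortOrPlainDimension =>
                // index the plain stored bytes directly
                storedValue.asInstanceOf[Array[Byte]]
              case Measure =>
                // convert the measure value to bytes and index that
                storedValue.toString.getBytes(StandardCharsets.UTF_8)
            }
        }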
    
    Changes are made:
    
    1. FieldConverters were refactored to extract common value conversion
    methods.
    2. BloomQueryModel was optimized to support converting literal values
    to internal values.
    3. Fix bugs for int/float/date/timestamp as bloom index columns.
    4. Fix bugs for dictionary/sort columns as bloom index columns.
    5. Add tests.
    6. Block (deferred) rebuild for the bloom datamap (it contains bugs
    that are not fixed in this commit; another PR has been raised).

commit f75afe9d380aa3f6821a1c2eda666da54b0d437d
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-06-29T15:27:55Z

    fix review comments

commit d51c528af6aa363298f5a188d794ba76939bd942
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-06-30T03:17:33Z

    Fix bugs in querying on a bloom column with empty values
    
    Convert null values to their corresponding internal values when
    querying on a bloom index column
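    
    A minimal illustration of the idea (all names below are hypothetical,
    not CarbonData's API): the query side must map a null filter value to
    the same internal bytes that were written for nulls, so the bloom
    filter probe matches what was indexed:
    
        import java.nio.charset.StandardCharsets
        
        object NullProbeSketch {
          // hypothetical placeholder bytes written for null values at load time
          val NullPlaceholderBytes: Array[Byte] = Array[Byte](0)
        
          // hypothetical literal-to-internal conversion for non-null values
          def convertLiteralToInternal(v: Any): Array[Byte] =
            v.toString.getBytes(StandardCharsets.UTF_8)
        
          // probe key: nulls take the same internal form they had when indexed
          def toBloomProbeKey(filterValue: Any): Array[Byte] =
            if (filterValue == null) NullPlaceholderBytes
            else convertLiteralToInternal(filterValue)
        }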

commit 9f75de4bf3dab4c991c61a72b5c2104073f044ae
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-06-30T03:28:07Z

    Add test for querying on longstring bloom index column
    
    Support for longstring as a bloom index column was already added in
    PR2403; here we only add a test for it

commit a4a6c60303f6542691ef6d230c141458965ff8e0
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-06-30T09:10:04Z

    Fix bugs for deferred rebuild for bloomfilter datamap
    
    Previously, when we implemented ISSUE-2633, deferred rebuild was
    disabled for the bloomfilter datamap due to unhandled bugs. In this
    commit, we fix the bugs and bring the feature back.
    
    Methods are extracted to reduce duplicate code.

commit 2e9c703797a199a8d957d37429386651d22e197d
Author: xuchuanyin <xuchuanyin@...>
Date:   2018-07-04T04:03:18Z

    Parallelize datamap rebuild processing for segments
    
    Currently in carbondata, while rebuilding a datamap, one spark job is
    started for each segment and all the jobs are executed serially. If
    there are many historical segments, the rebuild takes a lot of time.
    
    Here we optimize the datamap rebuild procedure to start one task for
    each segment, so all the tasks can run in parallel within a single
    spark job.

----

