GitHub user jackylk opened a pull request:

    https://github.com/apache/incubator-carbondata/pull/174

    [CARBONDATA-257] Make CarbonData readable through Spark/MapReduce program

    Users should be able to use SparkContext.newAPIHadoopFile to read CarbonData files. For example:
    ```scala
        // The key type is Void (CarbonInputFormat emits no key); each value is
        // one row of the table as an Array[Object].
        val input = sc.newAPIHadoopFile(s"${cc.storePath}/default/carbon1",
          classOf[CarbonInputFormat[Array[Object]]],
          classOf[Void],
          classOf[Array[Object]])
        val result = input.map(x => x._2.toList).collect
        result.foreach(x => println(x.mkString(", ")))
    ```
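
    The title also covers reading through a plain MapReduce program (see the second commit below). As a hedged sketch of that path, assuming CarbonInputFormat lives in org.apache.carbondata.hadoop and extends FileInputFormat, a map-only job could print one line per row; the mapper and driver names here are hypothetical:

    ```scala
        import org.apache.hadoop.conf.Configuration
        import org.apache.hadoop.fs.Path
        import org.apache.hadoop.io.{NullWritable, Text}
        import org.apache.hadoop.mapreduce.{Job, Mapper}
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
        // Assumed package; adjust to the carbondata version in use.
        import org.apache.carbondata.hadoop.CarbonInputFormat

        // Map-only job: records arrive as (Void, Array[Object]); emit each row
        // as one comma-separated line of text.
        class CarbonRowMapper extends Mapper[Void, Array[Object], NullWritable, Text] {
          override def map(key: Void, value: Array[Object],
              context: Mapper[Void, Array[Object], NullWritable, Text]#Context): Unit = {
            context.write(NullWritable.get, new Text(value.mkString(", ")))
          }
        }

        object ReadCarbonWithMapReduce {
          def main(args: Array[String]): Unit = {
            val job = Job.getInstance(new Configuration(), "read carbon files")
            job.setJarByClass(classOf[CarbonRowMapper])
            job.setInputFormatClass(classOf[CarbonInputFormat[Array[Object]]])
            job.setMapperClass(classOf[CarbonRowMapper])
            job.setNumReduceTasks(0) // map-only
            job.setOutputKeyClass(classOf[NullWritable])
            job.setOutputValueClass(classOf[Text])
            // Per this PR, the input dir is the table path, e.g. <storePath>/default/carbon1
            FileInputFormat.addInputPath(job, new Path(args(0)))
            FileOutputFormat.setOutputPath(job, new Path(args(1)))
            System.exit(if (job.waitForCompletion(true)) 0 else 1)
          }
        }
    ```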
    
    In this PR, the INPUT_DIR in the CarbonInputFormat job configuration is changed to the table path instead of the store path, since sc.newAPIHadoopFile sets it to its first parameter (`path`, which points at the table path).
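
    For illustration, the same read can be written with sc.newAPIHadoopRDD and a hand-built Configuration, which makes the INPUT_DIR handling explicit. This sketch assumes mapreduce.input.fileinputformat.inputdir, the standard Hadoop 2.x key behind FileInputFormat.INPUT_DIR, and reuses the names from the example above:

    ```scala
        import org.apache.hadoop.conf.Configuration

        val conf = new Configuration()
        // Set INPUT_DIR by hand to the table path (not the store path);
        // this is the property that sc.newAPIHadoopFile fills from `path`.
        conf.set("mapreduce.input.fileinputformat.inputdir",
          s"${cc.storePath}/default/carbon1")
        val input = sc.newAPIHadoopRDD(conf,
          classOf[CarbonInputFormat[Array[Object]]],
          classOf[Void],
          classOf[Array[Object]])
    ```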

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jackylk/incubator-carbondata inputformat

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-carbondata/pull/174.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #174
    
----
commit c67e5a91fc32484dea49ede62dbcea68ba11e98f
Author: jackylk <jacky.li...@huawei.com>
Date:   2016-09-18T23:47:55Z

    change INPUT_DIR to tablePath instead of storePath

commit f2858e247a9e968da3cd4121c31cc6f95456d804
Author: jackylk <jacky.li...@huawei.com>
Date:   2016-09-18T23:48:35Z

    add mapreduce example to read carbon files

----

