GitHub user JDrit opened a pull request:

    https://github.com/apache/spark/pull/7802

    [SPARK-9486][SQL][WIP] Add data source aliasing for external packages

    Users currently have to provide the full class name for external data 
sources, like:
    
    `sqlContext.read.format("com.databricks.spark.avro").load(path)`
    
    This allows external data source packages to register themselves using a 
ServiceLoader so that they can add a custom alias, like:
    
    `sqlContext.read.format("avro").load(path)`
    
    This lets external data source packages be referenced with the same short 
format strings as the built-in data sources like parquet, json, etc.
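    A rough sketch of the mechanism (trait and method names here are 
illustrative assumptions, not necessarily the identifiers in the patch): an 
external package implements a small registration trait, Spark discovers all 
implementations via `java.util.ServiceLoader`, and alias lookup fails loudly 
if two packages claim the same name.
    
    ```scala
    // Hypothetical registration trait an external package would implement.
    trait DataSourceRegister {
      def shortName(): String
    }
    
    // e.g. spark-avro's provider could expose the "avro" alias.
    class AvroSource extends DataSourceRegister {
      override def shortName(): String = "avro"
    }
    
    // In the real implementation the providers would come from
    // ServiceLoader.load(classOf[DataSourceRegister], classLoader); here the
    // loaded set is passed in explicitly so the resolution logic is testable.
    def lookupDataSource(alias: String,
                         loaded: Seq[DataSourceRegister]): DataSourceRegister =
      loaded.filter(_.shortName() == alias) match {
        case Seq(single) => single
        case Seq() =>
          throw new ClassNotFoundException(s"Failed to find data source: $alias")
        case multiple =>
          throw new RuntimeException(
            s"Multiple data sources registered for '$alias': " +
              multiple.map(_.getClass.getName).mkString(", "))
      }
    ```
    
    With a registration like this, `sqlContext.read.format("avro")` would 
resolve through the loaded providers instead of requiring the full class name.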

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JDrit/spark service_loader

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/7802.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #7802
    
----
commit 946186e3f17ddcc54acf2be1a34aebf246b06d2f
Author: Joseph Batchik <joseph.batc...@cloudera.com>
Date:   2015-07-30T18:28:17Z

    started working on service loader

commit 208a2a854218069ff484e78ad13423baff21ffc6
Author: Joseph Batchik <joseph.batc...@cloudera.com>
Date:   2015-07-30T20:32:57Z

    changes to do error catching if there are multiple data sources

----


