[ https://issues.apache.org/jira/browse/SPARK-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14303982#comment-14303982 ]

Yin Huai commented on SPARK-5420:
---------------------------------

h3. End user APIs added to SQLContext (load related)
h4. Load data through a data source and create a DataFrame
{code}
// This method is used to load data through a file based data source (e.g.
// Parquet). We will use the default data source. Right now, it is Parquet.
def load(path: String): DataFrame
def load(
      dataSourceName: String,
      option: (String, String),
      options: (String, String)*): DataFrame
// This is for Java users.
def load(
      dataSourceName: String,
      options: java.util.Map[String, String]): DataFrame
{code}
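
For illustration, here is a hypothetical usage sketch of the Scala overloads above (the variable name sqlContext, the example paths, the "json" data source name, and the "path" option key are assumptions for the sketch, not part of this comment):
{code}
// Load data from an example path using the default data source (Parquet).
val parquetDF = sqlContext.load("/data/events.parquet")

// Load data through an explicitly named data source, passing options as
// (key, value) pairs; the source name and option key are assumed here.
val jsonDF = sqlContext.load("json", "path" -> "/data/events.json")
{code}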

h3. End user APIs added to HiveContext (load related)
h4. Create a metastore table for the existing data
{code}
// This method is used to create a table from a file based data source. We will
// use the default data source. Right now, it is Parquet.
def createTable(tableName: String, path: String, allowExisting: Boolean): Unit
def createTable(
      tableName: String,
      dataSourceName: String,
      allowExisting: Boolean,
      option: (String, String),
      options: (String, String)*): Unit
def createTable(
      tableName: String,
      dataSourceName: String,
      schema: StructType,
      allowExisting: Boolean,
      option: (String, String),
      options: (String, String)*): Unit
// This one is for Java users.
def createTable(
      tableName: String,
      dataSourceName: String,
      allowExisting: Boolean,
      options: java.util.Map[String, String]): Unit
// This one is for Java users.
def createTable(
      tableName: String,
      dataSourceName: String,
      schema: StructType,
      allowExisting: Boolean,
      options: java.util.Map[String, String]): Unit
{code} 
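
For illustration, here is a hypothetical usage sketch of the createTable overloads above (the variable name hiveContext, the table names, the example paths, the "json" data source name, and the "path" option key are assumptions for the sketch, not part of this comment):
{code}
// Create a metastore table over existing data using the default data source (Parquet).
hiveContext.createTable("events", "/data/events.parquet", allowExisting = true)

// Create a table through an explicitly named data source with options;
// the third argument is allowExisting = false.
hiveContext.createTable("events_json", "json", false, "path" -> "/data/events.json")
{code}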

> Cross-language load/store functions for creating and saving DataFrames
> ----------------------------------------------------------------------
>
>                 Key: SPARK-5420
>                 URL: https://issues.apache.org/jira/browse/SPARK-5420
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Patrick Wendell
>            Assignee: Yin Huai
>            Priority: Blocker
>             Fix For: 1.3.0
>
>
> We should have standard APIs for loading or saving a table from a data 
> store. Per comment discussion:
> {code}
> def loadData(datasource: String, parameters: Map[String, String]): DataFrame
> def loadData(datasource: String, parameters: java.util.Map[String, String]): DataFrame
> def storeData(datasource: String, parameters: Map[String, String]): DataFrame
> def storeData(datasource: String, parameters: java.util.Map[String, String]): DataFrame
> {code}
> Python should have this too.


