[jira] [Assigned] (SPARK-14832) Refactor DataSource to ensure schema is inferred only once when creating a file stream

2016-04-21 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-14832:


Assignee: Apache Spark  (was: Tathagata Das)

> Refactor DataSource to ensure schema is inferred only once when creating a 
> file stream
> --
>
> Key: SPARK-14832
> URL: https://issues.apache.org/jira/browse/SPARK-14832
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL, Streaming
>Reporter: Tathagata Das
>Assignee: Apache Spark
>
> When creating a file stream using sqlContext.read.stream(), existing files
> are scanned twice to infer the schema:
> - once, when creating the DataSource + StreamingRelation in
>   DataFrameReader.stream()
> - again, when creating the streaming Source from the DataSource, in
>   DataSource.createSource()
> Instead, the schema should be inferred only once, when the DataFrame is
> created, and the streaming source should simply reuse that schema, as
> sketched below.
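A minimal Scala sketch of the idea follows. It is illustrative only: the names DataSourceSketch, StreamingRelationSketch, FileStreamSourceSketch, and inferSchemaByScanningFiles are hypothetical stand-ins for Spark's DataSource, StreamingRelation, and FileStreamSource, and the point is simply that a lazily cached schema, computed once when the DataFrame is built, can be reused when the streaming source is created instead of scanning the files a second time.

// Hypothetical sketch; not Spark's actual DataSource/StreamingRelation API.
case class StreamingRelationSketch(schema: Seq[(String, String)])

class FileStreamSourceSketch(path: String, val schema: Seq[(String, String)])

class DataSourceSketch(path: String) {

  // Infer the schema exactly once, lazily, by scanning the existing files.
  private lazy val inferredSchema: Seq[(String, String)] =
    inferSchemaByScanningFiles(path)

  // Called while building the DataFrame (DataFrameReader.stream() in Spark):
  // triggers the single scan and caches its result.
  def createStreamingRelation(): StreamingRelationSketch =
    StreamingRelationSketch(inferredSchema)

  // Called later, when the streaming query starts (DataSource.createSource()
  // in Spark): reuses the cached schema instead of scanning files again.
  def createSource(): FileStreamSourceSketch =
    new FileStreamSourceSketch(path, inferredSchema)

  // Placeholder for the expensive scan; Spark would list files and infer types.
  private def inferSchemaByScanningFiles(p: String): Seq[(String, String)] = {
    println(s"scanning files under $p to infer schema")
    Seq("value" -> "string")
  }
}

object SchemaInferredOnceDemo extends App {
  val ds = new DataSourceSketch("/tmp/input")
  ds.createStreamingRelation() // first (and only) scan happens here
  ds.createSource()            // no second scan; the cached schema is reused
}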






[jira] [Assigned] (SPARK-14832) Refactor DataSource to ensure schema is inferred only once when creating a file stream

2016-04-21 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-14832:


Assignee: Tathagata Das  (was: Apache Spark)

> Refactor DataSource to ensure schema is inferred only once when creating a 
> file stream
> --
>
> Key: SPARK-14832
> URL: https://issues.apache.org/jira/browse/SPARK-14832
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL, Streaming
>Reporter: Tathagata Das
>Assignee: Tathagata Das
>
> When creating a file stream using sqlContext.read.stream(), existing files
> are scanned twice to infer the schema:
> - once, when creating the DataSource + StreamingRelation in
>   DataFrameReader.stream()
> - again, when creating the streaming Source from the DataSource, in
>   DataSource.createSource()
> Instead, the schema should be inferred only once, when the DataFrame is
> created, and the streaming source should simply reuse that schema.


