[ https://issues.apache.org/jira/browse/SPARK-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074733#comment-15074733 ]

Maciej Bryński commented on SPARK-11437:
----------------------------------------

There is such an API, but it is not public:

{code}
from pyspark.sql.types import _verify_type
{code}
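
For what it's worth, a rough sketch of how that private helper can be exercised. The signature is assumed from the 1.x line, where it is _verify_type(obj, dataType); being private, it may change or move between releases, and the schema here is purely illustrative:

{code}
from pyspark.sql.types import (_verify_type, StructType, StructField,
                               LongType, StringType)

# Hypothetical schema for illustration only.
schema = StructType([
    StructField("id", LongType()),
    StructField("name", StringType()),
])

_verify_type((1, "alice"), schema)   # matches the schema, returns None
_verify_type(("alice", 1), schema)   # raises TypeError (string where LongType is expected)
{code}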

> createDataFrame shouldn't .take() when provided schema
> ------------------------------------------------------
>
>                 Key: SPARK-11437
>                 URL: https://issues.apache.org/jira/browse/SPARK-11437
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>            Reporter: Jason White
>            Assignee: Jason White
>             Fix For: 1.6.0
>
>
> When creating a DataFrame from an RDD in PySpark, `createDataFrame` calls
> `.take(10)` to verify that the first 10 rows of the RDD match the provided
> schema. This is similar to https://issues.apache.org/jira/browse/SPARK-8070,
> but that issue affected cases where no schema was provided.
> Verifying only the first 10 rows is of limited utility and causes the DAG to
> be executed non-lazily. If this verification is necessary, I believe it should
> be done lazily on all rows. However, since the caller is providing a schema to
> follow, I think it's acceptable to simply fail if the schema is incorrect.
> https://github.com/apache/spark/blob/master/python/pyspark/sql/context.py#L321-L325
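
For illustration only, a rough sketch of the contrast drawn above; eager_check and lazy_check are hypothetical helpers, not Spark APIs. The first mirrors the take(10) sampling pattern described at the linked lines, the second the "verify lazily on all rows" alternative:

{code}
from pyspark.sql.types import _verify_type

def eager_check(rdd, schema):
    # Triggers a Spark job immediately and only inspects a 10-row sample.
    for row in rdd.take(10):
        _verify_type(row, schema)
    return rdd

def lazy_check(rdd, schema):
    # Triggers no job here; every row is checked only when the DataFrame
    # is actually evaluated.
    def verify(row):
        _verify_type(row, schema)
        return row
    return rdd.map(verify)
{code}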


