GitHub user marmbrus commented on the pull request:

    https://github.com/apache/spark/pull/1759#issuecomment-51810035
  
    I was planning to query the database or file at compile time for these 
sorts of data sources.  While you are right that this is less `deterministic`, 
it's not clear to me that it is desirable to have a compiler that 
deterministically allows you to write programs that don't line up with the 
schema of the data.  If the schema changes and my program is now invalid, I 
want the compilation to fail!
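
    To make that failure mode concrete, here is a minimal, macro-free sketch 
of the idea: if the record type below were generated from the data source's 
schema at compile time (as the interface in this PR would do), then code 
referencing a dropped column stops compiling. The `Person` class and its 
fields are illustrative, not taken from this PR.

    ```scala
    // Stand-in for a record type that a compile-time interface would
    // generate from the data source's schema. Illustrative only.
    case class Person(name: String, age: Int)

    object SchemaCheckDemo {
      def main(args: Array[String]): Unit = {
        val people = Seq(Person("Alice", 30), Person("Bob", 17))

        // Statically checked access: if the schema loses the `age` column,
        // the regenerated record type loses the `age` field, and this line
        // becomes a compile error instead of a runtime failure.
        val adults = people.filter(_.age > 21).map(_.name)
        println(adults.mkString(", "))
      }
    }
    ```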
    
    Another note: this is not intended as the only interface to Spark SQL, and 
I think we should plan to support the less magical interface long term for 
cases where determining the schema at compile time is not feasible.
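
    For reference, a rough sketch of that less magical, string-based 
interface as it exists today (it assumes a `SparkContext` named `sc` and a 
table `people` already registered in the catalog). Because the query is an 
ordinary string, a misspelled or dropped column only surfaces when the job 
runs, not when it compiles:

    ```scala
    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    object RuntimeSqlDemo {
      def run(sc: SparkContext): Unit = {
        val sqlContext = new SQLContext(sc)

        // The query is checked against the schema at run time, not compile
        // time: renaming `age` in the table breaks this only when it runs.
        val teenagers = sqlContext.sql(
          "SELECT name FROM people WHERE age >= 13 AND age <= 19")

        teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
      }
    }
    ```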
    
    Finally, I think the place where this functionality will be most useful 
is the interactive Spark Shell.  There you want the code to be as concise as 
possible, and the line between "compilation" and "execution" is pretty blurry 
already.

