[ https://issues.apache.org/jira/browse/SPARK-18209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629273#comment-15629273 ]
Nattavut Sutyanyong commented on SPARK-18209:
---------------------------------------------

The challenge in Spark is the interweaving of Dataset/DataFrame APIs and SQL constructs. I gather from the discussions that:

1. Any temporary objects (global views, local views, UDFs, etc.) that do not have a SQL definition should not be allowed to be referenced in persistent objects in SQL. Persistent objects are objects whose definitions are kept in persistent storage, such as an external metastore, so that they can be retrieved in future Spark sessions.

2. The definitions of persistent objects should be stored as closely as possible to the semantics expressed by Spark users. For the scope of this JIRA, I agree with [~vssrinath]'s proposal to store the original SQL statement of the view definition, with minimal augmentation (such as recording the current database in the comment section), if deemed necessary. The expansion of the view definition is then done when the statement referencing the view is compiled.

> More robust view canonicalization without full SQL expansion
> ------------------------------------------------------------
>
>                 Key: SPARK-18209
>                 URL: https://issues.apache.org/jira/browse/SPARK-18209
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Reynold Xin
>            Priority: Critical
>
> Spark SQL currently stores views by analyzing the provided SQL and then
> generating fully expanded SQL out of the analyzed logical plan. This is
> actually a very error-prone way of doing it, because:
> 1. It is non-trivial to guarantee that the generated SQL is correct without
> being extremely verbose, given the current set of operators.
> 2. We need extensive testing for all combinations of operators.
> 3. Whenever we introduce a new logical plan operator, we need to be super
> careful because it might break SQL generation.
> This is the main reason the broadcast join hint has taken forever to be merged:
> it is very difficult to guarantee correctness.
>
> Given that the two primary reasons to do view canonicalization are to provide
> the database context as well as star expansion, I think we can do this
> through a simpler approach: take the user-given SQL, analyze it, and
> just wrap the original SQL with a SELECT clause at the outer level, storing the
> database as a hint.
>
> For example, given the following view creation SQL:
> {code}
> USE DATABASE my_db;
> CREATE TABLE my_table (id int, name string);
> CREATE VIEW my_view AS SELECT * FROM my_table WHERE id > 10;
> {code}
>
> We store the following SQL instead:
> {code}
> SELECT /*+ current_db: `my_db` */ id, name FROM (SELECT * FROM my_table WHERE
> id > 10);
> {code}
>
> During parsing time, we expand the view using the provided database context.
>
> (We don't need to follow exactly the same hint; I'm merely illustrating
> the high-level approach here.)
>
> Note that there is a chance that the underlying base table(s)' schema changes
> and the stored schema of the view might then differ from the actual SQL schema.
> In that case, I think we should throw an exception at runtime to warn users.
> This exception can be controlled by a flag.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
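The wrap-and-hint approach sketched in the issue can be illustrated in a few lines. This is a hypothetical sketch, not Spark's actual implementation: the function names (`canonicalize_view`, `extract_db_hint`) and the exact hint syntax are illustrative, following the `/*+ current_db: ... */` example from the description.

```python
import re

def canonicalize_view(original_sql, current_db, columns):
    # Instead of regenerating SQL from the analyzed plan, wrap the user's
    # original SQL in an outer SELECT that pins the analyzed column list
    # and records the defining database as a comment-style hint.
    col_list = ", ".join(columns)
    return (f"SELECT /*+ current_db: `{current_db}` */ {col_list} "
            f"FROM ({original_sql})")

def extract_db_hint(stored_sql):
    # At parse time, recover the database hint so unqualified table names
    # in the inner query can be resolved against the defining database.
    m = re.search(r"/\*\+ current_db: `([^`]+)` \*/", stored_sql)
    return m.group(1) if m else None

stored = canonicalize_view("SELECT * FROM my_table WHERE id > 10",
                           "my_db", ["id", "name"])
print(stored)
# SELECT /*+ current_db: `my_db` */ id, name FROM (SELECT * FROM my_table WHERE id > 10)
print(extract_db_hint(stored))
# my_db
```

Because the stored column list comes from the analysis at view-creation time, a later schema change in `my_table` would make the outer SELECT disagree with the inner query, which is the mismatch the description proposes detecting at runtime.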