Github user rdblue commented on the issue:

    https://github.com/apache/spark/pull/22281
  
    Thanks for working on this, @HyukjinKwon. I think it's great that this is 
getting the conversation started. I agree with @cloud-fan that we should think 
through how we want v2 to work.
    
    The read path is fairly straightforward: we just need to make sure the 
`ResolveRelations` analyzer rule returns a v2 relation. There are some details 
to talk through here as well when we think about [multi-catalog 
support](https://github.com/apache/spark/pull/21306), but the difficulty is 
mostly on the write side.
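    To make the read-side idea concrete, here is a rough sketch of what such a resolution rule could look like. This is only illustrative: `lookupV2Table` and the relation constructor are placeholder names, not the actual API.

    ```scala
    // Rough sketch only: a resolution rule that substitutes a v2 relation
    // when the source supports it. `lookupV2Table` and the relation
    // constructor here are placeholders, not the real API.
    object ResolveV2Relations extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperators {
        case u @ UnresolvedRelation(ident) =>
          lookupV2Table(ident) match {
            case Some(table) => DataSourceV2Relation.create(table)
            case None        => u  // fall through to the existing v1 resolution
          }
      }
    }
    ```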
    
    For SQL operations that result in writes, we want to use the new v2 logical 
plans. First, we need to add those plans and define exactly what they do so 
that we have documented and reliable behavior; I have a few open PRs for this. 
Then, I think we need a set of rules to convert from the plans produced by the 
parser (`CreateTable` with a `query`) to the new logical plans 
(`CreateTableAsSelect`) if the relation resolves to a v2 relation.
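    A sketch of that conversion rule, with the caveat that the `CreateTableAsSelect` constructor arguments shown are placeholders; the point is only that `CreateTable` with a `query` maps to the v2 CTAS plan when the source resolves to v2:

    ```scala
    // Rough sketch of the conversion described above. The exact shape of
    // CreateTableAsSelect is an assumption; only the CreateTable-with-query
    // to CTAS mapping is the point.
    object V2WriteConversions extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperators {
        case CreateTable(tableDesc, mode, Some(query)) if isV2Provider(tableDesc) =>
          CreateTableAsSelect(
            tableDesc.identifier,
            query,
            tableDesc.properties,
            ignoreIfExists = mode == SaveMode.Ignore)
      }
    }
    ```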
    
    For v2 table creation (`CreateTable` without a `query`), I think the 
current USING syntax would work as it does today. I'd like to move this to the 
new catalog API when we add it, but I think that is mostly orthogonal.
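    For reference, the existing USING syntax for a `CreateTable` without a `query`, which would keep working as-is; `parquet` is just an example provider:

    ```scala
    spark.sql("""
      CREATE TABLE db.events (id BIGINT, ts TIMESTAMP)
      USING parquet
    """)
    ```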

