Github user sureshthalamati commented on the issue:

    https://github.com/apache/spark/pull/16209
  
    @gatorsmile I like the DDL schema format approach. But the method 
`CatalystSqlParser.parseTableSchema(sql)` will only work if the target database 
data type the user wants to specify also exists in Spark. For example, if the 
user wants to specify `CLOB(200K)`, it will not work, because that is not a 
valid data type in Spark. 
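
    A quick illustration of what I mean (a spark-shell style sketch; the exact 
error message will differ, and `CLOB(200K)` just stands in for any database-only type):

```scala
import org.apache.spark.sql.catalyst.parser.{CatalystSqlParser, ParseException}

// Types that Spark knows about parse into a StructType just fine.
val schema = CatalystSqlParser.parseTableSchema("name VARCHAR(128), id INT")

// A target-database-only type such as CLOB is not a Spark data type,
// so the same call fails with a ParseException.
try {
  CatalystSqlParser.parseTableSchema("description CLOB(200K)")
} catch {
  case e: ParseException => println(s"rejected: ${e.getMessage}")
}
```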
    
    How about a simple comma-separated list, with the restriction that a 
comma cannot appear in a column name when using this option? I am guessing 
that would work in most scenarios.
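
    Roughly something like this (the option value format and the names here are 
just illustrative, not a final proposal):

```scala
// Hypothetical option value: "columnName dbType" pairs separated by commas,
// relying on the restriction that column names never contain a comma.
val columnTypesOption = "description CLOB(200K), id BIGINT"

val columnTypes = columnTypesOption.split(",").map(_.trim).map { colDef =>
  val firstSpace = colDef.indexOf(' ')
  (colDef.substring(0, firstSpace), colDef.substring(firstSpace + 1).trim)
}
// columnTypes: Array((description,CLOB(200K)), (id,BIGINT))
```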
    
    Any suggestions?


