Github user sureshthalamati commented on the pull request:

    https://github.com/apache/spark/pull/8676#issuecomment-139410536
  
    @rxin 
    
    Even if Spark is running on JDK 1.7, customers using older versions of
    JDBC drivers will still run into an AbstractMethodError. I think requiring
    customers to move to newer drivers that implement the getSchema() method
    would be an unnecessary burden.
    
    After implementing the current approach I got curious about how the JDBC
    read path finds the metadata, and learned that
    org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.resolveTable also
    uses s"SELECT * FROM $table WHERE 1=0" to get the column information.
    
    An alternative approach is to add getMetadataQuery(table: String) to the
    JdbcDialect interface, instead of the getTableExistsQuery() implemented in
    the current pull request. A single metadata query can be used both to
    determine whether the table exists in the write case and to fetch the
    column type information in the read case. The write call might be a
    millisecond slower for dialects that would otherwise specify
    "select 1 from $table limit 1" instead of "select * from $table limit 1",
    but the advantage is that one method on the interface addresses both cases.
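
    To make the idea concrete, here is a rough sketch of what such a dialect
    method could look like (illustrative only, not the final signature or
    Spark's actual JdbcDialect source; dialects would override the default
    query as needed):

    ```scala
    // Hypothetical sketch of the proposed addition: one query that serves
    // both the write path (does the table exist?) and the read path
    // (fetch column types via ResultSetMetaData).
    trait DialectMetadataQuery {
      def getMetadataQuery(table: String): String =
        s"SELECT * FROM $table WHERE 1=0"
    }
    ```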
    
    Any comments?


