Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16685#discussion_r97619926
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala ---
    @@ -722,14 +724,246 @@ object JdbcUtils extends Logging {
       }
     
       /**
    +   * Check whether a table exists in a given database.
    +   *
    +   * @return true if the table exists.
    +   */
    +  def checkTableExists(targetDb: String, tableName: String): Boolean = {
    +    val dbc: Connection = DriverManager.getConnection(targetDb)
    +    try {
    +      val dbm = dbc.getMetaData()
    +      // Check if the table exists. If it exists, perform an upsert.
    +      // Otherwise, do a simple DataFrame write to the DB.
    +      val tables = dbm.getTables(null, null, tableName, null)
    +      tables.next() // next() returns false when the result set is empty
    +    } finally {
    +      dbc.close() // always release the connection, even on failure
    +    }
    +  }
    +
    +  // Provide a reasonable starting batch size for database operations.
    +  private val DEFAULT_BATCH_SIZE: Int = 200
    +
    +  // Limit the number of database connections. Some DBs suffer when there are
    +  // many open connections.
    +  private val DEFAULT_MAX_CONNECTIONS: Int = 50
    --- End diff --
    
    Well, since Spark 2.1, we already provide a parameter for limiting the maximum
    number of concurrent JDBC connections when inserting data into JDBC tables. The
    parameter is `numPartitions`.


