GitHub user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16664#discussion_r100565522
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/util/QueryExecutionListener.scala ---
    @@ -44,27 +44,50 @@ trait QueryExecutionListener {
        * @param qe the QueryExecution object that carries detail information like logical plan,
        *           physical plan, etc.
        * @param durationNs the execution time for this query in nanoseconds.
    -   *
    -   * @note This can be invoked by multiple different threads.
    +   * @param outputParams the output parameters when the method is invoked as a result of a
    +   *                     write operation; `None` in the case of a read
        */
       @DeveloperApi
    -  def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit
    -
    +  def onSuccess(
    +      funcName: String,
    +      qe: QueryExecution,
    +      durationNs: Long,
    +      outputParams: Option[OutputParams]): Unit
       /**
        * A callback function that will be called when a query execution failed.
        *
        * @param funcName the name of the action that triggered this query.
        * @param qe the QueryExecution object that carries detail information like logical plan,
        *           physical plan, etc.
        * @param exception the exception that failed this query.
    +   * @param outputParams the output parameters when the method is invoked as a result of a
    +   *                     write operation; `None` in the case of a read
        *
        * @note This can be invoked by multiple different threads.
        */
       @DeveloperApi
    -  def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit
    +  def onFailure(
    +      funcName: String,
    +      qe: QueryExecution,
    +      exception: Exception,
    +      outputParams: Option[OutputParams]): Unit
     }
     
    -
    +/**
    + * Contains extra information useful for query analysis, passed on from the methods in
    + * `org.apache.spark.sql.DataFrameWriter` while writing to a datasource.
    + * @param datasourceType the type of data source written to, e.g. csv, parquet, json, hive, jdbc
    + * @param destination the path or table name written to
    + * @param options the map containing the output options for the underlying datasource,
    + *                specified via the `org.apache.spark.sql.DataFrameWriter#option` method
    + * @param writeParams any extra information that the write method wants to provide
    + */
    +@DeveloperApi
    +case class OutputParams(
    --- End diff --
    
    Sorry, the arguments to this class seem to have been picked pretty randomly.
    Can you explain in more detail why these parameters were picked?
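
    For reference, here is a minimal sketch of what a listener implementing the
    proposed signatures might look like. The `OutputParams` field names below
    (`datasourceType`, `destination`, `options`) are taken from the Scaladoc
    above; the constructor itself is truncated in the diff, so the exact field
    types are an assumption, not the PR's actual definition:

        // Hypothetical sketch, not code from the PR. Assumes the OutputParams
        // fields named in the Scaladoc above; exact types are unverified.
        import org.apache.spark.sql.execution.QueryExecution
        import org.apache.spark.sql.util.{OutputParams, QueryExecutionListener}

        class WriteAuditListener extends QueryExecutionListener {
          override def onSuccess(
              funcName: String,
              qe: QueryExecution,
              durationNs: Long,
              outputParams: Option[OutputParams]): Unit = {
            // Only write actions carry output parameters; reads pass None.
            outputParams.foreach { p =>
              println(s"[$funcName] wrote ${p.datasourceType} to ${p.destination} " +
                s"in ${durationNs / 1e6} ms with options ${p.options}")
            }
          }

          override def onFailure(
              funcName: String,
              qe: QueryExecution,
              exception: Exception,
              outputParams: Option[OutputParams]): Unit = {
            val dest = outputParams.map(_.destination).getOrElse("<read>")
            println(s"[$funcName] failed for $dest: ${exception.getMessage}")
          }
        }

    A listener like this would be registered through the existing
    `spark.listenerManager.register(...)` hook, same as today.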

