kevinjmh edited a comment on issue #3357: [CARBONDATA-3491] Return 
updated/deleted rows count when execute update/delete sql
URL: https://github.com/apache/carbondata/pull/3357#issuecomment-520737311
 
 
   I tried it, and your test case passes.
   
   The modification I made in the mentioned class:
   ```scala
     override def output: Seq[Attribute] =
       Seq(AttributeReference("Total Rows Updated", LongType, nullable = false)())
   ```
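
The reason that one-line change surfaces the count: Spark renders a command's result rows under the schema declared by its `output` attributes. Below is a minimal, Spark-free sketch of that contract; the `Attribute`, `RunnableCommand`, and `ToyUpdateCommand` names are simplified stand-ins for illustration, not the real Spark or CarbonData classes.

```scala
// Toy model of the RunnableCommand contract: a command declares its
// result schema via `output`, and the rows returned by `run` are
// rendered under that header. Simplified stand-in types, not Spark's API.
case class Attribute(name: String, dataType: String, nullable: Boolean)

trait RunnableCommand {
  def output: Seq[Attribute]   // empty Seq means "no result rows to show"
  def run(): Seq[Seq[Any]]     // one inner Seq per result row
}

// Toy analogue of a command carrying the updated-row count
case class ToyUpdateCommand(updatedRows: Long) extends RunnableCommand {
  override def output: Seq[Attribute] =
    Seq(Attribute("Total Rows Updated", "bigint", nullable = false))
  override def run(): Seq[Seq[Any]] = Seq(Seq(updatedRows))
}

object Demo {
  def main(args: Array[String]): Unit = {
    val cmd = ToyUpdateCommand(updatedRows = 3L)
    println(cmd.output.map(_.name).mkString("|")) // header from `output`
    cmd.run().foreach(row => println(row.mkString("|")))
  }
}
```

With the real classes, the command would analogously return one row carrying the count under the `Total Rows Updated` attribute, which is why the extra line appears in the analyzed plan.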
   
   The plan is similar to yours, but has one extra line, `Total Rows Updated: bigint`:
   ```
   == Parsed Logical Plan ==
   UpdateTable 'UnresolvedRelation `test_return_row_count`, [b], select 'ddd' from test_return_row_count, test_return_row_count, where a = 'ccc'
   
   == Analyzed Logical Plan ==
   Total Rows Updated: bigint
   ProjectForUpdate 'UnresolvedRelation `test_return_row_count`, [b]
   +- Project [a#103, b#104, c#105, tupleId#282, b-updatedColumn#283]
      +- Filter (a#103 = ccc)
         +- SubqueryAlias test_return_row_count
            +- SubqueryAlias test_return_row_count
               +- Project [a#103, b#104, c#105, tupleId#282, ddd AS b-updatedColumn#283]
                  +- SubqueryAlias test_return_row_count
                     +- Project [a#103, b#104, c#105, UDF:getTupleId() AS tupleId#282]
                        +- SubqueryAlias test_return_row_count
                           +- SubqueryAlias test_return_row_count
                              +- Relation[a#103,b#104,c#105] CarbonDatasourceHadoopRelation
   
   == Optimized Logical Plan ==
   CarbonProjectForUpdateCommand Project [a#103, c#105, UDF:getTupleId() AS tupleId#282, ddd AS b-updatedColumn#283], test_return_row_count, [b]
   
   == Physical Plan ==
   Execute CarbonProjectForUpdateCommand
      +- CarbonProjectForUpdateCommand Project [a#103, c#105, UDF:getTupleId() AS tupleId#282, ddd AS b-updatedColumn#283], test_return_row_count, [b]
   ```
   
   For Spark 2.3, please check the following code in `Dataset.scala`:
   ```scala
     private[sql] def showString(
         _numRows: Int, truncate: Int = 20, vertical: Boolean = false): String = {
       val numRows = _numRows.max(0).min(Int.MaxValue - 1)
       val newDf = toDF()
       val castCols = newDf.logicalPlan.output.map { col =>
         // Since binary types in top-level schema fields have a specific format to print,
         // so we do not cast them to strings here.
         if (col.dataType == BinaryType) {
           Column(col)
         } else {
           Column(col).cast(StringType)
         }
       }
       val takeResult = newDf.select(castCols: _*).take(numRows + 1)
   ...
     }
   
     def select(cols: Column*): DataFrame = withPlan {
       Project(cols.map(_.named), planWithBarrier)
     }
   ```
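
The relevant detail in those lines is that `select` builds its new `Project` on `planWithBarrier`, the `AnalysisBarrier` wrapper introduced in the linked PR, so the column casting done by `show()` does not push an already-analyzed command plan through the analyzer again. A rough, Spark-free sketch of that barrier idea follows; every type name here is a simplified toy, not Spark's actual class.

```scala
// Toy model of the AnalysisBarrier idea: the analyzer recurses into a
// plan but stops at a barrier node, leaving the already-analyzed subtree
// (e.g. an executed command) untouched. Simplified stand-in types.
sealed trait LogicalPlan
case class Relation(name: String) extends LogicalPlan
case class AnalysisBarrier(child: LogicalPlan) extends LogicalPlan
case class Project(child: LogicalPlan) extends LogicalPlan

def analyze(plan: LogicalPlan): LogicalPlan = plan match {
  case b: AnalysisBarrier => b                      // stop: do not re-analyze
  case Project(child)     => Project(analyze(child))
  case r: Relation        => AnalysisBarrier(r)     // stand-in for real analysis
}
```

In this sketch, `analyze(Project(AnalysisBarrier(r)))` returns the plan with the subtree below the barrier unchanged, which is the behavior the Spark 2.3 `select` relies on when wrapping a command's result for display.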
   
   See https://github.com/apache/spark/pull/20214/ for background.
