kuczoram commented on a change in pull request #1327:
URL: https://github.com/apache/hive/pull/1327#discussion_r464373202



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
##########
@@ -1063,7 +1076,11 @@ public void process(Object row, int tag) throws HiveException {
       // RecordUpdater expects to get the actual row, not a serialized version of it.  Thus we
       // pass the row rather than recordValue.
       if (conf.getWriteType() == AcidUtils.Operation.NOT_ACID || conf.isMmTable() || conf.isCompactionTable()) {
-        rowOutWriters[findWriterOffset(row)].write(recordValue);
+        writerOffset = bucketId;
+        if (!conf.isCompactionTable()) {
+          writerOffset = findWriterOffset(row);
+        }
+        rowOutWriters[writerOffset].write(recordValue);

Review comment:
    They should be in order, because the result temp table for the compaction is created with "clustered by (`bucket`) sorted by (`bucket`, `originalTransaction`, `rowId`) into 10 buckets". Because of this, I expect the rows in the table to be ordered by bucket, originalTransaction and rowId, and I haven't seen anything to the contrary during my testing.
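    Just to illustrate the assumption (a standalone sketch with made-up names, not Hive code): once rows are sorted by (bucket, originalTransaction, rowId), each bucket's rows form one contiguous run, so the bucket id only ever moves forward:

```java
import java.util.Comparator;
import java.util.List;

public class BucketOrderSketch {
  // Hypothetical row key; the field names mirror the sort columns quoted above.
  record RowKey(int bucket, long originalTransaction, long rowId) {}

  public static void main(String[] args) {
    List<RowKey> rows = new java.util.ArrayList<>(List.of(
        new RowKey(1, 10, 2), new RowKey(0, 12, 1),
        new RowKey(0, 10, 0), new RowKey(1, 11, 3)));

    // Same ordering as "sorted by (`bucket`, `originalTransaction`, `rowId`)".
    rows.sort(Comparator.comparingInt(RowKey::bucket)
        .thenComparingLong(RowKey::originalTransaction)
        .thenComparingLong(RowKey::rowId));

    int previousBucket = Integer.MIN_VALUE;
    for (RowKey row : rows) {
      // The bucket id never decreases, so one writer offset per bucket is enough.
      if (row.bucket() < previousBucket) {
        throw new IllegalStateException("bucket went backwards");
      }
      previousBucket = row.bucket();
      System.out.println(row);
    }
  }
}
```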
    I don't think we can close the writers here, because they are also used in the closeOp method, and they are closed there.
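    Here is a rough sketch of the lifecycle I mean (hypothetical writer/operator types, not the real FileSinkOperator): process() only picks a writer by offset and writes to it, while closeOp() is the single place that closes all the writers:

```java
import java.io.Closeable;
import java.io.IOException;

public class WriterLifecycleSketch {
  interface RowWriter extends Closeable {
    void write(Object row) throws IOException;
  }

  private final RowWriter[] rowOutWriters;
  private final boolean compactionTable;

  WriterLifecycleSketch(RowWriter[] writers, boolean compactionTable) {
    this.rowOutWriters = writers;
    this.compactionTable = compactionTable;
  }

  void process(Object row, int bucketId) throws IOException {
    // For the compaction temp table the bucket id is the writer offset directly;
    // otherwise an offset would be computed from the row (findWriterOffset in Hive).
    int writerOffset = compactionTable ? bucketId : computeOffsetFromRow(row);
    rowOutWriters[writerOffset].write(row);
    // Closing the writer here would break closeOp(), which still needs it.
  }

  void closeOp() throws IOException {
    for (RowWriter writer : rowOutWriters) {
      writer.close();
    }
  }

  private int computeOffsetFromRow(Object row) {
    // Placeholder for the real bucketing logic.
    return Math.floorMod(row.hashCode(), rowOutWriters.length);
  }
}
```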



