vikramahuja1001 commented on a change in pull request #3917:
URL: https://github.com/apache/carbondata/pull/3917#discussion_r512550325
########## File path: core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
##########
@@ -1105,28 +1109,79 @@ public static void cleanSegments(CarbonTable table, List<PartitionSpec> partitio
    * @throws IOException
    */
   public static void deleteSegment(String tablePath, Segment segment,
-      List<PartitionSpec> partitionSpecs,
-      SegmentUpdateStatusManager updateStatusManager) throws Exception {
+      List<PartitionSpec> partitionSpecs, SegmentUpdateStatusManager updateStatusManager,
+      SegmentStatus segmentStatus, Boolean isPartitionTable, String timeStamp)
+      throws Exception {
     SegmentFileStore fileStore = new SegmentFileStore(tablePath, segment.getSegmentFileName());
     List<String> indexOrMergeFiles = fileStore.readIndexFiles(SegmentStatus.SUCCESS, true,
         FileFactory.getConfiguration());
+    List<String> filesToDelete = new ArrayList<>();
     Map<String, List<String>> indexFilesMap = fileStore.getIndexFilesMap();
     for (Map.Entry<String, List<String>> entry : indexFilesMap.entrySet()) {
-      FileFactory.deleteFile(entry.getKey());
+      // Move the file to the trash folder in case the segment status is insert in progress
+      if (segmentStatus == SegmentStatus.INSERT_IN_PROGRESS) {

Review comment:
   For the normal table flow, I have changed it to copy to the trash folder segment by segment. In the partition table case, however, I am copying to trash file by file, because copying by segment would require reading the segment file to find the carbondata and index files belonging to each segment, which would increase the IO time.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL
above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
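The by-segment versus by-file trade-off the reviewer describes can be sketched roughly as follows. This is a minimal, self-contained illustration using `java.nio.file` rather than CarbonData's actual `FileFactory`/trash utilities; the class and method names (`TrashSketch`, `moveToTrashByFile`, `moveToTrashBySegment`) are hypothetical and only show the shape of the two strategies, not the real implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the two trash-copy strategies discussed above.
public class TrashSketch {

  // Partition-table flow: move one file at a time into the trash folder,
  // under a timestamped subdirectory so a later cleanup can expire it.
  // Avoids re-reading the segment file to group files per segment.
  static Path moveToTrashByFile(Path file, Path trashRoot, String timeStamp)
      throws IOException {
    Path dest = trashRoot.resolve(timeStamp).resolve(file.getFileName());
    Files.createDirectories(dest.getParent());
    return Files.move(file, dest, StandardCopyOption.REPLACE_EXISTING);
  }

  // Normal-table flow: move the whole segment directory in one operation,
  // so all carbondata and index files of the segment travel together.
  static Path moveToTrashBySegment(Path segmentDir, Path trashRoot, String timeStamp)
      throws IOException {
    Path dest = trashRoot.resolve(timeStamp).resolve(segmentDir.getFileName());
    Files.createDirectories(dest.getParent());
    return Files.move(segmentDir, dest, StandardCopyOption.REPLACE_EXISTING);
  }

  public static void main(String[] args) throws IOException {
    Path root = Files.createTempDirectory("trash-demo");
    Path trash = root.resolve(".trash");

    // By-file: a stray index file from an in-progress partition load.
    Path indexFile = Files.createFile(root.resolve("part0.carbonindex"));
    Path movedFile = moveToTrashByFile(indexFile, trash, "1620000000000");
    System.out.println("by-file moved: " + Files.exists(movedFile));

    // By-segment: an entire segment directory with its data file.
    Path segDir = Files.createDirectories(root.resolve("Segment_0"));
    Files.createFile(segDir.resolve("data.carbondata"));
    Path movedSeg = moveToTrashBySegment(segDir, trash, "1620000000000");
    System.out.println("by-segment moved: " + Files.exists(movedSeg.resolve("data.carbondata")));
  }
}
```

The key point of the comment is preserved here: the by-segment variant needs to know the segment directory (or read the segment file to enumerate its contents), while the by-file variant can act on each index/data file as it is encountered.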