[GitHub] [incubator-hudi] umehrot2 commented on a change in pull request #1421: [HUDI-724] Parallelize getSmallFiles for partitions

2020-03-20 Thread GitBox
umehrot2 commented on a change in pull request #1421: [HUDI-724] Parallelize getSmallFiles for partitions
URL: https://github.com/apache/incubator-hudi/pull/1421#discussion_r395545539
 
 

 ##
 File path: hudi-client/src/main/java/org/apache/hudi/client/HoodieWriteClient.java
 ##
 @@ -486,11 +486,11 @@ private void saveWorkloadProfileMetadataToInflight(WorkloadProfile profile, Hood
     return updateIndexAndCommitIfNeeded(writeStatusRDD, hoodieTable, commitTime);
   }
 
-  private Partitioner getPartitioner(HoodieTable table, boolean isUpsert, WorkloadProfile profile) {
+  private Partitioner getPartitioner(HoodieTable table, boolean isUpsert, WorkloadProfile profile, JavaSparkContext jsc) {
 
 Review comment:
   Do we really need all this passing around of the `jsc` object? We can just pass it directly from within this function, right, since it's inherited.
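
To make the reviewer's point concrete, here is a minimal, hedged sketch (the class names are invented for illustration; the one assumption carried over from Hudi is that the write client inherits a protected `JavaSparkContext` field named `jsc` from its abstract base class):

```
import java.util.List;

import org.apache.spark.api.java.JavaSparkContext;

// Sketch only, not Hudi's actual classes. Assumes the base client keeps the
// Spark context in a protected field, so subclasses can read it directly
// instead of threading it through every private method signature.
abstract class AbstractClientSketch {
  protected final transient JavaSparkContext jsc;

  protected AbstractClientSketch(JavaSparkContext jsc) {
    this.jsc = jsc;
  }
}

class WriteClientSketch extends AbstractClientSketch {
  WriteClientSketch(JavaSparkContext jsc) {
    super(jsc);
  }

  // No JavaSparkContext parameter needed: the inherited `jsc` is in scope.
  int countRddPartitions(List<String> partitionPaths) {
    return jsc.parallelize(partitionPaths, partitionPaths.size()).getNumPartitions();
  }
}
```

The trade-off is explicitness: a parameter documents the dependency at every call site, while the inherited field keeps method signatures stable, which is what the reviewer is asking for here.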




[GitHub] [incubator-hudi] umehrot2 commented on a change in pull request #1421: [HUDI-724] Parallelize getSmallFiles for partitions

2020-03-20 Thread GitBox
umehrot2 commented on a change in pull request #1421: [HUDI-724] Parallelize getSmallFiles for partitions
URL: https://github.com/apache/incubator-hudi/pull/1421#discussion_r395553703
 
 

 ##
 File path: hudi-client/src/main/java/org/apache/hudi/table/HoodieCopyOnWriteTable.java
 ##
 @@ -602,18 +602,39 @@ private int addUpdateBucket(String fileIdHint) {
   return bucket;
 }
 
-private void assignInserts(WorkloadProfile profile) {
+private void assignInserts(WorkloadProfile profile, JavaSparkContext jsc) {
   // for new inserts, compute buckets depending on how many records we have for each partition
   Set<String> partitionPaths = profile.getPartitionPaths();
   long averageRecordSize =
       averageBytesPerRecord(metaClient.getActiveTimeline().getCommitTimeline().filterCompletedInstants(),
           config.getCopyOnWriteRecordSizeEstimate());
   LOG.info("AvgRecordSize => " + averageRecordSize);
+
+  HashMap<String, List<SmallFile>> partitionSmallFilesMap = new HashMap<>();
+  if (jsc != null && partitionPaths.size() > 1) {
+    // Parallelize the GetSmallFile operation by using RDDs
 
 Review comment:
   nit: probably remove this comment
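
Since this hunk is the heart of the PR, here is a hedged, self-contained sketch of the pattern it implements: fan the per-partition lookup out over a small RDD when there are several partitions, and fall back to a plain loop otherwise. `fetchSmallFiles` and the `String` payload are illustrative stand-ins, not Hudi's `getSmallFiles`/`SmallFile` API:

```
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class ParallelSmallFilesSketch {

  // Illustrative stand-in for getSmallFiles(partitionPath).
  static List<String> fetchSmallFiles(String partitionPath) {
    return Arrays.asList(partitionPath + "/small-0.parquet");
  }

  static Map<String, List<String>> smallFilesByPartition(JavaSparkContext jsc, List<String> partitionPaths) {
    Map<String, List<String>> result = new HashMap<>();
    if (jsc != null && partitionPaths.size() > 1) {
      // One RDD partition per table partition, so each lookup runs as its own task.
      List<Tuple2<String, List<String>>> tuples = jsc
          .parallelize(partitionPaths, partitionPaths.size())
          .map(p -> new Tuple2<>(p, fetchSmallFiles(p)))
          .collect();
      for (Tuple2<String, List<String>> t : tuples) {
        result.put(t._1, t._2);
      }
    } else {
      // Sequential fallback, mirroring the null/size guard in the diff above.
      for (String p : partitionPaths) {
        result.put(p, fetchSmallFiles(p));
      }
    }
    return result;
  }
}
```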




[GitHub] [incubator-hudi] umehrot2 commented on a change in pull request #1421: [HUDI-724] Parallelize getSmallFiles for partitions

2020-03-20 Thread GitBox
umehrot2 commented on a change in pull request #1421: [HUDI-724] Parallelize getSmallFiles for partitions
URL: https://github.com/apache/incubator-hudi/pull/1421#discussion_r395562779
 
 

 ##
 File path: hudi-client/src/main/java/org/apache/hudi/table/HoodieCopyOnWriteTable.java
 ##
 @@ -602,18 +602,39 @@ private int addUpdateBucket(String fileIdHint) {
   return bucket;
 }
 
-private void assignInserts(WorkloadProfile profile) {
+private void assignInserts(WorkloadProfile profile, JavaSparkContext jsc) {
   // for new inserts, compute buckets depending on how many records we have for each partition
   Set<String> partitionPaths = profile.getPartitionPaths();
   long averageRecordSize =
       averageBytesPerRecord(metaClient.getActiveTimeline().getCommitTimeline().filterCompletedInstants(),
           config.getCopyOnWriteRecordSizeEstimate());
   LOG.info("AvgRecordSize => " + averageRecordSize);
+
+  HashMap<String, List<SmallFile>> partitionSmallFilesMap = new HashMap<>();
+  if (jsc != null && partitionPaths.size() > 1) {
+    // Parallelize the GetSmallFile operation by using RDDs
+    List<String> partitionPathsList = new ArrayList<>(partitionPaths);
+    JavaRDD<String> partitionPathRdds = jsc.parallelize(partitionPathsList, partitionPathsList.size());
+    List<Tuple2<String, List<SmallFile>>> partitionSmallFileTuples =
+        partitionPathRdds.map(it -> new Tuple2<String, List<SmallFile>>(it, getSmallFiles(it))).collect();
+
+    for (Tuple2<String, List<SmallFile>> tuple : partitionSmallFileTuples) {
+      partitionSmallFilesMap.put(tuple._1, tuple._2);
+    }
 
 Review comment:
   You may want to refactor this to something like:
   ```
   partitionSmallFilesMap = partitionPathRdds.mapToPair((PairFunction<String, String, List<SmallFile>>)
       partitionPath -> new Tuple2<>(partitionPath, getSmallFiles(partitionPath))).collectAsMap();
   ```
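
A hedged, self-contained sketch of what the suggestion boils down to; `fetchSmallFiles`, the toy partition paths, and the `String` payload are hypothetical stand-ins for Hudi's `getSmallFiles`, real partition data, and `SmallFile`:

```
import java.util.Arrays;
import java.util.List;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class CollectAsMapSketch {

  // Illustrative stand-in for getSmallFiles(partitionPath).
  static List<String> fetchSmallFiles(String partitionPath) {
    return Arrays.asList(partitionPath + "/small-0.parquet", partitionPath + "/small-1.parquet");
  }

  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("collectAsMap-sketch").setMaster("local[2]");
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      List<String> partitionPaths = Arrays.asList("2020/03/19", "2020/03/20");

      // mapToPair + collectAsMap builds the map in one pass on the driver,
      // replacing the intermediate tuple list and the hand-written copy loop.
      Map<String, List<String>> partitionSmallFilesMap = jsc
          .parallelize(partitionPaths, partitionPaths.size())
          .mapToPair((PairFunction<String, String, List<String>>) p -> new Tuple2<>(p, fetchSmallFiles(p)))
          .collectAsMap();

      partitionSmallFilesMap.forEach((p, files) -> System.out.println(p + " -> " + files));
    }
  }
}
```

One detail worth noting: `collectAsMap()` returns a `java.util.Map`, so if the suggestion were adopted, the `HashMap<String, List<SmallFile>>` declaration in the quoted diff would presumably need to widen to `Map<String, List<SmallFile>>`.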

