[GitHub] [hudi] n3nash commented on a change in pull request #1964: [HUDI-1191] Add incremental meta client API to query partitions changed

2020-08-24 Thread GitBox


n3nash commented on a change in pull request #1964:
URL: https://github.com/apache/hudi/pull/1964#discussion_r476026846



##
File path: 
hudi-common/src/main/java/org/apache/hudi/common/table/timeline/TimelineUtils.java
##
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.common.table.timeline;
+
+import org.apache.hudi.avro.model.HoodieCleanMetadata;
+import org.apache.hudi.common.model.HoodieCommitMetadata;
+import org.apache.hudi.exception.HoodieIOException;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+/**
+ * TimelineUtils provides a common way to query incremental meta-data changes for a hoodie table.
+ *
+ * This is useful in multiple places including:
+ * 1) HiveSync - this can be used to query partitions that changed since the previous sync.
+ * 2) Incremental reads - InputFormats can use this API to query partitions that changed.
+ */
+public class TimelineUtils {
+
+  /**
+   * Returns partitions that have new data strictly after commitTime.
+   * Does not include internal operations such as clean in the timeline.
+   */
+  public static List<String> getPartitionsWritten(HoodieTimeline timeline) {
+    HoodieTimeline timelineToSync = timeline.getCommitsAndCompactionTimeline();
+    return getPartitionsMutated(timelineToSync);
+  }
+
+  /**
+   * Returns partitions that have been modified, including by internal operations such as clean, in the passed timeline.
+   */
+  public static List<String> getPartitionsMutated(HoodieTimeline timeline) {

Review comment:
   Sure, getAffectedPartitions is fine @satishkotha; I think it was initially getWrittenPartitions or something along those lines.
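   For readers following along, here is a minimal sketch of how a caller such as HiveSync might use the API quoted above to find partitions changed since the last successful sync. The wrapper class and the `lastSyncTime` parameter are hypothetical, and the method would become `getAffectedPartitions` if the rename discussed above is adopted.

```java
import java.util.List;

import org.apache.hudi.common.table.HoodieTableMetaClient;
import org.apache.hudi.common.table.timeline.HoodieTimeline;
import org.apache.hudi.common.table.timeline.TimelineUtils;

public class IncrementalSyncSketch {

  // Hypothetical helper: find partitions with new data written strictly after
  // lastSyncTime, so only those partitions need to be re-registered.
  public static List<String> partitionsChangedSince(HoodieTableMetaClient metaClient, String lastSyncTime) {
    // Restrict the active timeline to instants after the last sync, then ask
    // TimelineUtils which partitions those instants wrote to.
    HoodieTimeline timelineSinceSync =
        metaClient.getActiveTimeline().findInstantsAfter(lastSyncTime, Integer.MAX_VALUE);
    return TimelineUtils.getPartitionsWritten(timelineSinceSync);
  }
}
```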





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hudi] n3nash commented on a change in pull request #1964: [HUDI-1191] Add incremental meta client API to query partitions changed

2020-08-24 Thread GitBox


n3nash commented on a change in pull request #1964:
URL: https://github.com/apache/hudi/pull/1964#discussion_r476026846



##
File path: 
hudi-common/src/main/java/org/apache/hudi/common/table/timeline/TimelineUtils.java
##
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.common.table.timeline;
+
+import org.apache.hudi.avro.model.HoodieCleanMetadata;
+import org.apache.hudi.common.model.HoodieCommitMetadata;
+import org.apache.hudi.exception.HoodieIOException;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+/**
+ * TimelineUtils provides a common way to query incremental meta-data changes for a hoodie table.
+ *
+ * This is useful in multiple places including:
+ * 1) HiveSync - this can be used to query partitions that changed since the previous sync.
+ * 2) Incremental reads - InputFormats can use this API to query partitions that changed.
+ */
+public class TimelineUtils {
+
+  /**
+   * Returns partitions that have new data strictly after commitTime.
+   * Does not include internal operations such as clean in the timeline.
+   */
+  public static List<String> getPartitionsWritten(HoodieTimeline timeline) {
+    HoodieTimeline timelineToSync = timeline.getCommitsAndCompactionTimeline();
+    return getPartitionsMutated(timelineToSync);
+  }
+
+  /**
+   * Returns partitions that have been modified, including by internal operations such as clean, in the passed timeline.
+   */
+  public static List<String> getPartitionsMutated(HoodieTimeline timeline) {

Review comment:
   Sure, getAffectedPartitions is fine, @satishkotha.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hudi] n3nash commented on a change in pull request #1964: [HUDI-1191] Add incremental meta client API to query partitions changed

2020-08-24 Thread GitBox


n3nash commented on a change in pull request #1964:
URL: https://github.com/apache/hudi/pull/1964#discussion_r475925314



##
File path: 
hudi-common/src/main/java/org/apache/hudi/common/table/timeline/TimelineUtils.java
##
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.common.table.timeline;
+
+import org.apache.hudi.avro.model.HoodieCleanMetadata;
+import org.apache.hudi.common.model.HoodieCommitMetadata;
+import org.apache.hudi.exception.HoodieIOException;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+/**
+ * TimelineUtils provides a common way to query incremental meta-data changes for a hoodie table.
+ *
+ * This is useful in multiple places including:
+ * 1) HiveSync - this can be used to query partitions that changed since the previous sync.
+ * 2) Incremental reads - InputFormats can use this API to query partitions that changed.
+ */
+public class TimelineUtils {
+
+  /**
+   * Returns partitions that have new data strictly after commitTime.
+   * Does not include internal operations such as clean in the timeline.
+   */
+  public static List<String> getPartitionsWritten(HoodieTimeline timeline) {
+    HoodieTimeline timelineToSync = timeline.getCommitsAndCompactionTimeline();
+    return getPartitionsMutated(timelineToSync);
+  }
+
+  /**
+   * Returns partitions that have been modified, including by internal operations such as clean, in the passed timeline.
+   */
+  public static List<String> getPartitionsMutated(HoodieTimeline timeline) {
+    return timeline.filterCompletedInstants().getInstants().flatMap(s -> {
+      switch (s.getAction()) {
+        case HoodieTimeline.COMMIT_ACTION:
+        case HoodieTimeline.DELTA_COMMIT_ACTION:
+          try {
+            HoodieCommitMetadata commitMetadata = HoodieCommitMetadata.fromBytes(timeline.getInstantDetails(s).get(), HoodieCommitMetadata.class);
+            return commitMetadata.getPartitionToWriteStats().keySet().stream();
+          } catch (IOException e) {
+            throw new HoodieIOException("Failed to get partitions written between " + timeline.firstInstant() + " " + timeline.lastInstant(), e);
+          }
+        case HoodieTimeline.CLEAN_ACTION:
+          try {
+            HoodieCleanMetadata cleanMetadata = TimelineMetadataUtils.deserializeHoodieCleanMetadata(timeline.getInstantDetails(s).get());
+            return cleanMetadata.getPartitionMetadata().keySet().stream();
+          } catch (IOException e) {
+            throw new HoodieIOException("Failed to get partitions cleaned between " + timeline.firstInstant() + " " + timeline.lastInstant(), e);
+          }
+        case HoodieTimeline.COMPACTION_ACTION:
+          // Compaction is not a completed instant, so no need to consider this action.
+        case HoodieTimeline.SAVEPOINT_ACTION:
+        case HoodieTimeline.ROLLBACK_ACTION:
+        case HoodieTimeline.RESTORE_ACTION:
+          return Stream.empty();

Review comment:
   Do you want to throw an exception here for now, so an unexpected action isn't treated incorrectly?
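   As a concrete illustration of that suggestion (not the PR's final code), the unhandled-action path could fail loudly instead of silently returning no partitions. The helper below is a hypothetical standalone sketch using only the action constants already shown in the diff.

```java
import java.util.stream.Stream;

import org.apache.hudi.common.table.timeline.HoodieInstant;
import org.apache.hudi.common.table.timeline.HoodieTimeline;

public class StrictActionHandlingSketch {

  // Hypothetical variant of the tail of the switch in getPartitionsMutated:
  // actions known not to touch partition data yield an empty stream, while an
  // unrecognized action throws instead of being silently ignored.
  static Stream<String> partitionsForNonDataAction(HoodieInstant instant) {
    switch (instant.getAction()) {
      case HoodieTimeline.SAVEPOINT_ACTION:
      case HoodieTimeline.ROLLBACK_ACTION:
      case HoodieTimeline.RESTORE_ACTION:
        return Stream.empty();
      default:
        throw new IllegalArgumentException(
            "Unexpected action " + instant.getAction() + " in timeline");
    }
  }
}
```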





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hudi] n3nash commented on a change in pull request #1964: [HUDI-1191] Add incremental meta client API to query partitions changed

2020-08-24 Thread GitBox


n3nash commented on a change in pull request #1964:
URL: https://github.com/apache/hudi/pull/1964#discussion_r475925115



##
File path: 
hudi-common/src/test/java/org/apache/hudi/common/table/TestTimelineUtils.java
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.common.table;
+
+import org.apache.hudi.avro.model.HoodieCleanMetadata;
+import org.apache.hudi.avro.model.HoodieCleanPartitionMetadata;
+import org.apache.hudi.common.model.HoodieCleaningPolicy;
+import org.apache.hudi.common.model.HoodieCommitMetadata;
+import org.apache.hudi.common.model.HoodieWriteStat;
+import org.apache.hudi.common.table.timeline.HoodieActiveTimeline;
+import org.apache.hudi.common.table.timeline.HoodieInstant;
+import org.apache.hudi.common.table.timeline.HoodieTimeline;
+import org.apache.hudi.common.table.timeline.TimelineMetadataUtils;
+import org.apache.hudi.common.table.timeline.TimelineUtils;
+import org.apache.hudi.common.testutils.HoodieCommonTestHarness;
+import org.apache.hudi.common.util.Option;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+
+public class TestTimelineUtils extends HoodieCommonTestHarness {
+
+  @BeforeEach
+  public void setUp() throws Exception {
+    initMetaClient();
+  }
+
+  @Test
+  public void testGetPartitions() throws IOException {
+    HoodieActiveTimeline activeTimeline = metaClient.getActiveTimeline();
+    HoodieTimeline activeCommitTimeline = activeTimeline.getCommitTimeline();
+    assertTrue(activeCommitTimeline.empty());
+
+    String olderPartition = "0"; // older partition that is modified by all cleans
+    for (int i = 1; i <= 5; i++) {
+      String ts = i + "";
+      HoodieInstant instant = new HoodieInstant(true, HoodieTimeline.COMMIT_ACTION, ts);
+      activeTimeline.createNewInstant(instant);
+      activeTimeline.saveAsComplete(instant, Option.of(getCommitMeta(basePath, ts, ts, 2)));
+
+      HoodieInstant cleanInstant = new HoodieInstant(true, HoodieTimeline.CLEAN_ACTION, ts);
+      activeTimeline.createNewInstant(cleanInstant);
+      activeTimeline.saveAsComplete(cleanInstant, getCleanMeta(olderPartition, ts));
+    }
+
+    metaClient.reloadActiveTimeline();
+
+    // verify modified partitions included cleaned data
+    List<String> partitions = TimelineUtils.getPartitionsMutated(metaClient.getActiveTimeline().findInstantsAfter("1", 10));
+    assertEquals(5, partitions.size());
+    assertEquals(partitions, Arrays.asList(new String[]{"0", "2", "3", "4", "5"}));
+
+    partitions = TimelineUtils.getPartitionsMutated(metaClient.getActiveTimeline().findInstantsInRange("1", "4"));
+    assertEquals(4, partitions.size());
+    assertEquals(partitions, Arrays.asList(new String[]{"0", "2", "3", "4"}));
+
+    // verify only commit actions
+    partitions = TimelineUtils.getPartitionsWritten(metaClient.getActiveTimeline().findInstantsAfter("1", 10));
+    assertEquals(4, partitions.size());
+    assertEquals(partitions, Arrays.asList(new String[]{"2", "3", "4", "5"}));
+
+    partitions = TimelineUtils.getPartitionsWritten(metaClient.getActiveTimeline().findInstantsInRange("1", "4"));
+    assertEquals(3, partitions.size());
+    assertEquals(partitions, Arrays.asList(new String[]{"2", "3", "4"}));
+  }
+
+  @Test
+  public void testGetPartitionsUnpartitioned() throws IOException {
+    HoodieActiveTimeline activeTimeline = metaClient.getActiveTimeline();
+    HoodieTimeline activeCommitTimeline = activeTimeline.getCommitTimeline();
+    assertTrue(activeCommitTimeline.empty());
+
+    String partitionPath = "";
+    for (int i = 1; i <= 5; i++) {
+      String ts = i + "";
+      HoodieInstant instant = new HoodieInstant(true, HoodieTimeline.COMMIT_ACTION, ts);
+      activeTimeline.createNewInstant(instant);
+

[GitHub] [hudi] n3nash commented on a change in pull request #1964: [HUDI-1191] Add incremental meta client API to query partitions changed

2020-08-24 Thread GitBox


n3nash commented on a change in pull request #1964:
URL: https://github.com/apache/hudi/pull/1964#discussion_r475892688



##
File path: 
hudi-common/src/main/java/org/apache/hudi/common/table/timeline/HoodieTimeline.java
##
@@ -232,6 +233,12 @@
*/
   Option<byte[]> getInstantDetails(HoodieInstant instant);
 
+  /**
+   * Returns partitions that have been modified in the timeline. This includes internal operations such as clean.
+   * Note that this only returns data for completed instants.
+   */
+  List<String> getPartitionsMutated();

Review comment:
   I don't think this makes sense in the timeline as of now. If you take a look at the timeline APIs, they only talk about the metadata that has changed. `getPartitionsMutated` conceptually provides what has changed in the underlying data, as opposed to what has changed in the timeline per se. Generally, all of this information should come from the timeline, but that requires a full redesign of the timeline. Should we add this API here -> https://github.com/apache/hudi/blob/master/hudi-client/src/main/java/org/apache/hudi/client/HoodieReadClient.java#L195 ? And you could wrap this functionality in a TimelineUtils?
   When we have a clearer design for the timeline, we can merge TimelineUtils back into the real timeline...
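   A hedged sketch of the shape being proposed, assuming a thin delegating method on the read client; the class and method names below (`ReadClientDelegationSketch`, `getPartitionsMutatedSince`) are illustrative and not part of the PR.

```java
import java.util.List;

import org.apache.hudi.common.table.HoodieTableMetaClient;
import org.apache.hudi.common.table.timeline.HoodieTimeline;
import org.apache.hudi.common.table.timeline.TimelineUtils;

public class ReadClientDelegationSketch {

  private final HoodieTableMetaClient metaClient;

  public ReadClientDelegationSketch(HoodieTableMetaClient metaClient) {
    this.metaClient = metaClient;
  }

  // Illustrative wrapper: the client-facing API lives on the read client,
  // while the timeline-walking logic stays in TimelineUtils until the
  // timeline redesign discussed above lands.
  public List<String> getPartitionsMutatedSince(String instantTime) {
    HoodieTimeline timelineAfter =
        metaClient.getActiveTimeline().findInstantsAfter(instantTime, Integer.MAX_VALUE);
    return TimelineUtils.getPartitionsMutated(timelineAfter);
  }
}
```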





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org