[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r940167634 ## flink-runtime/src/main/java/org/apache/flink/runtime/state/TaskExecutorStateChangelogStoragesManager.java: ## @@ -135,28 +148,109 @@ public void releaseStateChangelogStorageForJob(@Nonnull JobID jobId) { } } +@Nullable +StateChangelogStorageView stateChangelogStorageViewForJob( +@Nonnull JobID jobID, +Configuration configuration, +ChangelogStateHandle changelogStateHandle) +throws IOException { +if (closed) { +throw new IllegalStateException( +"TaskExecutorStateChangelogStoragesManager is already closed and cannot " ++ "register a new StateChangelogStorageView."); +} + +if (!(changelogStateHandle instanceof ChangelogStateHandleStreamImpl)) { +return StateChangelogStorageLoader.loadFromStateHandle( +configuration, changelogStateHandle); +} Review Comment: Ok, I will remove that branch and leave a comment as a reminder. By the way, I'm also trying to add a test as you mentioned. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r940050889 ## flink-runtime/src/main/java/org/apache/flink/runtime/state/TaskExecutorStateChangelogStoragesManager.java: ## @@ -135,28 +148,109 @@ public void releaseStateChangelogStorageForJob(@Nonnull JobID jobId) { } } +@Nullable +StateChangelogStorageView stateChangelogStorageViewForJob( +@Nonnull JobID jobID, +Configuration configuration, +ChangelogStateHandle changelogStateHandle) +throws IOException { +if (closed) { +throw new IllegalStateException( +"TaskExecutorStateChangelogStoragesManager is already closed and cannot " ++ "register a new StateChangelogStorageView."); +} + +if (!(changelogStateHandle instanceof ChangelogStateHandleStreamImpl)) { +return StateChangelogStorageLoader.loadFromStateHandle( +configuration, changelogStateHandle); +} Review Comment: Sorry @rkhachatryan, I missed the specific compression flag bit in `StreamStateHandle`. If another DSTL implementation does not use the first bit as a compression flag, there will be problems. Is that possible? About switching the DSTL implementation, do you have any suggestions?
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r940047906 ## flink-runtime/src/main/java/org/apache/flink/runtime/state/TaskExecutorStateChangelogStoragesManager.java: ## @@ -135,28 +148,109 @@ public void releaseStateChangelogStorageForJob(@Nonnull JobID jobId) { } } +@Nullable +StateChangelogStorageView stateChangelogStorageViewForJob( +@Nonnull JobID jobID, +Configuration configuration, +ChangelogStateHandle changelogStateHandle) +throws IOException { +if (closed) { +throw new IllegalStateException( +"TaskExecutorStateChangelogStoragesManager is already closed and cannot " ++ "register a new StateChangelogStorageView."); +} + +if (!(changelogStateHandle instanceof ChangelogStateHandleStreamImpl)) { +return StateChangelogStorageLoader.loadFromStateHandle( +configuration, changelogStateHandle); +} Review Comment: Yes. But as long as the DSTL implementation uses `ChangelogStateHandleStreamImpl`, the cache can be used.
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r940035484 ## flink-runtime/src/main/java/org/apache/flink/runtime/state/TaskExecutorStateChangelogStoragesManager.java: ## @@ -135,28 +148,109 @@ public void releaseStateChangelogStorageForJob(@Nonnull JobID jobId) { } } +@Nullable +StateChangelogStorageView stateChangelogStorageViewForJob( +@Nonnull JobID jobID, +Configuration configuration, +ChangelogStateHandle changelogStateHandle) +throws IOException { +if (closed) { +throw new IllegalStateException( +"TaskExecutorStateChangelogStoragesManager is already closed and cannot " ++ "register a new StateChangelogStorageView."); +} + +if (!(changelogStateHandle instanceof ChangelogStateHandleStreamImpl)) { +return StateChangelogStorageLoader.loadFromStateHandle( +configuration, changelogStateHandle); +} Review Comment: Caching the `StateChangelogStorageView` only for `ChangelogStateHandleStreamImpl` is just to avoid the problem of switching DSTL implementations.
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r939942900 ## flink-runtime/src/main/java/org/apache/flink/runtime/state/TaskExecutorStateChangelogStoragesManager.java: ## @@ -135,28 +148,109 @@ public void releaseStateChangelogStorageForJob(@Nonnull JobID jobId) { } } +@Nullable +StateChangelogStorageView stateChangelogStorageViewForJob( +@Nonnull JobID jobID, +Configuration configuration, +ChangelogStateHandle changelogStateHandle) +throws IOException { +if (closed) { +throw new IllegalStateException( +"TaskExecutorStateChangelogStoragesManager is already closed and cannot " ++ "register a new StateChangelogStorageView."); +} + +if (!(changelogStateHandle instanceof ChangelogStateHandleStreamImpl)) { +return StateChangelogStorageLoader.loadFromStateHandle( +configuration, changelogStateHandle); +} Review Comment: Hi @rkhachatryan, I think there are two reasons why `ChangelogStreamHandleReaderWithCache.canBeCached()` can't be used here: 1. `flink-runtime` should not depend on `flink-dstl-dfs`. 2. The logic here is slightly different: `ChangelogStreamHandleReaderWithCache.canBeCached()` only determines whether a single `StreamStateHandle` can be cached, whereas this place determines whether a `changelogStateHandle` is a `ChangelogStateHandleStreamImpl`. In fact, a `ChangelogStateHandleStreamImpl` may consist of multiple `StreamStateHandle`s.
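To illustrate the distinction made in the comment above, here is a minimal sketch with simplified stand-in types (these are not the actual Flink classes): the DSTL-side `canBeCached()` answers a question about one underlying stream handle, while the runtime-side check asks about the type of the whole changelog handle, which may aggregate several stream handles.

```java
import java.util.Arrays;
import java.util.List;

// Simplified, hypothetical stand-ins for the Flink types discussed above.
interface StreamStateHandle {}

class FileStateHandle implements StreamStateHandle {
    final boolean onDistributedFs;
    FileStateHandle(boolean onDistributedFs) { this.onDistributedFs = onDistributedFs; }
}

interface ChangelogStateHandle {}

// One changelog handle may aggregate multiple underlying stream handles.
class ChangelogStateHandleStreamImpl implements ChangelogStateHandle {
    final List<StreamStateHandle> underlying;
    ChangelogStateHandleStreamImpl(StreamStateHandle... handles) {
        this.underlying = Arrays.asList(handles);
    }
}

class Checks {
    // DSTL-side check: can this single stream handle's file be cached locally?
    static boolean canBeCached(StreamStateHandle handle) {
        return handle instanceof FileStateHandle
                && ((FileStateHandle) handle).onDistributedFs;
    }

    // Runtime-side check: is the whole changelog handle of the stream-based kind?
    static boolean usesStreamImpl(ChangelogStateHandle handle) {
        return handle instanceof ChangelogStateHandleStreamImpl;
    }
}
```

The two predicates operate on different levels of the handle hierarchy, which is why one cannot substitute for the other.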
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r938501754 ## flink-runtime/src/main/java/org/apache/flink/runtime/state/TaskExecutorStateChangelogStoragesManager.java: ## @@ -135,28 +148,109 @@ public void releaseStateChangelogStorageForJob(@Nonnull JobID jobId) { } } +@Nullable +StateChangelogStorageView stateChangelogStorageViewForJob( +@Nonnull JobID jobID, +Configuration configuration, +ChangelogStateHandle changelogStateHandle) +throws IOException { +if (closed) { +throw new IllegalStateException( +"TaskExecutorStateChangelogStoragesManager is already closed and cannot " ++ "register a new StateChangelogStorageView."); +} + +if (!(changelogStateHandle instanceof ChangelogStateHandleStreamImpl)) { +return StateChangelogStorageLoader.loadFromStateHandle( +configuration, changelogStateHandle); +} + +synchronized (lock) { + Optional> storageView = +changelogStorageViewsByJobId.get(jobID); + +if (storageView == null) { +StateChangelogStorageView loaded = +StateChangelogStorageLoader.loadFromStateHandle( +configuration, changelogStateHandle); +changelogStorageViewsByJobId.put( +jobID, +Optional.of( + (StateChangelogStorageView) +loaded)); Review Comment: This should indeed always return a non-null object; I will amend it later.
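A sketch of the amended shape discussed above, assuming a `computeIfAbsent`-style cache (types are hypothetical simplifications, not the real `StateChangelogStorageView` per-`JobID` map from the PR): since the loader always yields a non-null view here, the `Optional` wrapper and the explicit null check can be dropped.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical simplification: jobId -> storage view. The view is created on
// first access and the same non-null instance is returned on every later call.
class StorageViewCache {
    private final Map<String, Object> viewsByJobId = new ConcurrentHashMap<>();

    Object storageViewForJob(String jobId) {
        // computeIfAbsent never stores null, so callers always get an object,
        // matching "this should always return a non-null object" above.
        return viewsByJobId.computeIfAbsent(jobId, id -> new Object());
    }
}
```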
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r938498533 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/FsStateChangelogStorageForRecovery.java: ## @@ -33,8 +34,29 @@ public class FsStateChangelogStorageForRecovery implements StateChangelogStorageView { +private final ChangelogStreamHandleReaderWithCache changelogHandleReaderWithCache; + +public FsStateChangelogStorageForRecovery() { +this.changelogHandleReaderWithCache = null; +} + +public FsStateChangelogStorageForRecovery(Configuration configuration) { +this.changelogHandleReaderWithCache = +new ChangelogStreamHandleReaderWithCache(configuration); +} + @Override public StateChangelogHandleReader createReader() { -return new StateChangelogHandleStreamHandleReader(new StateChangeFormat()); +return new StateChangelogHandleStreamHandleReader( +changelogHandleReaderWithCache != null +? new StateChangeIteratorImpl(changelogHandleReaderWithCache) +: new StateChangeIteratorImpl()); Review Comment: Thanks @rkhachatryan!
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r938431052 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogStreamHandleReaderWithCache.java: ## @@ -0,0 +1,203 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.changelog.fs; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ConfigurationUtils; +import org.apache.flink.core.fs.FSDataInputStream; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.fs.RefCountedFile; +import org.apache.flink.runtime.state.StreamStateHandle; +import org.apache.flink.runtime.state.filesystem.FileStateHandle; +import org.apache.flink.util.ExceptionUtils; +import org.apache.flink.util.IOUtils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedInputStream; +import java.io.DataInputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.nio.file.Files; +import java.util.Arrays; +import java.util.Iterator; +import java.util.Random; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrap; +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrapAndSeek; +import static org.apache.flink.changelog.fs.FsStateChangelogOptions.CACHE_IDLE_TIMEOUT; + +/** StateChangeIterator with local cache. 
*/ +class ChangelogStreamHandleReaderWithCache implements ChangelogStreamHandleReader { +private static final Logger LOG = + LoggerFactory.getLogger(ChangelogStreamHandleReaderWithCache.class); + +private static final String CACHE_FILE_SUB_DIR = "dstl-cache-file"; +private static final String CACHE_FILE_PREFIX = "dstl"; + +// reference count == 1 means only cache component reference the cache file +private static final int NO_USING_REF_COUNT = 1; + +private final File[] cacheDirectories; +private final AtomicInteger next; + +private final ConcurrentHashMap cache = new ConcurrentHashMap<>(); +private final ScheduledExecutorService cacheCleanScheduler; +private final long cacheIdleMillis; + +ChangelogStreamHandleReaderWithCache(Configuration config) { +this.cacheDirectories = +Arrays.stream(ConfigurationUtils.parseTempDirectories(config)) +.map(path -> new File(path, CACHE_FILE_SUB_DIR)) +.toArray(File[]::new); +this.next = new AtomicInteger(new Random().nextInt(this.cacheDirectories.length)); + +this.cacheCleanScheduler = +SchedulerFactory.create(1, "ChangelogCacheFileCleanScheduler", LOG); +this.cacheIdleMillis = config.get(CACHE_IDLE_TIMEOUT).toMillis(); +} + +@Override +public DataInputStream openAndSeek(StreamStateHandle handle, Long offset) throws IOException { +if (!canBeCached(handle)) { +return wrapAndSeek(handle.openInputStream(), offset); +} + +final FileStateHandle fileHandle = (FileStateHandle) handle; +final RefCountedFile refCountedFile = getRefCountedFile(fileHandle); + +FileInputStream fin = openAndSeek(refCountedFile, offset); + +return wrapStream(fileHandle.getFilePath(), fin); +} + +private boolean canBeCached(StreamStateHandle handle) throws IOException { +if (handle instanceof FileStateHandle) { +FileStateHandle fileHandle = (FileStateHandle) handle; +return fileHandle.getFilePath().getFileSystem().isDistributedFS(); +} else { +return false; +} +} + +private RefCountedFile getRefCountedFile(FileStateHandle fileHandle) { +return cache.compute( 
+fileHandle.getFilePath(), +(key, oldValue) -> { +if (oldValue == null) { +oldValue = downloadToCacheFile(fileHandle); +} +
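The `NO_USING_REF_COUNT == 1` convention in the snippet above (the cache itself holds one reference, so a count of 1 means no reader is currently using the file) can be sketched as follows; this is a hypothetical simplification, not Flink's actual `RefCountedFile`.

```java
// Hypothetical sketch of the reference-counting convention above: the cache
// component owns one reference, so refCount == NO_USING_REF_COUNT means no
// reader holds the cached file and it is eligible for idle-timeout cleanup.
class RefCountedEntry {
    static final int NO_USING_REF_COUNT = 1;

    private int refCount = NO_USING_REF_COUNT; // the cache's own reference

    synchronized void retain() { refCount++; }   // a reader opens the cached file
    synchronized void release() { refCount--; }  // a reader closes the cached file

    synchronized boolean idle() {
        return refCount == NO_USING_REF_COUNT;   // only the cache references it
    }
}
```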
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r938430639 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogStreamHandleReaderWithCache.java: ## @@ -0,0 +1,203 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.changelog.fs; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ConfigurationUtils; +import org.apache.flink.core.fs.FSDataInputStream; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.fs.RefCountedFile; +import org.apache.flink.runtime.state.StreamStateHandle; +import org.apache.flink.runtime.state.filesystem.FileStateHandle; +import org.apache.flink.util.ExceptionUtils; +import org.apache.flink.util.IOUtils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedInputStream; +import java.io.DataInputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.nio.file.Files; +import java.util.Arrays; +import java.util.Iterator; +import java.util.Random; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrap; +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrapAndSeek; +import static org.apache.flink.changelog.fs.FsStateChangelogOptions.CACHE_IDLE_TIMEOUT; + +/** StateChangeIterator with local cache. 
*/ +class ChangelogStreamHandleReaderWithCache implements ChangelogStreamHandleReader { +private static final Logger LOG = + LoggerFactory.getLogger(ChangelogStreamHandleReaderWithCache.class); + +private static final String CACHE_FILE_SUB_DIR = "dstl-cache-file"; +private static final String CACHE_FILE_PREFIX = "dstl"; + +// reference count == 1 means only cache component reference the cache file +private static final int NO_USING_REF_COUNT = 1; + +private final File[] cacheDirectories; +private final AtomicInteger next; + +private final ConcurrentHashMap cache = new ConcurrentHashMap<>(); +private final ScheduledExecutorService cacheCleanScheduler; +private final long cacheIdleMillis; + +ChangelogStreamHandleReaderWithCache(Configuration config) { +this.cacheDirectories = +Arrays.stream(ConfigurationUtils.parseTempDirectories(config)) +.map(path -> new File(path, CACHE_FILE_SUB_DIR)) +.toArray(File[]::new); +this.next = new AtomicInteger(new Random().nextInt(this.cacheDirectories.length)); + +this.cacheCleanScheduler = +SchedulerFactory.create(1, "ChangelogCacheFileCleanScheduler", LOG); +this.cacheIdleMillis = config.get(CACHE_IDLE_TIMEOUT).toMillis(); +} + +@Override +public DataInputStream openAndSeek(StreamStateHandle handle, Long offset) throws IOException { +if (!canBeCached(handle)) { +return wrapAndSeek(handle.openInputStream(), offset); +} + +final FileStateHandle fileHandle = (FileStateHandle) handle; +final RefCountedFile refCountedFile = getRefCountedFile(fileHandle); + +FileInputStream fin = openAndSeek(refCountedFile, offset); + +return wrapStream(fileHandle.getFilePath(), fin); +} + +private boolean canBeCached(StreamStateHandle handle) throws IOException { +if (handle instanceof FileStateHandle) { +FileStateHandle fileHandle = (FileStateHandle) handle; +return fileHandle.getFilePath().getFileSystem().isDistributedFS(); +} else { +return false; +} +} + +private RefCountedFile getRefCountedFile(FileStateHandle fileHandle) { +return cache.compute( 
+fileHandle.getFilePath(), +(key, oldValue) -> { +if (oldValue == null) { +oldValue = downloadToCacheFile(fileHandle); +} +
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r934329576 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java: ## @@ -0,0 +1,184 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.changelog.fs; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ConfigurationUtils; +import org.apache.flink.core.fs.FSDataInputStream; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.fs.RefCountedBufferingFileStream; +import org.apache.flink.core.fs.RefCountedFileWithStream; +import org.apache.flink.core.fs.RefCountedTmpFileCreator; +import org.apache.flink.runtime.state.StreamStateHandle; +import org.apache.flink.runtime.state.filesystem.FileStateHandle; +import org.apache.flink.util.ExceptionUtils; +import org.apache.flink.util.IOUtils; +import org.apache.flink.util.function.BiFunctionWithException; +import org.apache.flink.util.function.FunctionWithException; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedInputStream; +import java.io.DataInputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.util.Arrays; +import java.util.Iterator; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; + +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrap; +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrapAndSeek; +import static org.apache.flink.configuration.StateChangelogOptions.PERIODIC_MATERIALIZATION_INTERVAL; + +/** StateChangeIterator with local cache. 
*/ +class ChangelogHandleReaderWithCache +implements BiFunctionWithException, +AutoCloseable { +private static final Logger LOG = LoggerFactory.getLogger(ChangelogHandleReaderWithCache.class); + +private static final String CACHE_FILE_SUB_DIR = "dstl-cache-file"; + +private final FunctionWithException +cacheFileCreator; +private final ConcurrentMap cache = +new ConcurrentHashMap<>(); +private final ScheduledExecutorService cacheCleanScheduler; +private final long cacheIdleMillis; + +ChangelogHandleReaderWithCache(Configuration config) { +File[] tempFiles = +Arrays.stream(ConfigurationUtils.parseTempDirectories(config)) +.map(path -> new File(path, CACHE_FILE_SUB_DIR)) +.toArray(File[]::new); + +this.cacheCleanScheduler = +SchedulerFactory.create(1, "ChangelogCacheFileCleanScheduler", LOG); +// TODO: 2022/5/31 consider adding a new options for cache idle +this.cacheIdleMillis = config.get(PERIODIC_MATERIALIZATION_INTERVAL).toMillis(); Review Comment: Hi @rkhachatryan & @curcur, how about `dstl.dfs.download.local-cache.idle-timeout`?
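If the name proposed in the comment above were adopted, the user-facing setting would presumably look like this; the option name is only a suggestion in the thread and the value shown is an arbitrary example.

```yaml
# Hypothetical flink-conf.yaml entry, assuming the suggested option name is adopted:
dstl.dfs.download.local-cache.idle-timeout: 10 min
```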
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r934217853 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java: ## @@ -0,0 +1,184 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.changelog.fs; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ConfigurationUtils; +import org.apache.flink.core.fs.FSDataInputStream; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.fs.RefCountedBufferingFileStream; +import org.apache.flink.core.fs.RefCountedFileWithStream; +import org.apache.flink.core.fs.RefCountedTmpFileCreator; +import org.apache.flink.runtime.state.StreamStateHandle; +import org.apache.flink.runtime.state.filesystem.FileStateHandle; +import org.apache.flink.util.ExceptionUtils; +import org.apache.flink.util.IOUtils; +import org.apache.flink.util.function.BiFunctionWithException; +import org.apache.flink.util.function.FunctionWithException; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedInputStream; +import java.io.DataInputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.util.Arrays; +import java.util.Iterator; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; + +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrap; +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrapAndSeek; +import static org.apache.flink.configuration.StateChangelogOptions.PERIODIC_MATERIALIZATION_INTERVAL; + +/** StateChangeIterator with local cache. 
*/ +class ChangelogHandleReaderWithCache +implements BiFunctionWithException, +AutoCloseable { +private static final Logger LOG = LoggerFactory.getLogger(ChangelogHandleReaderWithCache.class); + +private static final String CACHE_FILE_SUB_DIR = "dstl-cache-file"; + +private final FunctionWithException +cacheFileCreator; +private final ConcurrentMap cache = +new ConcurrentHashMap<>(); +private final ScheduledExecutorService cacheCleanScheduler; +private final long cacheIdleMillis; + +ChangelogHandleReaderWithCache(Configuration config) { +File[] tempFiles = +Arrays.stream(ConfigurationUtils.parseTempDirectories(config)) +.map(path -> new File(path, CACHE_FILE_SUB_DIR)) +.toArray(File[]::new); + +this.cacheCleanScheduler = +SchedulerFactory.create(1, "ChangelogCacheFileCleanScheduler", LOG); +// TODO: 2022/5/31 consider adding a new options for cache idle +this.cacheIdleMillis = config.get(PERIODIC_MATERIALIZATION_INTERVAL).toMillis(); +this.cacheFileCreator = RefCountedTmpFileCreator.inDirectories(tempFiles); +} + +@Override +public DataInputStream apply(StreamStateHandle handle, Long offset) throws IOException { +if (!(handle instanceof FileStateHandle)) { +return wrapAndSeek(handle.openInputStream(), offset); +} + +FileStateHandle fileHandle = (FileStateHandle) handle; +DataInputStream input; + +if (fileHandle.getFilePath().getFileSystem().isDistributedFS()) { + +Path dfsPath = fileHandle.getFilePath(); + +final RefCountedBufferingFileStream refCountedFileStream = +cache.computeIfAbsent( +dfsPath, +key -> { +RefCountedBufferingFileStream fileStream = null; +FSDataInputStream handleInputStream = null; + +try { +fileStream = + RefCountedBufferingFileStream.openNew(cacheFileCreator); +handleInputStream = handle.openInputStream(); +
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932178917 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java ##

+this.cacheCleanScheduler =
+        SchedulerFactory.create(1, "ChangelogCacheFileCleanScheduler", LOG);
+// TODO: 2022/5/31 consider adding a new options for cache idle
+this.cacheIdleMillis = config.get(PERIODIC_MATERIALIZATION_INTERVAL).toMillis();

Review Comment: Thanks @rkhachatryan, how about `dstl.dfs.download.local-cache.idle-millis`, with the default value equal to `PERIODIC_MATERIALIZATION_INTERVAL`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
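The option discussed above did not exist yet at this point in the thread; the fallback semantics being proposed can be sketched as follows (the key `dstl.dfs.download.local-cache.idle-millis` is the commenter's proposal, and the plain `Map` stands in for Flink's `Configuration`):

```java
import java.time.Duration;
import java.util.Map;

/** Sketch of the proposed fallback: use the cache-idle option when set,
 *  otherwise default to the periodic-materialization interval. */
class CacheIdleResolver {
    // Proposed key from the review thread; not an existing Flink option here.
    static final String KEY = "dstl.dfs.download.local-cache.idle-millis";

    static Duration resolve(Map<String, String> conf, Duration materializationInterval) {
        String v = conf.get(KEY);
        return v == null ? materializationInterval : Duration.ofMillis(Long.parseLong(v));
    }
}
```

Tying the default to the materialization interval is a reasonable heuristic: once a period's changes are materialized, the corresponding changelog files are unlikely to be re-read.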
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932170532 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java ##
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932168266 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java ##
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932160253 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java ##
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932154648 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java ##
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932152120 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java ##

+@Override
+public DataInputStream apply(StreamStateHandle handle, Long offset) throws IOException {

Review Comment: Thanks for the guidance @rkhachatryan!
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932143434 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java ##

+ChangelogHandleReaderWithCache(Configuration config) {
+    File[] tempFiles =
+            Arrays.stream(ConfigurationUtils.parseTempDirectories(config))
+                    .map(path -> new File(path, CACHE_FILE_SUB_DIR))
+                    .toArray(File[]::new);

Review Comment: Thanks @rkhachatryan, do you mean spreading cache files into separate directories by job id? Since cache files are deleted when the TM exits and do not need to be managed by the user, I'm not sure this is necessary.
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r932130526 ## flink-runtime/src/main/java/org/apache/flink/runtime/state/changelog/StateChangelogStorageLoader.java ## @@ -51,6 +55,9 @@ public class StateChangelogStorageLoader {

private static final HashMap<String, StateChangelogStorageFactory> STATE_CHANGELOG_STORAGE_FACTORIES =
        new HashMap<>();

+private static final ConcurrentHashMap<JobID, StateChangelogStorageView<?>>
+        changelogStorageViewsByJobId = new ConcurrentHashMap<>();

Review Comment: Thanks @rkhachatryan. I agree with moving this map to `TaskExecutorStateChangelogStoragesManager` for consistency. The current implementation does not consider switching `StateChangelogStorage` implementations; I think we can only cache the `StateChangelogStorageView` for `ChangelogStateHandleStreamImpl`, WDYT?
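The guard described in the comment, caching a per-job view only for the stream-based handle type and falling through to a fresh load otherwise, can be sketched like this (all type names are stand-ins for the Flink classes mentioned above):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: cache reader views per job only for the handle type whose
 *  underlying files are shared, as proposed in the review thread. */
class ViewCacheSketch {
    interface Handle {}
    static final class StreamHandle implements Handle {}  // stands in for ChangelogStateHandleStreamImpl
    static final class OtherHandle implements Handle {}   // any other DSTL implementation

    final Map<String, Object> viewsByJobId = new ConcurrentHashMap<>();
    int loads = 0; // counts actual view creations, for observability

    Object viewForJob(String jobId, Handle handle) {
        if (!(handle instanceof StreamHandle)) {
            loads++;                                      // uncached: load per call
            return new Object();
        }
        return viewsByJobId.computeIfAbsent(jobId, id -> {
            loads++;                                      // cached: load once per job
            return new Object();
        });
    }
}
```

Scoping the cache to `ChangelogStateHandleStreamImpl` avoids holding views for implementations whose handles do not benefit from (or are unsafe under) sharing.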
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r931045321 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogHandleReaderWithCache.java: ## @@ -0,0 +1,184 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.changelog.fs; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ConfigurationUtils; +import org.apache.flink.core.fs.FSDataInputStream; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.fs.RefCountedBufferingFileStream; +import org.apache.flink.core.fs.RefCountedFileWithStream; +import org.apache.flink.core.fs.RefCountedTmpFileCreator; +import org.apache.flink.runtime.state.StreamStateHandle; +import org.apache.flink.runtime.state.filesystem.FileStateHandle; +import org.apache.flink.util.ExceptionUtils; +import org.apache.flink.util.IOUtils; +import org.apache.flink.util.function.BiFunctionWithException; +import org.apache.flink.util.function.FunctionWithException; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedInputStream; +import java.io.DataInputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.util.Arrays; +import java.util.Iterator; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; + +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrap; +import static org.apache.flink.changelog.fs.ChangelogStreamWrapper.wrapAndSeek; +import static org.apache.flink.configuration.StateChangelogOptions.PERIODIC_MATERIALIZATION_INTERVAL; + +/** StateChangeIterator with local cache. */ +class ChangelogHandleReaderWithCache +implements BiFunctionWithException, Review Comment: Thanks @rkhachatryan , having a specific interface makes sense, I will modify it later as suggested. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r930584225 ## flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogStateBackend.java: ## @@ -87,9 +87,14 @@ protected CheckpointableKeyedStateBackend restore( String subtaskName = env.getTaskInfo().getTaskNameWithSubtasks(); ExecutionConfig executionConfig = env.getExecutionConfig(); +env.getAsyncOperationsThreadPool(); + ChangelogStateFactory changelogStateFactory = new ChangelogStateFactory(); CheckpointableKeyedStateBackend keyedStateBackend = ChangelogBackendRestoreOperation.restore( +env.getJobID(), +env.getAsyncOperationsThreadPool(), +env.getTaskManagerInfo().getConfiguration(), Review Comment: Hi @fredia, thanks for the reply. I'm not suggesting passing `PERIODIC_MATERIALIZATION_INTERVAL` directly. `StateChangelogStorage` may have different implementations, and each has its own options; I think an implementation-specific configuration should not be exposed in the interface.
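The design point in the comment above (pass an opaque configuration through the shared interface and let each storage implementation read its own options) can be illustrated with a minimal sketch. The interface, class names, and the `dstl.dfs.cache-idle-ms` key below are invented for illustration; only `PERIODIC_MATERIALIZATION_INTERVAL` comes from the discussion.

```java
import java.util.Map;

// Hypothetical sketch: the shared interface only sees a generic config map,
// so no implementation-specific option (e.g. a DFS cache-idle timeout)
// leaks into the interface signature.
interface ChangelogStorageSketch {
    String describe(Map<String, String> config);
}

class DfsStorageSketch implements ChangelogStorageSketch {
    public String describe(Map<String, String> config) {
        // Only this implementation knows about its cache-idle option.
        return "dfs(cacheIdleMs="
                + config.getOrDefault("dstl.dfs.cache-idle-ms", "600000") + ")";
    }
}

class InMemoryStorageSketch implements ChangelogStorageSketch {
    public String describe(Map<String, String> config) {
        return "in-memory"; // ignores DFS-specific options entirely
    }
}
```

Switching implementations then requires no interface change, which is the argument against exposing any single option like `PERIODIC_MATERIALIZATION_INTERVAL` in the restore path.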
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r930567765 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/StateChangeIteratorWithCache.java: ## @@ -0,0 +1,367 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.changelog.fs; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ConfigurationUtils; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.memory.DataInputViewStreamWrapper; +import org.apache.flink.runtime.state.StreamStateHandle; +import org.apache.flink.runtime.state.changelog.StateChange; +import org.apache.flink.runtime.state.filesystem.FileStateHandle; +import org.apache.flink.util.CloseableIterator; +import org.apache.flink.util.IOUtils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedInputStream; +import java.io.DataInputStream; +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.nio.file.Files; +import java.util.UUID; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; + +import static org.apache.flink.configuration.StateChangelogOptions.PERIODIC_MATERIALIZATION_INTERVAL; + +/** StateChangeIterator with local cache. 
*/ +class StateChangeIteratorWithCache extends StateChangeIteratorImpl { +private static final Logger LOG = LoggerFactory.getLogger(StateChangeIteratorWithCache.class); + +private static final String CACHE_FILE_PREFIX = "dstl-"; + +private final File cacheDir; +private final ConcurrentMap cache = new ConcurrentHashMap<>(); +private final ScheduledExecutorService cacheCleanScheduler; +private final ExecutorService downloadExecutor; +private final long cacheIdleMillis; + +StateChangeIteratorWithCache(ExecutorService downloadExecutor, Configuration config) { +// TODO: 2022/5/31 add a new options for cache idle +long cacheIdleMillis = config.get(PERIODIC_MATERIALIZATION_INTERVAL).toMillis(); +File cacheDir = ConfigurationUtils.getRandomTempDirectory(config); + +this.cacheCleanScheduler = +SchedulerFactory.create(1, "ChangelogCacheFileCleanScheduler", LOG); +this.downloadExecutor = downloadExecutor; +this.cacheIdleMillis = cacheIdleMillis; +this.cacheDir = cacheDir; +} + +@Override +public CloseableIterator read(StreamStateHandle handle, long offset) +throws IOException { + +if (!(handle instanceof FileStateHandle)) { +return new StateChangeFormat().read(wrapAndSeek(handle.openInputStream(), offset)); +} + +FileStateHandle fileHandle = (FileStateHandle) handle; +DataInputStream input; + +if (fileHandle.getFilePath().getFileSystem().isDistributedFS()) { + +Path dfsPath = fileHandle.getFilePath(); +FileCache fileCache = +cache.computeIfAbsent( +dfsPath, +key -> { +FileCache fCache = new FileCache(cacheDir); +downloadExecutor.execute(() -> downloadFile(fileHandle, fCache)); +return fCache; +}); + +FileInputStream fin = fileCache.openAndSeek(offset); + +input = +new DataInputStream(new BufferedInputStream(fin)) { +@Override +public void close() throws IOException { +super.close(); +if (fileCache.getRefCount() == 0) { +
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r928365236 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/StateChangeIteratorWithCache.java: ##
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r928360002 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/StateChangeIteratorWithCache.java: ##
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r928359054 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/StateChangeIteratorWithCache.java: ##
[GitHub] [flink] zoltar9264 commented on a diff in pull request #20152: [FLINK-27155][changelog] Reduce multiple reads to the same Changelog …
zoltar9264 commented on code in PR #20152: URL: https://github.com/apache/flink/pull/20152#discussion_r928356369 ## flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/StateChangeIteratorWithCache.java: ## @@ -0,0 +1,367 @@ +/** StateChangeIterator with local cache. */ +class StateChangeIteratorWithCache extends StateChangeIteratorImpl { Review Comment: Thanks @rkhachatryan, I renamed the cache component to `ChangelogHandleReaderWithCache` and made it responsible only for caching. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.