cnauroth commented on code in PR #7732:
URL: https://github.com/apache/hadoop/pull/7732#discussion_r2159563178
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalFileSystemConfigKeys.java:
##########
@@ -42,5 +42,6 @@ public class LocalFileSystemConfigKeys extends CommonConfigurationKeys {
   public static final String LOCAL_FS_CLIENT_WRITE_PACKET_SIZE_KEY =
       "file.client-write-packet-size";
   public static final int LOCAL_FS_CLIENT_WRITE_PACKET_SIZE_DEFAULT = 64*1024;
+

Review Comment:
   Unnecessary change?



##########
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md:
##########
@@ -665,8 +665,48 @@ support through an explicit `hasCapability()` probe:
 Stream.hasCapability("in:readvectored")
 ```
 
-Given the HADOOP-18296 problem with `ChecksumFileSystem` and direct buffers, across all releases,
-it is best to avoid using this API in production with direct buffers.
+#### Buffer Slicing
+
+[HADOOP-18296](https://issues.apache.org/jira/browse/HADOOP-18296),
+_Memory fragmentation in ChecksumFileSystem Vectored IO implementation_
+highlights that `ChecksumFileSystem` (which the default implementation of `file://`
+subclasses), may return buffers which are sliced subsets of buffers allocated
+through the `allocate()` function passed in.
+
+This will happen during reads with and without range coalescing.
+
+Checksum verification may be disabled by setting the option
+`fs.file.checksum.verify` to true (Hadoop 3.4.2 and later).
+
+```xml
+<property>
+  <name>fs.file.checksum.verify</name>
+  <value>false</value>
+</property>
+```
+
+(As you would expect, disabling checksum verification means that errors
+reading data may not be detected during the read operation.
+Use with care in production.)
+
+Filesystem instances which spit buffersduring vector read operations

Review Comment:
   Typo: "...which split buffers during..."
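For reference, a minimal caller-side sketch of the probe-then-read pattern the documentation above describes (an editorial illustration, not code from the PR; the path and range values are placeholders, and heap buffers are chosen to sidestep the direct-buffer slicing concern):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import java.util.function.IntFunction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VectoredReadProbe {
  public static void main(String[] args) throws Exception {
    // LocalFileSystem subclasses ChecksumFileSystem, the case HADOOP-18296 covers.
    FileSystem fs = FileSystem.getLocal(new Configuration());
    try (FSDataInputStream in = fs.open(new Path("/tmp/example.dat"))) {
      // Explicit capability probe, as documented above.
      if (!in.hasCapability("in:readvectored")) {
        return; // fall back to classic positioned reads
      }
      List<FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 4096),
          FileRange.createFileRange(65536, 4096));
      // Heap allocation avoids handing direct buffers to a stream that may slice them.
      IntFunction<ByteBuffer> allocate = ByteBuffer::allocate;
      in.readVectored(ranges, allocate);
      for (FileRange r : ranges) {
        ByteBuffer data = r.getData().get(); // may be a slice, not the allocated buffer
        // ... consume data ...
      }
    }
  }
}
```

Returning such buffers to a pool is only safe when the stream does not slice them, which is the behavior the quoted section is documenting.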
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/TrackingByteBufferPool.java:
##########
@@ -0,0 +1,288 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.nio.ByteBuffer;
+import java.util.IdentityHashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.io.ByteBufferPool;
+
+import static java.lang.System.identityHashCode;
+import static java.util.Objects.requireNonNull;
+
+/**
+ * A wrapper {@link ByteBufferPool} implementation that tracks whether all allocated buffers
+ * are released.
+ * <p>
+ * It throws the related exception at {@link #close()} if any buffer remains un-released.
+ * It also clears the buffers at release so if they continued being used it'll generate errors.
+ * <p>
+ * To be used for testing only.
+ * <p>
+ * The stacktraces of the allocation are not stored by default because
+ * it can significantly decrease the unit test performance.
+ * Configuring this class to log at DEBUG will trigger their collection.
+ * @see ByteBufferAllocationStacktraceException
+ * <p>
+ * Adapted from Parquet class {@code org.apache.parquet.bytes.TrackingByteBufferAllocator}.
+ */
+public final class TrackingByteBufferPool implements ByteBufferPool, AutoCloseable {

Review Comment:
   Should we put visibility annotations on this? Alternatively, if it's only ever intended for use in tests, should it go into the test path instead of main?



##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/TrackingByteBufferPool.java:
##########
@@ -0,0 +1,288 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.nio.ByteBuffer;
+import java.util.IdentityHashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.io.ByteBufferPool;
+
+import static java.lang.System.identityHashCode;
+import static java.util.Objects.requireNonNull;
+
+/**
+ * A wrapper {@link ByteBufferPool} implementation that tracks whether all allocated buffers
+ * are released.
+ * <p>
+ * It throws the related exception at {@link #close()} if any buffer remains un-released.
+ * It also clears the buffers at release so if they continued being used it'll generate errors.
+ * <p>
+ * To be used for testing only.
+ * <p>
+ * The stacktraces of the allocation are not stored by default because
+ * it can significantly decrease the unit test performance.
+ * Configuring this class to log at DEBUG will trigger their collection.
+ * @see ByteBufferAllocationStacktraceException
+ * <p>
+ * Adapted from Parquet class {@code org.apache.parquet.bytes.TrackingByteBufferAllocator}.

Review Comment:
   I suggest mentioning which version of Parquet you adapted this from. In the future, that will help us understand if our copy needs to pick up additional bug fixes.
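For context, a hypothetical test-side usage of such a tracking pool. The `wrap()` factory name is an assumption carried over from the Parquet class the javadoc cites, not confirmed against the PR; `ElasticByteBufferPool` is just one real `ByteBufferPool` implementation to wrap.

```java
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.impl.TrackingByteBufferPool;
import org.apache.hadoop.io.ByteBufferPool;
import org.apache.hadoop.io.ElasticByteBufferPool;

public class TrackingPoolExample {
  public static void main(String[] args) throws Exception {
    ByteBufferPool inner = new ElasticByteBufferPool();
    // Per the javadoc, close() throws if any buffer handed out was never returned.
    try (TrackingByteBufferPool pool = TrackingByteBufferPool.wrap(inner)) { // wrap(): assumed entry point
      ByteBuffer buf = pool.getBuffer(false, 4096);
      // ... run the code under test, which must release every buffer it takes ...
      pool.putBuffer(buf); // omit this line and close() should report the leak
    }
  }
}
```

The appeal of the design is that a leak check becomes a one-line try-with-resources wrapper around any existing pool, which is also why the review asks whether it belongs on the test path.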
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java:
##########
@@ -293,15 +294,12 @@ public FileDescriptor getFileDescriptor() throws IOException {
   @Override
   public boolean hasCapability(String capability) {
-    // a bit inefficient, but intended to make it easier to add
-    // new capabilities.
     switch (capability.toLowerCase(Locale.ENGLISH)) {
     case StreamCapabilities.IOSTATISTICS:
     case StreamCapabilities.IOSTATISTICS_CONTEXT:
-    case StreamCapabilities.VECTOREDIO:
       return true;
     default:
-      return false;
+      return hasVectorIOCapability(capability);

Review Comment:
   Does this actually change the logic? It seems like `hasVectorIOCapability` checks `StreamCapabilities.VECTOREDIO`, just like the old code did.
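For context, the reviewer's reading corresponds to a helper shaped roughly like the following sketch (illustrative only; the PR's actual `hasVectorIOCapability` may live elsewhere and differ in detail):

```java
import java.util.Locale;

import org.apache.hadoop.fs.StreamCapabilities;

final class VectorIoCapabilitySketch {
  // Hypothetical helper: if the PR's version reduces to this, the new
  // default branch is equivalent to the removed explicit case.
  static boolean hasVectorIOCapability(String capability) {
    switch (capability.toLowerCase(Locale.ENGLISH)) {
    case StreamCapabilities.VECTOREDIO: // "in:readvectored"
      return true;
    default:
      return false;
    }
  }
}
```

If so, the change is behavior-preserving and only centralizes the capability check, which appears to be the reviewer's point.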