[GitHub] [hadoop] chenjunjiedada commented on issue #1242: HDDS-1553: Add metric for rack aware placement policy
chenjunjiedada commented on issue #1242: HDDS-1553: Add metric for rack aware placement policy URL: https://github.com/apache/hadoop/pull/1242#issuecomment-518965753 Closing this now; I will align with Sammi on the requirements first. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] chenjunjiedada closed pull request #1242: HDDS-1553: Add metric for rack aware placement policy
chenjunjiedada closed pull request #1242: HDDS-1553: Add metric for rack aware placement policy URL: https://github.com/apache/hadoop/pull/1242
[GitHub] [hadoop] bilaharith commented on a change in pull request #1207: Hadoop 16479
bilaharith commented on a change in pull request #1207: Hadoop 16479 URL: https://github.com/apache/hadoop/pull/1207#discussion_r311393454

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java

## @@ -122,4 +123,19 @@ public void testAbfsPathWithHost() throws IOException {
     assertEquals(pathWithHost2.getName(), fileStatus2.getPath().getName());
   }
-}
+  @Test
+  public void testLastModifiedTime() throws IOException {
+    AzureBlobFileSystem fs = this.getFileSystem();
+    Path testFilePath = new Path("childfile1.txt");
+    long createStartTime = System.currentTimeMillis();
+    fs.create(testFilePath);
+    long createEndTime = System.currentTimeMillis();
+    FileStatus fStat = fs.getFileStatus(testFilePath);
+    long lastModifiedTime = fStat.getModificationTime();
+    assertTrue((createStartTime / 1000) * 1000 - 1 < lastModifiedTime);
+    // Dividing and multiplying by 1000 to make the last 3 digits 0.
+    // It is observed that the modification time is always returned
+    // with the last 3 digits as 0.
+    assertTrue(createEndTime > lastModifiedTime);
+  }
+}

Review comment: Done
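The timing assertions under review can be distilled into a small standalone sketch (class and method names here are hypothetical, not part of the PR): the store is observed to return modification times truncated to whole seconds, so the lower bound of the creation window must be truncated the same way before comparing.

```java
// Hypothetical helper mirroring the check discussed above.
public class ModTimeCheck {

    /** Truncate a millisecond timestamp to whole-second precision. */
    static long truncateToSeconds(long millis) {
        return (millis / 1000) * 1000;
    }

    /**
     * True if lastModified lies within the creation window, with the lower
     * bound truncated to seconds (matching the observed server behaviour).
     */
    static boolean withinWindow(long createStartMillis, long createEndMillis,
                                long lastModifiedMillis) {
        return truncateToSeconds(createStartMillis) - 1 < lastModifiedMillis
            && lastModifiedMillis < createEndMillis;
    }

    public static void main(String[] args) {
        // 1_565_000_123_456 ms truncates to 1_565_000_123_000 ms.
        System.out.println(truncateToSeconds(1_565_000_123_456L));
        System.out.println(withinWindow(1_565_000_123_456L,
                                        1_565_000_125_000L,
                                        1_565_000_123_000L)); // true
    }
}
```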
[GitHub] [hadoop] bilaharith commented on a change in pull request #1207: Hadoop 16479
bilaharith commented on a change in pull request #1207: Hadoop 16479 URL: https://github.com/apache/hadoop/pull/1207#discussion_r311393415

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java

## @@ -122,4 +123,19 @@ public void testAbfsPathWithHost() throws IOException {
     assertEquals(pathWithHost2.getName(), fileStatus2.getPath().getName());
   }
-}
+  @Test
+  public void testLastModifiedTime() throws IOException {
+    AzureBlobFileSystem fs = this.getFileSystem();
+    Path testFilePath = new Path("childfile1.txt");
+    long createStartTime = System.currentTimeMillis();
+    fs.create(testFilePath);
+    long createEndTime = System.currentTimeMillis();
+    FileStatus fStat = fs.getFileStatus(testFilePath);
+    long lastModifiedTime = fStat.getModificationTime();
+    assertTrue((createStartTime / 1000) * 1000 - 1 < lastModifiedTime);
+    // Dividing and multiplying by 1000 to make the last 3 digits 0.
+    // It is observed that the modification time is always returned
+    // with the last 3 digits as 0.
+    assertTrue(createEndTime > lastModifiedTime);

Review comment: Done
[GitHub] [hadoop] bilaharith commented on a change in pull request #1207: Hadoop 16479
bilaharith commented on a change in pull request #1207: Hadoop 16479 URL: https://github.com/apache/hadoop/pull/1207#discussion_r311393361

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java

## @@ -122,4 +123,19 @@ public void testAbfsPathWithHost() throws IOException {
     assertEquals(pathWithHost2.getName(), fileStatus2.getPath().getName());
   }
-}
+  @Test
+  public void testLastModifiedTime() throws IOException {
+    AzureBlobFileSystem fs = this.getFileSystem();
+    Path testFilePath = new Path("childfile1.txt");
+    long createStartTime = System.currentTimeMillis();
+    fs.create(testFilePath);
+    long createEndTime = System.currentTimeMillis();
+    FileStatus fStat = fs.getFileStatus(testFilePath);
+    long lastModifiedTime = fStat.getModificationTime();
+    assertTrue((createStartTime / 1000) * 1000 - 1 < lastModifiedTime);

Review comment: Done
[GitHub] [hadoop] bilaharith commented on a change in pull request #1207: Hadoop 16479
bilaharith commented on a change in pull request #1207: Hadoop 16479 URL: https://github.com/apache/hadoop/pull/1207#discussion_r311393166

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java

## @@ -21,6 +21,7 @@
 import java.io.IOException;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FsStatus;

Review comment: Removed the import, as it is unused.
[GitHub] [hadoop] adoroszlai commented on issue #1236: HDDS-1918. hadoop-ozone-tools has integration tests run as unit
adoroszlai commented on issue #1236: HDDS-1918. hadoop-ozone-tools has integration tests run as unit URL: https://github.com/apache/hadoop/pull/1236#issuecomment-518958854 Thank you @bharatviswa504 for reviewing and committing this.
[GitHub] [hadoop] adoroszlai commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
adoroszlai commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518958656 Thank you @xiaoyuyao for the review and @bharatviswa504 for the review and commit.
[jira] [Created] (HADOOP-16495) Fix invalid metric types in PrometheusMetricsSink
Akira Ajisaka created HADOOP-16495:
--
Summary: Fix invalid metric types in PrometheusMetricsSink
Key: HADOOP-16495
URL: https://issues.apache.org/jira/browse/HADOOP-16495
Project: Hadoop Common
Issue Type: Bug
Components: metrics
Reporter: Akira Ajisaka

There are some scraping errors when using the '/prom' endpoint:
* invalid metric type "_young _generation counter"
* invalid metric type "_old _generation counter"
* invalid metric type "apache.hadoop.hdfs.server.datanode.fsdataset.impl._fs_dataset_impl_dfs_used gauge"

-- This message was sent by Atlassian JIRA (v7.6.14#76016)
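For context on why these names are rejected: Prometheus requires metric names to match `[a-zA-Z_:][a-zA-Z0-9_:]*`, so any embedded space or dot makes the exposition line unparsable. Below is a minimal illustrative sanitizer, not the actual PrometheusMetricsSink code; the class and method names are made up for the sketch.

```java
import java.util.Locale;
import java.util.regex.Pattern;

public class PrometheusName {
    // Prometheus metric-name rule: [a-zA-Z_:][a-zA-Z0-9_:]*
    private static final Pattern VALID =
        Pattern.compile("[a-zA-Z_:][a-zA-Z0-9_:]*");

    /** Convert a CamelCase Hadoop metric name to a legal Prometheus name. */
    static String sanitize(String name) {
        // Insert '_' at lower-to-upper case boundaries, then lowercase.
        String snake = name
            .replaceAll("(?<=[a-z0-9])(?=[A-Z])", "_")
            .toLowerCase(Locale.ROOT);
        // Replace anything still illegal (spaces, dots, dashes) with '_'.
        return snake.replaceAll("[^a-z0-9_:]", "_");
    }

    static boolean isValid(String name) {
        return VALID.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(sanitize("FsDatasetImpl.DfsUsed"));
        // A name with a space, like the ones reported, becomes legal:
        System.out.println(isValid(sanitize("_young _generation"))); // true
    }
}
```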
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA.
xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA. URL: https://github.com/apache/hadoop/pull/1202#discussion_r311378194

## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java

## @@ -759,6 +768,140 @@ public void testReadRequest() throws Exception {
     }
   }
 
+  @Test
+  public void testAddBucketAcl() throws Exception {
+    OzoneBucket ozoneBucket = setupBucket();
+    String remoteUserName = "remoteUser";
+    OzoneAcl defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+        READ, DEFAULT);
+
+    OzoneObj ozoneObj = OzoneObjInfo.Builder.newBuilder()
+        .setResType(OzoneObj.ResourceType.BUCKET)
+        .setStoreType(OzoneObj.StoreType.OZONE)
+        .setVolumeName(ozoneBucket.getVolumeName())
+        .setBucketName(ozoneBucket.getName()).build();
+
+    boolean addAcl = objectStore.addAcl(ozoneObj, defaultUserAcl);
+    Assert.assertTrue(addAcl);
+
+    ozoneBucket.addAcls(Collections.singletonList(defaultUserAcl));
+    List<OzoneAcl> acls = objectStore.getAcl(ozoneObj);
+
+    Assert.assertTrue(containsAcl(defaultUserAcl, acls));
+
+    // Add an already existing acl.
+    addAcl = objectStore.addAcl(ozoneObj, defaultUserAcl);
+    Assert.assertFalse(addAcl);
+
+    // Add an acl by changing acl type with same type, name and scope.
+    defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+        WRITE, DEFAULT);
+    addAcl = objectStore.addAcl(ozoneObj, defaultUserAcl);
+    Assert.assertTrue(addAcl);
+  }
+
+  @Test
+  public void testRemoveBucketAcl() throws Exception {
+    OzoneBucket ozoneBucket = setupBucket();
+    String remoteUserName = "remoteUser";
+    OzoneAcl defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+        READ, DEFAULT);
+
+    OzoneObj ozoneObj = OzoneObjInfo.Builder.newBuilder()
+        .setResType(OzoneObj.ResourceType.BUCKET)
+        .setStoreType(OzoneObj.StoreType.OZONE)
+        .setVolumeName(ozoneBucket.getVolumeName())
+        .setBucketName(ozoneBucket.getName()).build();
+
+    // By default, bucket creation adds some default acls in RpcClient.
+    List<OzoneAcl> acls = objectStore.getAcl(ozoneObj);
+
+    Assert.assertTrue(acls.size() > 0);
+
+    // Remove an existing acl.
+    boolean removeAcl = objectStore.removeAcl(ozoneObj, acls.get(0));

Review comment: Similarly, OzoneBucket#removeAcls does not handle the case when the input acl is a subset of the existing acls.
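The subset case the reviewer raises can be illustrated with plain bitmasks (a deliberate simplification; the real OzoneAcl carries a richer rights representation, and the constants below are hypothetical): removing an acl whose rights are a strict subset of an existing entry should clear just those bits rather than fail or drop the whole entry.

```java
public class AclBits {
    // Hypothetical rights bits, for illustration only.
    static final int READ = 1, WRITE = 2, DELETE = 4;

    /** True if every right in 'candidate' is also present in 'existing'. */
    static boolean isSubset(int candidate, int existing) {
        return (candidate & existing) == candidate;
    }

    /** Clear the requested rights, keeping whatever else was granted. */
    static int removeRights(int existing, int toRemove) {
        return existing & ~toRemove;
    }

    public static void main(String[] args) {
        int granted = READ | WRITE;
        // Removing READ from READ|WRITE leaves WRITE, not an empty entry.
        System.out.println(removeRights(granted, READ) == WRITE); // true
    }
}
```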
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA.
xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA. URL: https://github.com/apache/hadoop/pull/1202#discussion_r311377872

## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java

## @@ -759,6 +768,140 @@ public void testReadRequest() throws Exception {
     }
   }
 
+  @Test
+  public void testAddBucketAcl() throws Exception {
+    OzoneBucket ozoneBucket = setupBucket();
+    String remoteUserName = "remoteUser";
+    OzoneAcl defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+        READ, DEFAULT);
+
+    OzoneObj ozoneObj = OzoneObjInfo.Builder.newBuilder()
+        .setResType(OzoneObj.ResourceType.BUCKET)
+        .setStoreType(OzoneObj.StoreType.OZONE)
+        .setVolumeName(ozoneBucket.getVolumeName())
+        .setBucketName(ozoneBucket.getName()).build();
+
+    boolean addAcl = objectStore.addAcl(ozoneObj, defaultUserAcl);
+    Assert.assertTrue(addAcl);
+
+    ozoneBucket.addAcls(Collections.singletonList(defaultUserAcl));

Review comment: We may need to remove the OzoneBucket#addAcls API, as it is the legacy implementation based on BucketManager#SetProperty. When we switch it to use setAcls(), the existing acls are not preserved correctly. We could either rename this API to setAcls or remove it.
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA.
xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA. URL: https://github.com/apache/hadoop/pull/1202#discussion_r311373795

## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java

## @@ -124,6 +128,95 @@ public String getBucketName() {
     return acls;
   }
+
+  /**
+   * Add an ozoneAcl to the list of existing Acls.
+   * @param ozoneAcl
+   * @return true - if successfully added, false if not added or the acl
+   * already exists in the acl list.
+   */
+  public boolean addAcl(OzoneAcl ozoneAcl) {

Review comment: Can we abstract this add/remove logic into a common AclUtil class, since similar logic appears in both the bucket manager and the key manager? For example: `public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)` and `public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)`.
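The suggested utility could be sketched roughly as below. This is a hypothetical shape, not the PR's code: a stand-in `Acl` record replaces OzoneAcl so the snippet is self-contained, and the real signatures would take `List<OzoneAcl>`.

```java
import java.util.List;

public class AclUtil {
    // Stand-in for OzoneAcl; a record gives the value-equality the
    // contains/remove calls below rely on.
    record Acl(String type, String name, String right, String scope) {}

    /** Add newAcl unless an equal entry already exists. */
    public static boolean addAcl(List<Acl> existingAcls, Acl newAcl) {
        if (existingAcls.contains(newAcl)) {
            return false; // already present, nothing to do
        }
        return existingAcls.add(newAcl);
    }

    /** Remove the given acl; returns false if it was not present. */
    public static boolean removeAcl(List<Acl> existingAcls, Acl acl) {
        return existingAcls.remove(acl);
    }
}
```

Centralizing the logic this way would let the bucket manager and key manager share one tested implementation instead of drifting copies.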
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA.
xiaoyuyao commented on a change in pull request #1202: HDDS-1884. Support Bucket ACL operations for OM HA. URL: https://github.com/apache/hadoop/pull/1202#discussion_r311376618

## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/BiFunction.java

## @@ -0,0 +1,11 @@
+package org.apache.hadoop.ozone.util;
+
+/**
+ * Defines a functional interface having two inputs and returning a boolean
+ * as output.
+ */
+@FunctionalInterface
+public interface BiFunction {

Review comment: NIT: Should we name this ToBooleanBiFunction to follow the Java 8 convention? BiFunction is a well-known public interface defined in the JDK with a parametrized return type.
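The rename suggested above, sketched (the type parameter names and demo class are assumptions): following the JDK's `To{Type}BiFunction` convention, e.g. `java.util.function.ToIntBiFunction`, the return type is a primitive `boolean` while both inputs stay parametrized.

```java
@FunctionalInterface
interface ToBooleanBiFunction<L, R> {
    // Primitive-returning apply method, mirroring ToIntBiFunction#applyAsInt.
    boolean applyAsBoolean(L left, R right);
}

public class ToBooleanBiFunctionDemo {
    public static void main(String[] args) {
        // Example usage: a two-argument predicate over unrelated types.
        ToBooleanBiFunction<String, Integer> longerThan =
            (s, n) -> s.length() > n;
        System.out.println(longerThan.applyAsBoolean("bucket", 3)); // true
    }
}
```

The rename avoids shadowing `java.util.function.BiFunction`, which would otherwise force fully-qualified names wherever both types are in scope.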
[GitHub] [hadoop] ctubbsii commented on a change in pull request #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy
ctubbsii commented on a change in pull request #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy URL: https://github.com/apache/hadoop/pull/1243#discussion_r311375764

## File path: dev-support/bin/create-release

## @@ -641,7 +641,7 @@ function signartifacts
   for i in ${ARTIFACTS_DIR}/*; do
     ${GPG} --use-agent --armor --output "${i}.asc" --detach-sig "${i}"
-    ${GPG} --print-mds "${i}" > "${i}.mds"
+    shasum -a 512 "${i}" > "${i}.sha512"

Review comment: If you add `--tag` to this command, the file format is a bit more self-descriptive. The `--tag` option uses BSD-style checksums rather than GNU coreutils style, and includes the algorithm name in the file contents explicitly. It also avoids the confusing two-space/space-star (text vs. binary comparison) delimiter of GNU style by always doing binary comparisons.
```suggestion
shasum -a 512 --tag "${i}" > "${i}.sha512"
```
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311372538

## File path: hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java

## @@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import org.apache.hadoop.ozone.recon.ReconUtils;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.Mock;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+import static org.powermock.api.mockito.PowerMockito.mock;
+import static org.powermock.api.mockito.PowerMockito.when;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
+/**
+ * Test for File size count service.
+ */
+@RunWith(PowerMockRunner.class)
+@PowerMockIgnore({"javax.management.*", "javax.net.ssl.*"})
+@PrepareForTest(ReconUtils.class)
+public class TestUtilizationService {
+  private UtilizationService utilizationService;
+  @Mock private FileCountBySizeDao fileCountBySizeDao;
+  private List<FileCountBySize> resultList = new ArrayList<>();
+  private int oneKb = 1024;
+  private int maxBinSize = 41;
+
+  public void setUpResultList() {
+    for (int i = 0; i < 41; i++) {
+      resultList.add(new FileCountBySize((long) Math.pow(2, (10 + i)), (long) i));
+    }
+  }
+
+  @Test
+  public void testGetFileCounts() throws IOException {
+    setUpResultList();
+
+    utilizationService = mock(UtilizationService.class);
+    when(utilizationService.getFileCounts()).thenCallRealMethod();
+    when(utilizationService.getDao()).thenReturn(fileCountBySizeDao);
+    when(fileCountBySizeDao.findAll()).thenReturn(resultList);
+
+    utilizationService.getFileCounts();
+    verify(utilizationService, times(1)).getFileCounts();
+    verify(fileCountBySizeDao, times(1)).findAll();
+
+    assertEquals(41, resultList.size());
+    long fileSize = 4096L;
+    int index = findIndex(fileSize);
+    long count = resultList.get(index).getCount();
+    assertEquals(index, count);
+
+    fileSize = 1125899906842624L;
+    index = findIndex(fileSize);
+    if (index == Integer.MIN_VALUE) {
+      throw new IOException("File Size larger than permissible file size");
+    }
+
+    fileSize = 1025L;
+    index = findIndex(fileSize);
+    count = resultList.get(index).getCount();
+    assertEquals(index, count);
+
+    fileSize = 25L;
+    index = findIndex(fileSize);
+    count = resultList.get(index).getCount();
+    assertEquals(index, count);
+  }
+
+  public int findIndex(long dataSize) {
+    int logValue = (int) Math.ceil(Math.log(dataSize) / Math.log(2));
+    if (logValue < 10) {
+      return 0;
+    } else {
+      int index = logValue - 10;
+      if (index > maxBinSize) {
+        return Integer.MIN_VALUE;

Review comment: This needs to be updated.
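The findIndex helper quoted above mirrors the task's binning scheme. Pulled out as a self-contained sketch (class name hypothetical, constants copied from the quoted test), the arithmetic is: take ceil(log2(size)), map everything under 2^10 (1 KB) to bin 0, and return a sentinel for sizes beyond the last bin.

```java
public class FileSizeBins {
    static final int MAX_BIN_SIZE = 41; // bins up to ~1 PB, as in the test

    /** Bin index for a file size: bin 0 holds sizes up to 1 KB (2^10). */
    static int findIndex(long dataSize) {
        int logValue = (int) Math.ceil(Math.log(dataSize) / Math.log(2));
        if (logValue < 10) {
            return 0;
        }
        int index = logValue - 10;
        // Sentinel for sizes beyond the largest bin -- the spot the
        // reviewer flags as needing an update.
        return index > MAX_BIN_SIZE ? Integer.MIN_VALUE : index;
    }

    public static void main(String[] args) {
        System.out.println(findIndex(25L));   // below 1 KB -> bin 0
        System.out.println(findIndex(1025L)); // just over 1 KB -> bin 1
        System.out.println(findIndex(4097L)); // just over 4 KB -> bin 3
    }
}
```

Note that exact powers of two sit at bin boundaries, where floating-point log can round either way; a production version would likely use bit arithmetic (e.g. `Long.numberOfLeadingZeros`) rather than `Math.log`.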
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311368897 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -0,0 +1,241 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.ozone.recon.tasks; + +import com.google.inject.Inject; +import org.apache.commons.lang3.tuple.ImmutablePair; +import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.ozone.om.OMMetadataManager; +import org.apache.hadoop.ozone.om.helpers.OmKeyInfo; +import org.apache.hadoop.utils.db.Table; +import org.apache.hadoop.utils.db.TableIterator; +import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao; +import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize; +import org.jooq.Configuration; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Iterator; +import java.util.List; + +/** + * Class to iterate over the OM DB and store the counts of existing/new + * files binned into ranges (1KB, 2Kb..,4MB,.., 1TB,..1PB) to the Recon + * fileSize DB. + */ +public class FileSizeCountTask extends ReconDBUpdateTask { + private static final Logger LOG = + LoggerFactory.getLogger(FileSizeCountTask.class); + + private int maxBinSize = -1; + private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB + private long[] upperBoundCount; + private long oneKb = 1024L; + private Collection tables = new ArrayList<>(); + private FileCountBySizeDao fileCountBySizeDao; + + @Inject + public FileSizeCountTask(OMMetadataManager omMetadataManager, + Configuration sqlConfiguration) { +super("FileSizeCountTask"); +try { + tables.add(omMetadataManager.getKeyTable().getName()); + fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration); +} catch (Exception e) { + LOG.error("Unable to fetch Key Table updates ", e); +} +upperBoundCount = new long[getMaxBinSize()]; + } + + protected long getOneKB() { +return oneKb; + } + + protected long getMaxFileSizeUpperBound() { +return maxFileSizeUpperBound; + } + + protected int getMaxBinSize() { +if (maxBinSize == -1) { + // extra bin to add files > 1PB. 
+ maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1; +} +return maxBinSize; + } + + /** + * Read the Keys from OM snapshot DB and calculate the upper bound of + * File Size it belongs to. + * + * @param omMetadataManager OM Metadata instance. + * @return Pair + */ + @Override + public Pair reprocess(OMMetadataManager omMetadataManager) { +LOG.info("Starting a 'reprocess' run of FileSizeCountTask."); +Table omKeyInfoTable = omMetadataManager.getKeyTable(); +try (TableIterator> +keyIter = omKeyInfoTable.iterator()) { + while (keyIter.hasNext()) { +Table.KeyValue kv = keyIter.next(); +countFileSize(kv.getValue()); + } +} catch (IOException ioEx) { + LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx); + return new ImmutablePair<>(getTaskName(), false); +} +populateFileCountBySizeDB(); + +LOG.info("Completed a 'reprocess' run of FileSizeCountTask."); +return new ImmutablePair<>(getTaskName(), true); + } + + @Override + protected Collection getTaskTables() { +return tables; + } + + void updateCountFromDB() { +// Read - Write operations to DB are in ascending order +// of file size upper bounds. +List resultSet = fileCountBySizeDao.findAll(); +int index = 0; +if (resultSet != null) { + for (FileCountBySize row : resultSet) { +upperBoundCount[index] = row.getCount(); +index++; + } +} + } + + /** + * Read the Keys from update events and update the count of files + * pertaining to a certain upper bound. + * + * @param events Update events - PUT/DELETE. + * @return Pair + */ + @Override + Pair process(OMUpdateEventBatch events) { +
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311371119 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -0,0 +1,241 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.ozone.recon.tasks; + +import com.google.inject.Inject; +import org.apache.commons.lang3.tuple.ImmutablePair; +import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.ozone.om.OMMetadataManager; +import org.apache.hadoop.ozone.om.helpers.OmKeyInfo; +import org.apache.hadoop.utils.db.Table; +import org.apache.hadoop.utils.db.TableIterator; +import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao; +import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize; +import org.jooq.Configuration; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Iterator; +import java.util.List; + +/** + * Class to iterate over the OM DB and store the counts of existing/new + * files binned into ranges (1KB, 2Kb..,4MB,.., 1TB,..1PB) to the Recon + * fileSize DB. + */ +public class FileSizeCountTask extends ReconDBUpdateTask { + private static final Logger LOG = + LoggerFactory.getLogger(FileSizeCountTask.class); + + private int maxBinSize = -1; + private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB + private long[] upperBoundCount; + private long oneKb = 1024L; + private Collection tables = new ArrayList<>(); + private FileCountBySizeDao fileCountBySizeDao; + + @Inject + public FileSizeCountTask(OMMetadataManager omMetadataManager, + Configuration sqlConfiguration) { +super("FileSizeCountTask"); +try { + tables.add(omMetadataManager.getKeyTable().getName()); + fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration); +} catch (Exception e) { + LOG.error("Unable to fetch Key Table updates ", e); +} +upperBoundCount = new long[getMaxBinSize()]; + } + + protected long getOneKB() { +return oneKb; + } + + protected long getMaxFileSizeUpperBound() { +return maxFileSizeUpperBound; + } + + protected int getMaxBinSize() { +if (maxBinSize == -1) { + // extra bin to add files > 1PB. 
+ maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1; +} +return maxBinSize; + } + + /** + * Read the Keys from OM snapshot DB and calculate the upper bound of + * File Size it belongs to. + * + * @param omMetadataManager OM Metadata instance. + * @return Pair + */ + @Override + public Pair reprocess(OMMetadataManager omMetadataManager) { +LOG.info("Starting a 'reprocess' run of FileSizeCountTask."); +Table omKeyInfoTable = omMetadataManager.getKeyTable(); +try (TableIterator> +keyIter = omKeyInfoTable.iterator()) { + while (keyIter.hasNext()) { +Table.KeyValue kv = keyIter.next(); +countFileSize(kv.getValue()); + } +} catch (IOException ioEx) { + LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx); + return new ImmutablePair<>(getTaskName(), false); +} +populateFileCountBySizeDB(); + +LOG.info("Completed a 'reprocess' run of FileSizeCountTask."); +return new ImmutablePair<>(getTaskName(), true); + } + + @Override + protected Collection getTaskTables() { +return tables; + } + + void updateCountFromDB() { +// Read - Write operations to DB are in ascending order +// of file size upper bounds. +List resultSet = fileCountBySizeDao.findAll(); +int index = 0; +if (resultSet != null) { + for (FileCountBySize row : resultSet) { +upperBoundCount[index] = row.getCount(); +index++; + } +} + } + + /** + * Read the Keys from update events and update the count of files + * pertaining to a certain upper bound. + * + * @param events Update events - PUT/DELETE. + * @return Pair + */ + @Override + Pair process(OMUpdateEventBatch events) { +
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311368484 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -0,0 +1,241 @@
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311366032 ## File path: hadoop-ozone/ozone-recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/UtilizationSchemaDefinition.java ## @@ -65,5 +69,12 @@ void createClusterGrowthTable(Connection conn) { .execute(); } - + void createFileSizeCount(Connection conn) { +DSL.using(conn).createTableIfNotExists(FILE_COUNT_BY_SIZE_TABLE_NAME) +.column("file_size_kb", SQLDataType.BIGINT) Review comment: Aren't we storing file size in bytes? Can we change this to just file_size? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311369498 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -0,0 +1,241 @@
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311371872 ## File path: hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java ## @@ -0,0 +1,108 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.ozone.recon.api; + +import org.apache.hadoop.ozone.recon.ReconUtils; +import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao; +import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.mockito.Mock; +import org.powermock.core.classloader.annotations.PowerMockIgnore; +import org.powermock.core.classloader.annotations.PrepareForTest; +import org.powermock.modules.junit4.PowerMockRunner; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; + +import static org.junit.Assert.assertEquals; +import static org.powermock.api.mockito.PowerMockito.mock; +import static org.powermock.api.mockito.PowerMockito.when; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +/** + * Test for File size count service. + */ +@RunWith(PowerMockRunner.class) +@PowerMockIgnore({"javax.management.*", "javax.net.ssl.*"}) +@PrepareForTest(ReconUtils.class) +public class TestUtilizationService { + private UtilizationService utilizationService; + @Mock private FileCountBySizeDao fileCountBySizeDao; + private List resultList = new ArrayList<>(); + private int oneKb = 1024; + private int maxBinSize = 41; + + public void setUpResultList() { +for(int i = 0; i < 41; i++){ + resultList.add(new FileCountBySize((long) Math.pow(2, (10+i)), (long) i)); +} + } + + @Test + public void testGetFileCounts() throws IOException { +setUpResultList(); + +utilizationService = mock(UtilizationService.class); +when(utilizationService.getFileCounts()).thenCallRealMethod(); +when(utilizationService.getDao()).thenReturn(fileCountBySizeDao); +when(fileCountBySizeDao.findAll()).thenReturn(resultList); + +utilizationService.getFileCounts(); +verify(utilizationService, times(1)).getFileCounts(); +verify(fileCountBySizeDao, times(1)).findAll(); + +assertEquals(41, resultList.size()); +long fileSize = 4096L; +int index = 
findIndex(fileSize); +long count = resultList.get(index).getCount(); +assertEquals(index, count); + +fileSize = 1125899906842624L; +index = findIndex(fileSize); +if (index == Integer.MIN_VALUE) { Review comment: This is not required
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311374276 ## File path: hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestFileSizeCountTask.java ## @@ -0,0 +1,129 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.ozone.recon.tasks; + +import org.apache.hadoop.ozone.om.OMMetadataManager; +import org.apache.hadoop.ozone.om.OmMetadataManagerImpl; +import org.apache.hadoop.ozone.om.helpers.OmKeyInfo; +import org.apache.hadoop.utils.db.TypedTable; +import org.junit.Test; + +import org.junit.runner.RunWith; +import org.powermock.core.classloader.annotations.PowerMockIgnore; +import org.powermock.core.classloader.annotations.PrepareForTest; +import org.powermock.modules.junit4.PowerMockRunner; + +import java.io.IOException; + +import static org.junit.Assert.assertEquals; + +import static org.mockito.ArgumentMatchers.anyLong; +import static org.mockito.BDDMockito.given; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.times; +import static org.powermock.api.mockito.PowerMockito.mock; +import static org.powermock.api.mockito.PowerMockito.when; + +/** + * Unit test for Container Key mapper task. + */ +@RunWith(PowerMockRunner.class) +@PowerMockIgnore({"javax.management.*", "javax.net.ssl.*"}) +@PrepareForTest(OmKeyInfo.class) + +public class TestFileSizeCountTask { + @Test + public void testCalculateBinIndex() { +FileSizeCountTask fileSizeCountTask = mock(FileSizeCountTask.class); + +when(fileSizeCountTask.getMaxFileSizeUpperBound()). 
+thenReturn(1125899906842624L);// 1 PB +when(fileSizeCountTask.getOneKB()).thenReturn(1024L); +when(fileSizeCountTask.getMaxBinSize()).thenReturn(42); +when(fileSizeCountTask.calculateBinIndex(anyLong())).thenCallRealMethod(); + +long fileSize = 1024L;// 1 KB +int binIndex = fileSizeCountTask.calculateBinIndex(fileSize); +assertEquals(1, binIndex); + +fileSize = 1023L; +binIndex = fileSizeCountTask.calculateBinIndex(fileSize); +assertEquals(0, binIndex); + +fileSize = 562949953421312L; // 512 TB +binIndex = fileSizeCountTask.calculateBinIndex(fileSize); +assertEquals(40, binIndex); + +fileSize = 562949953421313L; // (512 TB + 1B) +binIndex = fileSizeCountTask.calculateBinIndex(fileSize); +assertEquals(40, binIndex); + +fileSize = 562949953421311L; // (512 TB - 1B) +binIndex = fileSizeCountTask.calculateBinIndex(fileSize); +assertEquals(39, binIndex); + +fileSize = 1125899906842624L; // 1 PB - last (extra) bin +binIndex = fileSizeCountTask.calculateBinIndex(fileSize); +assertEquals(41, binIndex); + +fileSize = 10L; +binIndex = fileSizeCountTask.calculateBinIndex(fileSize); +assertEquals(7, binIndex); + +fileSize = 1125899906842623L; Review comment: I suppose this is 1 PB - 1B. Can you add a comment for this one and the previous one as well?
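The assertions quoted in this review pin the binning scheme down fairly tightly: sizes below 1 KB fall into bin 0, each power-of-two step above 1 KB advances one bin (1 KB -> 1, 512 TB -> 40), and 1 PB and larger goes into the last (extra) bin, 41. The patch's actual `calculateBinIndex` is not visible in this excerpt, so the following is only a sketch that reproduces those boundary expectations; the class name and the floor-log2-via-`numberOfLeadingZeros` trick are mine, not the patch's.

```java
public class BinIndexSketch {
  private static final long ONE_KB = 1024L;
  private static final long MAX_FILE_SIZE_UPPER_BOUND = 1125899906842624L; // 1 PB
  private static final int MAX_BIN_SIZE = 42; // bins 0..41; the last bin is the overflow bin

  static int calculateBinIndex(long fileSize) {
    if (fileSize >= MAX_FILE_SIZE_UPPER_BOUND) {
      return MAX_BIN_SIZE - 1; // extra bin for files >= 1 PB
    }
    if (fileSize < ONE_KB) {
      return 0; // everything below 1 KB lands in the first bin
    }
    // floor(log2(fileSize / 1 KB)) + 1: 1 KB -> 1, 2 KB -> 2, ..., 512 TB -> 40
    return (63 - Long.numberOfLeadingZeros(fileSize / ONE_KB)) + 1;
  }

  public static void main(String[] args) {
    System.out.println(calculateBinIndex(1023L));             // 0
    System.out.println(calculateBinIndex(1024L));             // 1 KB -> 1
    System.out.println(calculateBinIndex(562949953421312L));  // 512 TB -> 40
    System.out.println(calculateBinIndex(1125899906842624L)); // 1 PB -> 41
  }
}
```

Note that the quoted `fileSize = 10L` case expecting bin 7 does not fit a pure power-of-two scheme, so the real implementation likely differs in some detail there.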
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311369007 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -0,0 +1,241 @@
[GitHub] [hadoop] hadoop-yetus commented on issue #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy
hadoop-yetus commented on issue #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy URL: https://github.com/apache/hadoop/pull/1243#issuecomment-518941646 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 74 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | 0 | shelldocs | 0 | Shelldocs was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +1 | shadedclient | 836 | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 | shellcheck | 1 | There were no new shellcheck issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 851 | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 | asflicense | 31 | The patch does not generate ASF License warnings. | | | | 1909 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1243/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1243 | | Optional Tests | dupname asflicense shellcheck shelldocs | | uname | Linux c4e0adc87aa2 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9cd211a | | Max. process+thread count | 366 (vs. ulimit of 5500) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1243/1/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[jira] [Updated] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16494: --- Assignee: Akira Ajisaka Status: Patch Available (was: Open) > Add SHA-256 or SHA-512 checksum to release artifacts to comply with the > release distribution policy > --- > > Key: HADOOP-16494 > URL: https://issues.apache.org/jira/browse/HADOOP-16494 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Blocker > > Originally reported by [~ctubbsii]: > https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E > bq. None of the artifacts seem to have valid detached checksum files that are > in compliance with https://www.apache.org/dev/release-distribution There > should be some ".shaXXX" files in there, and not just the (optional) ".mds" > files. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[GitHub] [hadoop] aajisaka opened a new pull request #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy
aajisaka opened a new pull request #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy URL: https://github.com/apache/hadoop/pull/1243 JIRA: https://issues.apache.org/jira/browse/HADOOP-16494
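The patch itself changes the release tooling to publish `.sha512` files alongside the artifacts. For illustration only, the digest those files contain can be produced with the JDK alone; the class and method names below are hypothetical, not from the patch, and a real release script would stream the tarball through the digest rather than hold it in memory.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha512Example {

  // Hex-encode the SHA-512 digest of a byte payload.
  static String sha512Hex(byte[] data) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("SHA-512");
    StringBuilder sb = new StringBuilder();
    for (byte b : md.digest(data)) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) throws Exception {
    String hex = sha512Hex("release artifact bytes".getBytes(StandardCharsets.UTF_8));
    // SHA-512 produces a 64-byte digest, i.e. 128 hex characters.
    System.out.println(hex.length()); // 128
  }
}
```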
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1219: HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl.
bharatviswa504 commented on a change in pull request #1219: HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl. URL: https://github.com/apache/hadoop/pull/1219#discussion_r311364611 ## File path: hadoop-hdds/docs/content/shell/BucketCommands.md ## @@ -26,7 +26,6 @@ Ozone shell supports the following bucket commands. * [delete](#delete) * [info](#info) * [list](#list) - * [update](#update) Review comment: https://issues.apache.org/jira/browse/HDDS-1913 Will address this and also fix the Bucket and RpcClient APIs.
[GitHub] [hadoop] bharatviswa504 edited a comment on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
bharatviswa504 edited a comment on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518932392 Thank You @adoroszlai for the fix. I will commit this to the trunk and 0.4 branches.
[GitHub] [hadoop] bharatviswa504 merged pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
bharatviswa504 merged pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238
[GitHub] [hadoop] bharatviswa504 commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
bharatviswa504 commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518932392 I will commit this to the trunk and 0.4 branch.
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
bharatviswa504 commented on a change in pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#discussion_r311362101 ## File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java ## @@ -345,21 +345,23 @@ public void testDoubleBuffer(int iterations, int bucketCount) } // We are doing +1 for volume transaction. - GenericTestUtils.waitFor(() -> - doubleBuffer.getFlushedTransactionCount() == - (bucketCount + 1) * iterations, 100, - 12); + long expectedTransactions = (bucketCount + 1) * iterations; + GenericTestUtils.waitFor(() -> lastAppliedIndex == expectedTransactions, + 100, 12); - Assert.assertTrue(omMetadataManager.countRowsInTable( - omMetadataManager.getVolumeTable()) == iterations); + Assert.assertEquals(expectedTransactions, + doubleBuffer.getFlushedTransactionCount() + ); - Assert.assertTrue(omMetadataManager.countRowsInTable( - omMetadataManager.getBucketTable()) == (bucketCount) * iterations); + Assert.assertEquals(iterations, + omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable()) + ); - Assert.assertTrue(doubleBuffer.getFlushIterations() > 0); + Assert.assertEquals(bucketCount * iterations, + omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable()) + ); - // Check lastAppliedIndex is updated correctly or not. - Assert.assertEquals((bucketCount + 1) * iterations, lastAppliedIndex); Review comment: Yes, you are right, I have missed it. Thanks for the pointer.
[GitHub] [hadoop] chenjunjiedada commented on issue #1242: HDDS-1553: Add metric for rack aware placement policy
chenjunjiedada commented on issue #1242: HDDS-1553: Add metric for rack aware placement policy URL: https://github.com/apache/hadoop/pull/1242#issuecomment-518931264 @ChenSammi , could you please take a look? Does it satisfy your requirement?
[GitHub] [hadoop] chenjunjiedada opened a new pull request #1242: HDDS-1553: Add metric for rack aware placement policy
chenjunjiedada opened a new pull request #1242: HDDS-1553: Add metric for rack aware placement policy URL: https://github.com/apache/hadoop/pull/1242
[jira] [Commented] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
[ https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901678#comment-16901678 ] Prabhu Joseph commented on HADOOP-16457: Thanks [~eyang]. > Hadoop does not work with Kerberos config in hdfs-site.xml for simple security > -- > > Key: HADOOP-16457 > URL: https://issues.apache.org/jira/browse/HADOOP-16457 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Eric Yang >Assignee: Prabhu Joseph >Priority: Minor > Fix For: 3.3.0 > > Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch > > > When the http filter initializers are set up to use StaticUserWebFilter, AuthFilter > is still set up. This prevents the datanode from talking to the namenode. > Error message in namenode logs: > {code} > 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter > initializers set : > org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer > 2019-07-24 16:06:26,212 WARN > SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: > Authorization failed for hdfs (auth:SIMPLE) for protocol=interface > org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only > accessible by dn/eyang-5.openstacklo...@example.com > {code} > Errors in datanode log: > {code} > 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000 > {code} > The logic in HADOOP-16354 always added AuthFilter regardless of whether security > is enabled. This is incorrect. When simple security is chosen and > StaticUserWebFilter is in use, the AuthFilter check should not be required for the > datanode to communicate with the namenode. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
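The bug described above boils down to unconditionally registering AuthFilterInitializer alongside StaticUserWebFilter. A simplified, hypothetical stand-in for the intended selection logic (not the actual DFSUtil code) adds the auth filter only when security is enabled:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: choose HTTP filter initializers based on the
// security mode, so simple security does not get the Kerberos AuthFilter.
public class FilterSelection {
    static List<String> filterInitializers(boolean securityEnabled) {
        List<String> filters = new ArrayList<>();
        filters.add("org.apache.hadoop.http.lib.StaticUserWebFilter");
        if (securityEnabled) {
            // Only meaningful when Kerberos is configured.
            filters.add("org.apache.hadoop.hdfs.web.AuthFilterInitializer");
        }
        return filters;
    }
}
```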
[GitHub] [hadoop] chenjunjiedada closed pull request #1241: HDDS-1553: Add metric for rack aware placement policy
chenjunjiedada closed pull request #1241: HDDS-1553: Add metric for rack aware placement policy URL: https://github.com/apache/hadoop/pull/1241
[GitHub] [hadoop] chenjunjiedada opened a new pull request #1241: HDDS-1553: Add metric for rack aware placement policy
chenjunjiedada opened a new pull request #1241: HDDS-1553: Add metric for rack aware placement policy URL: https://github.com/apache/hadoop/pull/1241
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
xiaoyuyao commented on a change in pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#discussion_r311359109 ## File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java ## @@ -345,21 +345,23 @@ public void testDoubleBuffer(int iterations, int bucketCount) } // We are doing +1 for volume transaction. - GenericTestUtils.waitFor(() -> - doubleBuffer.getFlushedTransactionCount() == - (bucketCount + 1) * iterations, 100, - 12); + long expectedTransactions = (bucketCount + 1) * iterations; + GenericTestUtils.waitFor(() -> lastAppliedIndex == expectedTransactions, + 100, 12); - Assert.assertTrue(omMetadataManager.countRowsInTable( - omMetadataManager.getVolumeTable()) == iterations); + Assert.assertEquals(expectedTransactions, + doubleBuffer.getFlushedTransactionCount() + ); - Assert.assertTrue(omMetadataManager.countRowsInTable( - omMetadataManager.getBucketTable()) == (bucketCount) * iterations); + Assert.assertEquals(iterations, + omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable()) + ); - Assert.assertTrue(doubleBuffer.getFlushIterations() > 0); + Assert.assertEquals(bucketCount * iterations, + omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable()) + ); - // Check lastAppliedIndex is updated correctly or not. - Assert.assertEquals((bucketCount + 1) * iterations, lastAppliedIndex); Review comment: The waitFor() moved above should have guaranteed this condition already.
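The flakiness fix in the diff above replaces fixed-point assertions with a poll-until-true wait: GenericTestUtils.waitFor takes a condition, a polling interval, and a timeout, and only after it succeeds are the row counts asserted. A minimal self-contained sketch of that pattern (a stand-in, not the actual Hadoop utility):

```java
import java.util.function.BooleanSupplier;

// Stand-in for GenericTestUtils.waitFor: poll `check` every
// `checkEveryMillis` until it returns true, or fail once
// `waitForMillis` have elapsed.
public class WaitFor {
    public static void waitFor(BooleanSupplier check,
                               int checkEveryMillis,
                               int waitForMillis) {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException(
                    "Timed out after " + waitForMillis + " ms");
            }
            try {
                Thread.sleep(checkEveryMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Interrupted while waiting", e);
            }
        }
    }
}
```

Asserting on table row counts only after the wait succeeds, as the patch does, removes the race between the double-buffer flush thread and the test thread.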
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901636#comment-16901636 ] Akira Ajisaka commented on HADOOP-16494: As [~jeagles] commented, the Hadoop project release checker is available (https://checker.apache.org/projs/hadoop.html) and this page should be checked after each release is made. > Add SHA-256 or SHA-512 checksum to release artifacts to comply with the > release distribution policy > --- > > Key: HADOOP-16494 > URL: https://issues.apache.org/jira/browse/HADOOP-16494 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Priority: Blocker > > Originally reported by [~ctubbsii]: > https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E > bq. None of the artifacts seem to have valid detached checksum files that are > in compliance with https://www.apache.org/dev/release-distribution There > should be some ".shaXXX" files in there, and not just the (optional) ".mds" > files.
[GitHub] [hadoop] bshashikant edited a comment on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
bshashikant edited a comment on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#issuecomment-518913410 Thanks @mukul1987 . In Ratis, as far as my understanding goes, we wait for all pending applyTransaction futures to complete before taking a snapshot. Since the patch now propagates the applyTransaction exception to Ratis, snapshot creation should ideally fail there. I will add a test case to verify the same. I will address the remaining review comments as part of the next patch.
[GitHub] [hadoop] bshashikant commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
bshashikant commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#discussion_r311346307 ## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java ## @@ -609,6 +609,16 @@ void handleNoLeader(RaftGroupId groupId, RoleInfoProto roleInfoProto) { handlePipelineFailure(groupId, roleInfoProto); } + void handleApplyTransactionFailure(RaftGroupId groupId, + RaftProtos.RaftPeerRole role) { +UUID dnId = RatisHelper.toDatanodeId(getServer().getId()); +String msg = +"Ratis Transaction failure in datanode" + dnId + " with role " + role ++ " Triggering pipeline close action."; +triggerPipelineClose(groupId, msg, ClosePipelineInfo.Reason.PIPELINE_FAILED, +false); +stop(); Review comment: As far as I know from previous discussions, the decision was to not take any other transactions on this pipeline at all and kill the RaftServerImpl instance. Any deviation from that conclusion?
[jira] [Created] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
Akira Ajisaka created HADOOP-16494: -- Summary: Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy Key: HADOOP-16494 URL: https://issues.apache.org/jira/browse/HADOOP-16494 Project: Hadoop Common Issue Type: Bug Components: build Reporter: Akira Ajisaka Originally reported by [~ctubbsii]: https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E bq. None of the artifacts seem to have valid detached checksum files that are in compliance with https://www.apache.org/dev/release-distribution There should be some ".shaXXX" files in there, and not just the (optional) ".mds" files.
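For context, the release distribution policy expects a detached checksum file (e.g. a ".sha512" file) next to each released artifact. A minimal sketch with GNU coreutils, using a placeholder artifact name (hadoop-X.Y.Z.tar.gz is illustrative, not a real release file):

```shell
# Stand-in artifact; in a real release this is the built tarball.
echo "release contents" > hadoop-X.Y.Z.tar.gz

# Write a detached SHA-512 checksum file next to the artifact.
sha512sum hadoop-X.Y.Z.tar.gz > hadoop-X.Y.Z.tar.gz.sha512

# Later, a downloader verifies the artifact against the detached checksum.
sha512sum -c hadoop-X.Y.Z.tar.gz.sha512
```

The ".mds" files mentioned in the report bundle several digests in one file and remain optional; the policy-compliant part is the per-algorithm detached file.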
[GitHub] [hadoop] bshashikant edited a comment on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
bshashikant edited a comment on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#issuecomment-518913410 Thanks @mukul1987 . In Ratis, as far as my understanding goes, we wait for all pending applyTransaction futures to complete before taking a snapshot. Since the patch now propagates the applyTransaction exception to Ratis, snapshot creation should ideally fail there. I will address the remaining review comments as part of the next patch.
[GitHub] [hadoop] bshashikant commented on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
bshashikant commented on issue #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#issuecomment-518913410 Thanks @mukul1987 . In Ratis, as far as my understanding goes, we wait for all pending applyTransaction futures to complete before taking a snapshot. Since the patch now propagates the applyTransaction exception to Ratis, snapshot creation should ideally fail there. I will address the remaining review comments as part of the next patch.
[GitHub] [hadoop] shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311341219 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 0 : index - 10; } - private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{ -int index = calcBinIndex(omKeyInfo.getDataSize()); -if(index == Integer.MIN_VALUE) { - throw new IOException("File Size larger than permissible file size " - + maxFileSizeUpperBound +" bytes"); + void countFileSize(OmKeyInfo omKeyInfo) { +int index; +if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) { + index = maxBinSize - 1; +} else { + index = calculateBinIndex(omKeyInfo.getDataSize()); } upperBoundCount[index]++; } - private void populateFileCountBySizeDB() { + /** + * Populate DB with the counts of file sizes calculated + * using the dao. + * + */ + void populateFileCountBySizeDB() { for (int i = 0; i < upperBoundCount.length; i++) { long fileSizeUpperBound = (long) Math.pow(2, (10 + i)); FileCountBySize fileCountRecord = fileCountBySizeDao.findById(fileSizeUpperBound); FileCountBySize newRecord = new FileCountBySize(fileSizeUpperBound, upperBoundCount[i]); - if(fileCountRecord == null){ + if (fileCountRecord == null) { Review comment: Done.
[GitHub] [hadoop] shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308069 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 0 : index - 10; } - private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{ -int index = calcBinIndex(omKeyInfo.getDataSize()); -if(index == Integer.MIN_VALUE) { - throw new IOException("File Size larger than permissible file size " - + maxFileSizeUpperBound +" bytes"); + void countFileSize(OmKeyInfo omKeyInfo) { +int index; +if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) { + index = maxBinSize - 1; +} else { + index = calculateBinIndex(omKeyInfo.getDataSize()); } upperBoundCount[index]++; } - private void populateFileCountBySizeDB() { + /** + * Populate DB with the counts of file sizes calculated + * using the dao. + * + */ + void populateFileCountBySizeDB() { for (int i = 0; i < upperBoundCount.length; i++) { long fileSizeUpperBound = (long) Math.pow(2, (10 + i)); FileCountBySize fileCountRecord = fileCountBySizeDao.findById(fileSizeUpperBound); FileCountBySize newRecord = new FileCountBySize(fileSizeUpperBound, upperBoundCount[i]); - if(fileCountRecord == null){ + if (fileCountRecord == null) { fileCountBySizeDao.insert(newRecord); - } else{ + } else { fileCountBySizeDao.update(newRecord); } } } private void updateUpperBoundCount(OmKeyInfo value, String operation) throws IOException { -int binIndex = calcBinIndex(value.getDataSize()); -if(binIndex == Integer.MIN_VALUE) { +int binIndex = calculateBinIndex(value.getDataSize()); +if (binIndex == Integer.MIN_VALUE) { Review comment: Yes, it was from a previous check where there was an exception for fileSize > permitted value of 1 PB.
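The refactored calculateBinIndex in the diff above counts right shifts until dataSize reaches zero and then subtracts 10, so every key under 1 KB (2^10 bytes) lands in bin 0 and each later bin covers the next power of two. A self-contained sketch of that logic, with the upper-bound clamp from countFileSize pulled out into parameters (the concrete values Recon uses, such as the 1 PB limit and bin count, are passed in rather than assumed):

```java
public class FileSizeBins {

    // Number of right shifts until dataSize becomes zero, offset by 10
    // so that sizes below 2^10 (1 KB) all fall into bin 0. Mirrors the
    // patched calculateBinIndex; assumes dataSize >= 0.
    static int calculateBinIndex(long dataSize) {
        int index = 0;
        while (dataSize != 0) {
            dataSize >>= 1;
            index += 1;
        }
        return index < 10 ? 0 : index - 10;
    }

    // Clamp sizes at or above the configured upper bound into the last
    // bin, mirroring the check in countFileSize.
    static int binFor(long dataSize, long maxFileSizeUpperBound,
                      int maxBinSize) {
        return dataSize >= maxFileSizeUpperBound
                ? maxBinSize - 1
                : calculateBinIndex(dataSize);
    }
}
```

This replaces the earlier log-based computation (and its Integer.MIN_VALUE sentinel) with plain bit counting, which avoids floating-point edge cases at exact powers of two.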
[GitHub] [hadoop] pingsutw commented on issue #1240: HDDS-1919. Fix Javadoc in TestAuditParser
pingsutw commented on issue #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240#issuecomment-518898781 @bharatviswa504 Thank you so much
[GitHub] [hadoop] hadoop-yetus commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
hadoop-yetus commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518898029 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 128 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 826 | trunk passed | | +1 | compile | 462 | trunk passed | | +1 | checkstyle | 106 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 1178 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 226 | trunk passed | | 0 | spotbugs | 520 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 782 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 722 | the patch passed | | +1 | compile | 445 | the patch passed | | +1 | javac | 445 | the patch passed | | +1 | checkstyle | 83 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 735 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 193 | the patch passed | | +1 | findbugs | 663 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 342 | hadoop-hdds in the patch passed. | | -1 | unit | 2424 | hadoop-ozone in the patch failed. | | +1 | asflicense | 64 | The patch does not generate ASF License warnings. 
| | | | 9555 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures | | | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.ozone.TestStorageContainerManager | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.om.TestScmSafeMode | | | hadoop.ozone.client.rpc.TestCommitWatcher | | | hadoop.ozone.client.rpc.TestWatchForCommit | | | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis | | | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory | | | hadoop.ozone.client.rpc.TestBlockOutputStream | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1238 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f59460a8e48f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8cef9f8 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/testReport/ | | Max. process+thread count | 4111 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] bharatviswa504 merged pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser
bharatviswa504 merged pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240
[GitHub] [hadoop] bharatviswa504 commented on issue #1240: HDDS-1919. Fix Javadoc in TestAuditParser
bharatviswa504 commented on issue #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240#issuecomment-518897505 Merging this without CI, as it updates only a Javadoc comment.
[jira] [Commented] (HADOOP-15908) hadoop-build-tools jar is downloaded from remote repository instead of using from local
[ https://issues.apache.org/jira/browse/HADOOP-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901591#comment-16901591 ] Hudson commented on HADOOP-15908: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17053 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17053/]) HADOOP-15908. hadoop-build-tools jar is downloaded from remote (aajisaka: rev 0b0ba70b35d1c1c774d69f5682f24967d40009a8) * (edit) hadoop-project/pom.xml > hadoop-build-tools jar is downloaded from remote repository instead of using > from local > --- > > Key: HADOOP-15908 > URL: https://issues.apache.org/jira/browse/HADOOP-15908 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Oleksandr Shevchenko >Assignee: Oleksandr Shevchenko >Priority: Minor > Fix For: 3.3.0, 3.2.1 > > Attachments: HADOOP-15908.001.patch, HADOOP-15908.002.patch > > > HADOOP-12893 added "maven-remote-resources-plugin" to hadoop-project/pom.xml > to verify LICENSE.txt and NOTICE.txt files which includes > "hadoop-build-tools" remote resource bundles. > {code} > > org.apache.maven.plugins > maven-remote-resources-plugin > ${maven-remote-resources-plugin.version} > > > > org.apache.hadoop:hadoop-build-tools:${hadoop.version} > > > > > org.apache.hadoop > hadoop-build-tools > ${hadoop.version} > > > > > > process > > > > > {code} > If we build only some module we always download " hadoop-build-tools" from > maven repository. 
> For example run: > cd hadoop-common-project/ > mvn test > Then we will get the following output: > {noformat} > [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ > hadoop-annotations --- > Downloading from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > Downloaded from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > (791 B at 684 B/s) > Downloading from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-main/3.3.0-SNAPSHOT/maven-metadata.xml > Downloaded from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-main/3.3.0-SNAPSHOT/maven-metadata.xml > (609 B at 547 B/s) > Downloading from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > Downloaded from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > (791 B at 343 B/s) > Downloading from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/hadoop-build-tools-3.3.0-20181022.232020-179.jar > Downloaded from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/hadoop-build-tools-3.3.0-20181022.232020-179.jar > (0 B at 0 B/s) > {noformat} > If "hadoop-build-tools" jar doesn't exist in maven repository (for example we > try to build new version locally before repository will be created ) we can't > build some module: > For example run: > cd hadoop-common-project/ > mvn test > Then we will get the following output: > {noformat} > [ERROR] Failed to execute goal > 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on project hadoop-annotations: Execution default of goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed: Plugin org.apache.maven.plugins:maven-remote-resources-plugin:1.5 or one of its dependencies could not be resolved: Failure to find org.apache.hadoop:hadoop-build-tools:jar:3.2.0 in https://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced -> [Help 1]
> {noformat}
> Therefore, we need to limit execution of the Remote Resources Plugin to the root directory in which the build was run.
> To accomplish this, we can use the "runOnlyAtExecutionRoot" parameter.
> From the Maven documentation: http://maven.apache.org/plugins/maven-remote-resources-plugin/usage.html
-- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
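[Editor's note] The "runOnlyAtExecutionRoot" fix described in HADOOP-15908 above would sit in the plugin's configuration roughly as sketched below. This is a sketch based on the Maven documentation linked in the issue; the exact element placement in hadoop-project/pom.xml is an assumption, and the committed patch (HADOOP-15908.002.patch) is authoritative.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-remote-resources-plugin</artifactId>
  <version>${maven-remote-resources-plugin.version}</version>
  <configuration>
    <!-- Run the resource-bundle processing only in the directory where the
         build was started, so submodule builds skip the remote fetch. -->
    <runOnlyAtExecutionRoot>true</runOnlyAtExecutionRoot>
    <resourceBundles>
      <resourceBundle>org.apache.hadoop:hadoop-build-tools:${hadoop.version}</resourceBundle>
    </resourceBundles>
  </configuration>
</plugin>
```

With this flag, running `mvn test` inside hadoop-common-project/ no longer triggers the remote-resources goal, so the hadoop-build-tools jar is not fetched for partial builds.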
[GitHub] [hadoop] hadoop-yetus commented on issue #1239: HDDS-1907. TestOzoneRpcClientWithRatis is failing with ACL errors. Co…
hadoop-yetus commented on issue #1239: HDDS-1907. TestOzoneRpcClientWithRatis is failing with ACL errors. Co… URL: https://github.com/apache/hadoop/pull/1239#issuecomment-518894346

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 44 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 616 | trunk passed |
| +1 | compile | 360 | trunk passed |
| +1 | checkstyle | 65 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 814 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 165 | trunk passed |
| 0 | spotbugs | 418 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 617 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 546 | the patch passed |
| +1 | compile | 371 | the patch passed |
| +1 | javac | 371 | the patch passed |
| +1 | checkstyle | 81 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 678 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 163 | the patch passed |
| +1 | findbugs | 636 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 297 | hadoop-hdds in the patch passed. |
| -1 | unit | 1889 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
| | | | 7554 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.om.TestScmSafeMode |
| | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
| | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
| | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1239 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux e333bede783b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 954ff36 |
| Default Java | 1.8.0_212 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/testReport/ |
| Max. process+thread count | 5390 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15908) hadoop-build-tools jar is downloaded from remote repository instead of using from local
[ https://issues.apache.org/jira/browse/HADOOP-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15908:
-
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.2.1, 3.3.0
Status: Resolved (was: Patch Available)

Committed this to trunk and branch-3.2.
[GitHub] [hadoop] pingsutw commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser
pingsutw commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240#discussion_r311328200

## File path: hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java ##

@@ -42,9 +42,9 @@ import java.util.List;
 /**
- * Tests GenerateOzoneRequiredConfigurations.
+ * Tests TestAuditParser.
  */
-public class TestAuditParser {
+public class AuditParser {

Review comment: Sorry for my mistake, I have updated the patch.
[jira] [Updated] (HADOOP-15908) hadoop-build-tools jar is downloaded from remote repository instead of using from local
[ https://issues.apache.org/jira/browse/HADOOP-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15908:
-
Component/s: build
[GitHub] [hadoop] hadoop-yetus commented on issue #1239: HDDS-1907. TestOzoneRpcClientWithRatis is failing with ACL errors. Co…
hadoop-yetus commented on issue #1239: HDDS-1907. TestOzoneRpcClientWithRatis is failing with ACL errors. Co… URL: https://github.com/apache/hadoop/pull/1239#issuecomment-518892268

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 49 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 81 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 14 | hadoop-ozone in trunk failed. |
| +1 | compile | 420 | trunk passed |
| +1 | checkstyle | 66 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 842 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 164 | trunk passed |
| 0 | spotbugs | 440 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 638 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 543 | the patch passed |
| +1 | compile | 381 | the patch passed |
| +1 | javac | 381 | the patch passed |
| +1 | checkstyle | 79 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | whitespace | 0 | The patch has 59 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | whitespace | 1 | The patch 600 line(s) with tabs. |
| +1 | shadedclient | 689 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 168 | the patch passed |
| +1 | findbugs | 643 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 291 | hadoop-hdds in the patch passed. |
| -1 | unit | 1914 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | | 7183 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.TestMiniOzoneCluster |
| | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
| | hadoop.ozone.om.TestScmSafeMode |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1239 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 3610886b8fcf 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 954ff36 |
| Default Java | 1.8.0_212 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/whitespace-eol.txt |
| whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/whitespace-tabs.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/testReport/ |
| Max. process+thread count | 5369 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-15908) hadoop-build-tools jar is downloaded from remote repository instead of using from local
[ https://issues.apache.org/jira/browse/HADOOP-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901578#comment-16901578 ] Akira Ajisaka commented on HADOOP-15908:

+1, thanks [~oshevchenko] and [~jojochuang].
[GitHub] [hadoop] bharatviswa504 commented on issue #1233: HDDS-1915. Remove hadoop script from ozone distribution
bharatviswa504 commented on issue #1233: HDDS-1915. Remove hadoop script from ozone distribution URL: https://github.com/apache/hadoop/pull/1233#issuecomment-518887905 @arp7, can you also take a look at this change? I will wait a day for others to review; if there are no more comments, I will commit it.
[GitHub] [hadoop] bharatviswa504 commented on issue #1236: HDDS-1918. hadoop-ozone-tools has integration tests run as unit
bharatviswa504 commented on issue #1236: HDDS-1918. hadoop-ozone-tools has integration tests run as unit URL: https://github.com/apache/hadoop/pull/1236#issuecomment-518886984 Thank you @adoroszlai for the contribution. I have committed this to trunk.
[GitHub] [hadoop] bharatviswa504 merged pull request #1236: HDDS-1918. hadoop-ozone-tools has integration tests run as unit
bharatviswa504 merged pull request #1236: HDDS-1918. hadoop-ozone-tools has integration tests run as unit URL: https://github.com/apache/hadoop/pull/1236
[GitHub] [hadoop] bharatviswa504 commented on issue #1235: HDDS-1916. Only contract tests are run in ozonefs module
bharatviswa504 commented on issue #1235: HDDS-1916. Only contract tests are run in ozonefs module URL: https://github.com/apache/hadoop/pull/1235#issuecomment-518886605 /retest
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser
bharatviswa504 commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240#discussion_r311321684

## File path: hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java ##

@@ -42,9 +42,9 @@ import java.util.List;
 /**
- * Tests GenerateOzoneRequiredConfigurations.
+ * Tests TestAuditParser.
  */
-public class TestAuditParser {
+public class AuditParser {

Review comment: Hi @pingsutw, the class name should remain TestAuditParser. The comment for the class should be "Tests AuditParser."
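[Editor's note] To make the review feedback above concrete: the intended HDDS-1919 change is Javadoc-only, since renaming the class away from the Test* convention would stop the build's test runner from discovering it. A minimal sketch of the corrected header (the real test methods are elided; the main method here is only for standalone illustration):

```java
/**
 * Tests AuditParser.
 */
public class TestAuditParser {
    // Test methods unchanged; only the class-level Javadoc is corrected.
    public static void main(String[] args) {
        // Surefire discovers test classes by naming convention (Test*),
        // so the class name must stay TestAuditParser.
        System.out.println(TestAuditParser.class.getSimpleName());
    }
}
```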
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…
bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by… URL: https://github.com/apache/hadoop/pull/1147#discussion_r311318331

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java ##

@@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.request.volume.acl;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeAclOpResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Handles volume add acl request.
+ */
+public class OMVolumeAddAclRequest extends OMVolumeAclRequest {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMVolumeAddAclRequest.class);
+
+  private static CheckedBiFunction<List<OzoneAcl>,
+      OmVolumeArgs, IOException> volumeAddAclOp;
+
+  static {
+    volumeAddAclOp = (acls, volArgs) -> volArgs.addAcl(acls.get(0));
+  }
+
+  private List<OzoneAcl> ozoneAcls;
+  private String volumeName;
+
+  public OMVolumeAddAclRequest(OMRequest omRequest) {
+    super(omRequest, volumeAddAclOp);
+    OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
+        getOmRequest().getAddAclRequest();
+    Preconditions.checkNotNull(addAclRequest);
+    ozoneAcls = Lists.newArrayList(
+        OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
+    volumeName = addAclRequest.getObj().getPath().substring(1);
+  }
+
+  @Override
+  public List<OzoneAcl> getAcls() {
+    return ozoneAcls;
+  }
+
+  @Override
+  public String getVolumeName() {
+    return volumeName;
+  }
+
+  private OzoneAcl getAcl() {
+    return ozoneAcls.get(0);
+  }
+
+  @Override
+  OMResponse.Builder onInit() {
+    return OMResponse.newBuilder().setCmdType(
+        OzoneManagerProtocolProtos.Type.AddAcl)
+        .setStatus(OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+  }
+
+  @Override
+  OMClientResponse onSuccess(OMResponse.Builder omResponse,
+      OmVolumeArgs omVolumeArgs, boolean result) {
+    omResponse.setAddAclResponse(OzoneManagerProtocolProtos.AddAclResponse
+        .newBuilder().setResponse(result).build());
+    return new OMVolumeAclOpResponse(omVolumeArgs, omResponse.build());
+  }
+
+  @Override
+  OMClientResponse onFailure(OMResponse.Builder omResponse,
+      IOException ex) {
+    return new OMVolumeAclOpResponse(null,
+        createErrorOMResponse(omResponse, ex));
+  }
+
+  @Override
+  void onComplete(IOException ex) {

Review comment: We should call setSuccess with the operationResult flag, because onInit() sets it to true.
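[Editor's note] The static initializer in the diff above wires a checked two-argument lambda to the parent request class. A minimal self-contained sketch of that pattern follows; the interface shape is an assumption modeled on `org.apache.hadoop.hdds.scm.storage.CheckedBiFunction`, and `VolumeArgs` is a hypothetical stand-in for `OmVolumeArgs` using plain strings as ACLs:

```java
import java.util.ArrayList;
import java.util.List;

public class VolumeAclOpDemo {
    // Assumed shape of CheckedBiFunction: a two-argument, void, throwing lambda.
    @FunctionalInterface
    interface CheckedBiFunction<L, R, E extends Exception> {
        void apply(L left, R right) throws E;
    }

    // Hypothetical stand-in for OmVolumeArgs: just collects ACL strings.
    static class VolumeArgs {
        final List<String> acls = new ArrayList<>();
        void addAcl(String acl) { acls.add(acl); }
    }

    // Mirrors the static initializer in OMVolumeAddAclRequest:
    // the add-ACL operation applies only the first ACL in the list.
    static final CheckedBiFunction<List<String>, VolumeArgs, Exception>
        volumeAddAclOp = (acls, volArgs) -> volArgs.addAcl(acls.get(0));

    public static void main(String[] args) throws Exception {
        VolumeArgs vol = new VolumeArgs();
        volumeAddAclOp.apply(List.of("user:alice:rw", "user:bob:r"), vol);
        // Only the first ACL is applied by this operation.
        System.out.println(vol.acls);
    }
}
```

Defining the operation once as a static field lets the parent `OMVolumeAclRequest` run the shared validate-and-apply flow while each subclass supplies only the ACL mutation.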
[GitHub] [hadoop] hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…
hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by… URL: https://github.com/apache/hadoop/pull/1147#issuecomment-518882223

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 45 | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
| | | | _ trunk Compile Tests _ |
| +1 | mvninstall | 582 | trunk passed |
| +1 | compile | 360 | trunk passed |
| +1 | checkstyle | 72 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 805 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 148 | trunk passed |
| 0 | spotbugs | 419 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 606 | trunk passed |
| -0 | patch | 456 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| | | | _ Patch Compile Tests _ |
| +1 | mvninstall | 565 | the patch passed |
| +1 | compile | 367 | the patch passed |
| +1 | javac | 367 | the patch passed |
| +1 | checkstyle | 74 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 657 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 156 | the patch passed |
| +1 | findbugs | 691 | the patch passed |
| | | | _ Other Tests _ |
| +1 | unit | 290 | hadoop-hdds in the patch passed. |
| -1 | unit | 2975 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
| | | 8582 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
| | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
| | hadoop.ozone.client.rpc.TestBlockOutputStream |
| | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
| | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
| | hadoop.ozone.client.rpc.TestBCSID |
| | hadoop.ozone.om.TestScmSafeMode |
| | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
| | hadoop.ozone.client.rpc.TestCommitWatcher |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 21d94a3cc585 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 22430c1 |
| Default Java | 1.8.0_212 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/testReport/ |
| Max. process+thread count | 4167 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] pingsutw commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser
pingsutw commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240#discussion_r311316711 ## File path: hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java ## @@ -42,7 +42,7 @@ import java.util.List; /** - * Tests GenerateOzoneRequiredConfigurations. + * Tests TestAuditParser. Review comment: @bharatviswa504 Thanks for your review. I have already updated my patch.
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser
bharatviswa504 commented on a change in pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240#discussion_r311315299 ## File path: hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java ## @@ -42,7 +42,7 @@ import java.util.List; /** - * Tests GenerateOzoneRequiredConfigurations. + * Tests TestAuditParser. Review comment: It should be mentioned as AuditParser.
[GitHub] [hadoop] pingsutw opened a new pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser
pingsutw opened a new pull request #1240: HDDS-1919. Fix Javadoc in TestAuditParser URL: https://github.com/apache/hadoop/pull/1240
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311309493 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 
0 : index - 10; } - private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{ -int index = calcBinIndex(omKeyInfo.getDataSize()); -if(index == Integer.MIN_VALUE) { - throw new IOException("File Size larger than permissible file size " - + maxFileSizeUpperBound +" bytes"); + void countFileSize(OmKeyInfo omKeyInfo) { +int index; +if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) { + index = maxBinSize - 1; +} else { + index = calculateBinIndex(omKeyInfo.getDataSize()); } upperBoundCount[index]++; } - private void populateFileCountBySizeDB() { + /** + * Populate DB with the counts of file sizes calculated + * using the dao. + * + */ + void populateFileCountBySizeDB() { for (int i = 0; i < upperBoundCount.length; i++) { long fileSizeUpperBound = (long) Math.pow(2, (10 + i)); FileCountBySize fileCountRecord = fileCountBySizeDao.findById(fileSizeUpperBound); FileCountBySize newRecord = new FileCountBySize(fileSizeUpperBound, upperBoundCount[i]); - if(fileCountRecord == null){ + if (fileCountRecord == null) { Review comment: Yes, it should be `Long.MAX_VALUE`.
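The shift-based binning in the diff above can be exercised standalone. The sketch below is a hypothetical extraction of the `calculateBinIndex` logic, not the Recon `FileSizeCountTask` class itself: the index is the number of right shifts needed to reduce `dataSize` to zero, with the first ten shifts (all sizes below 1 KB = 2^10 bytes) collapsing into bin 0.

```java
// Hypothetical standalone extraction of the calculateBinIndex logic
// discussed in the review; not the actual FileSizeCountTask class.
public class BinIndexSketch {

    // Bin index = number of right shifts until dataSize becomes zero,
    // minus 10, floored at 0 (all sizes below 1 KB share the first bin).
    static int calculateBinIndex(long dataSize) {
        int index = 0;
        while (dataSize != 0) {
            dataSize >>= 1; // one shift per power of two
            index += 1;
        }
        return index < 10 ? 0 : index - 10;
    }

    public static void main(String[] args) {
        System.out.println(calculateBinIndex(1023L));   // 0: below 1 KB
        System.out.println(calculateBinIndex(1024L));   // 1: exactly 1 KB
        System.out.println(calculateBinIndex(100000L)); // 7: under 128 KB
    }
}
```

Bin i is then persisted under the upper-bound key 2^(10+i), matching the `(long) Math.pow(2, (10 + i))` in `populateFileCountBySizeDB`.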
[GitHub] [hadoop] shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308884 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 
0 : index - 10; } - private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{ -int index = calcBinIndex(omKeyInfo.getDataSize()); -if(index == Integer.MIN_VALUE) { - throw new IOException("File Size larger than permissible file size " - + maxFileSizeUpperBound +" bytes"); + void countFileSize(OmKeyInfo omKeyInfo) { +int index; +if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) { + index = maxBinSize - 1; +} else { + index = calculateBinIndex(omKeyInfo.getDataSize()); } upperBoundCount[index]++; } - private void populateFileCountBySizeDB() { + /** + * Populate DB with the counts of file sizes calculated + * using the dao. + * + */ + void populateFileCountBySizeDB() { for (int i = 0; i < upperBoundCount.length; i++) { long fileSizeUpperBound = (long) Math.pow(2, (10 + i)); FileCountBySize fileCountRecord = fileCountBySizeDao.findById(fileSizeUpperBound); FileCountBySize newRecord = new FileCountBySize(fileSizeUpperBound, upperBoundCount[i]); - if(fileCountRecord == null){ + if (fileCountRecord == null) { Review comment: Sure, it is an extra bin to add files > maxFileSizeUpperBound. Also, did you mean `Long.MAX_VALUE`?
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
bharatviswa504 commented on a change in pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#discussion_r311309048 ## File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java ## @@ -345,21 +345,23 @@ public void testDoubleBuffer(int iterations, int bucketCount) } // We are doing +1 for volume transaction. - GenericTestUtils.waitFor(() -> - doubleBuffer.getFlushedTransactionCount() == - (bucketCount + 1) * iterations, 100, - 12); + long expectedTransactions = (bucketCount + 1) * iterations; + GenericTestUtils.waitFor(() -> lastAppliedIndex == expectedTransactions, + 100, 12); - Assert.assertTrue(omMetadataManager.countRowsInTable( - omMetadataManager.getVolumeTable()) == iterations); + Assert.assertEquals(expectedTransactions, + doubleBuffer.getFlushedTransactionCount() + ); - Assert.assertTrue(omMetadataManager.countRowsInTable( - omMetadataManager.getBucketTable()) == (bucketCount) * iterations); + Assert.assertEquals(iterations, + omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable()) + ); - Assert.assertTrue(doubleBuffer.getFlushIterations() > 0); + Assert.assertEquals(bucketCount * iterations, + omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable()) + ); - // Check lastAppliedIndex is updated correctly or not. - Assert.assertEquals((bucketCount + 1) * iterations, lastAppliedIndex); Review comment: Why was this lastAppliedIndex check removed?
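The flakiness fix in the diff above waits on `lastAppliedIndex` via `GenericTestUtils.waitFor` before making exact-count assertions, instead of asserting against a counter that is still moving. As a hedged illustration, here is a minimal standalone version of that wait-then-assert pattern (a hypothetical helper, not Hadoop's actual `GenericTestUtils` implementation):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BooleanSupplier;

// Hedged sketch of the poll-until-true pattern behind
// GenericTestUtils.waitFor (hypothetical standalone helper): poll a
// condition until it holds or a deadline passes, then make exact-count
// assertions only after the condition is stable.
public class WaitForSketch {
    static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("Timed out waiting for condition");
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Interrupted while waiting", e);
            }
        }
    }

    public static void main(String[] args) {
        AtomicLong applied = new AtomicLong(0);
        // Simulate an asynchronous flusher advancing the applied index.
        new Thread(() -> applied.set(5)).start();
        // Wait for the expected transaction count, then assert exact values.
        waitFor(() -> applied.get() == 5, 10, 5000);
        System.out.println("lastAppliedIndex = " + applied.get());
    }
}
```

Asserting only after the wait succeeds is what removes the race; the exact-count assertions that follow then see a settled state.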
[GitHub] [hadoop] shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308096 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 0 : index - 10; Review comment: Sure. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308069 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 
0 : index - 10; } - private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{ -int index = calcBinIndex(omKeyInfo.getDataSize()); -if(index == Integer.MIN_VALUE) { - throw new IOException("File Size larger than permissible file size " - + maxFileSizeUpperBound +" bytes"); + void countFileSize(OmKeyInfo omKeyInfo) { +int index; +if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) { + index = maxBinSize - 1; +} else { + index = calculateBinIndex(omKeyInfo.getDataSize()); } upperBoundCount[index]++; } - private void populateFileCountBySizeDB() { + /** + * Populate DB with the counts of file sizes calculated + * using the dao. + * + */ + void populateFileCountBySizeDB() { for (int i = 0; i < upperBoundCount.length; i++) { long fileSizeUpperBound = (long) Math.pow(2, (10 + i)); FileCountBySize fileCountRecord = fileCountBySizeDao.findById(fileSizeUpperBound); FileCountBySize newRecord = new FileCountBySize(fileSizeUpperBound, upperBoundCount[i]); - if(fileCountRecord == null){ + if (fileCountRecord == null) { fileCountBySizeDao.insert(newRecord); - } else{ + } else { fileCountBySizeDao.update(newRecord); } } } private void updateUpperBoundCount(OmKeyInfo value, String operation) throws IOException { -int binIndex = calcBinIndex(value.getDataSize()); -if(binIndex == Integer.MIN_VALUE) { +int binIndex = calculateBinIndex(value.getDataSize()); +if (binIndex == Integer.MIN_VALUE) { Review comment: Yes, it was from a previous check where there was an exception for fileSize > permitted value of 1 B.
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311293167 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -65,6 +64,23 @@ public FileSizeCountTask(OMMetadataManager omMetadataManager, } catch (Exception e) { LOG.error("Unable to fetch Key Table updates ", e); } +upperBoundCount = new long[getMaxBinSize()]; + } + + protected long getOneKB() { +return ONE_KB; + } + + protected long getMaxFileSizeUpperBound() { +return maxFileSizeUpperBound; + } + + protected int getMaxBinSize() { Review comment: Can we change this method to take `fileSize` as an argument?
[jira] [Commented] (HADOOP-15908) hadoop-build-tools jar is downloaded from remote repository instead of using from local
[ https://issues.apache.org/jira/browse/HADOOP-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901532#comment-16901532 ] Wei-Chiu Chuang commented on HADOOP-15908:
--
[~aajisaka], god of Maven, are you available to review this one? Thank you

> hadoop-build-tools jar is downloaded from remote repository instead of using
> from local
> ---
>
> Key: HADOOP-15908
> URL: https://issues.apache.org/jira/browse/HADOOP-15908
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Oleksandr Shevchenko
> Assignee: Oleksandr Shevchenko
> Priority: Minor
> Attachments: HADOOP-15908.001.patch, HADOOP-15908.002.patch
>
> HADOOP-12893 added "maven-remote-resources-plugin" to hadoop-project/pom.xml
> to verify LICENSE.txt and NOTICE.txt files which includes
> "hadoop-build-tools" remote resource bundles.
> {code}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-remote-resources-plugin</artifactId>
>   <version>${maven-remote-resources-plugin.version}</version>
>   <configuration>
>     <resourceBundles>
>       <resourceBundle>org.apache.hadoop:hadoop-build-tools:${hadoop.version}</resourceBundle>
>     </resourceBundles>
>   </configuration>
>   <dependencies>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-build-tools</artifactId>
>       <version>${hadoop.version}</version>
>     </dependency>
>   </dependencies>
>   <executions>
>     <execution>
>       <goals>
>         <goal>process</goal>
>       </goals>
>     </execution>
>   </executions>
> </plugin>
> {code}
> If we build only some module we always download "hadoop-build-tools" from
> the Maven repository.
> For example run: > cd hadoop-common-project/ > mvn test > Then we will get the following output: > {noformat} > [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ > hadoop-annotations --- > Downloading from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > Downloaded from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > (791 B at 684 B/s) > Downloading from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-main/3.3.0-SNAPSHOT/maven-metadata.xml > Downloaded from apache.snapshots: > http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-main/3.3.0-SNAPSHOT/maven-metadata.xml > (609 B at 547 B/s) > Downloading from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > Downloaded from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml > (791 B at 343 B/s) > Downloading from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/hadoop-build-tools-3.3.0-20181022.232020-179.jar > Downloaded from apache.snapshots.https: > https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/hadoop-build-tools-3.3.0-20181022.232020-179.jar > (0 B at 0 B/s) > {noformat} > If "hadoop-build-tools" jar doesn't exist in maven repository (for example we > try to build new version locally before repository will be created ) we can't > build some module: > For example run: > cd hadoop-common-project/ > mvn test > Then we will get the following output: > {noformat} > [ERROR] Failed to execute goal > 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) > on project hadoop-annotations: Execution default of goal > org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed: > Plugin org.apache.maven.plugins:maven-remote-resources-plugin:1.5 or one of > its dependencies could not be resolved: Failure to find > org.apache.hadoop:hadoop-build-tools:jar:3.2.0 in > https://repo.maven.apache.org/maven2 was cached in the local repository, > resolution will not be reattempted until the update interval of central has > elapsed or updates are forced -> [Help 1] > {noformat} > Therefore, we need to limit execution of the Remote Resources Plugin to > the root directory in which the build was run. > To accomplish this, we can use the "runOnlyAtExecutionRoot" parameter > from the Maven documentation: > http://maven.apache.org/plugins/maven-remote-resources-plugin/usage.html -- This message was sent by Atlassian JIRA (v7.6.14#76016)
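As a sketch of the proposed fix, the plugin configuration from the issue description would gain the plugin's documented `runOnlyAtExecutionRoot` parameter, which restricts bundle processing to the directory the build was started from (the exact placement within hadoop-project/pom.xml is assumed here, not taken from the patch):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-remote-resources-plugin</artifactId>
  <version>${maven-remote-resources-plugin.version}</version>
  <configuration>
    <resourceBundles>
      <resourceBundle>org.apache.hadoop:hadoop-build-tools:${hadoop.version}</resourceBundle>
    </resourceBundles>
    <!-- Run the bundle processing only in the execution root, so building a
         single module does not pull hadoop-build-tools from a remote repo. -->
    <runOnlyAtExecutionRoot>true</runOnlyAtExecutionRoot>
  </configuration>
</plugin>
```

With this flag set, `mvn test` run inside hadoop-common-project/ skips the remote-resources goal entirely, avoiding both the download and the resolution failure described above.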
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311304029 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 
0 : index - 10; } - private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{ -int index = calcBinIndex(omKeyInfo.getDataSize()); -if(index == Integer.MIN_VALUE) { - throw new IOException("File Size larger than permissible file size " - + maxFileSizeUpperBound +" bytes"); + void countFileSize(OmKeyInfo omKeyInfo) { +int index; +if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) { + index = maxBinSize - 1; +} else { + index = calculateBinIndex(omKeyInfo.getDataSize()); } upperBoundCount[index]++; } - private void populateFileCountBySizeDB() { + /** + * Populate DB with the counts of file sizes calculated + * using the dao. + * + */ + void populateFileCountBySizeDB() { for (int i = 0; i < upperBoundCount.length; i++) { long fileSizeUpperBound = (long) Math.pow(2, (10 + i)); FileCountBySize fileCountRecord = fileCountBySizeDao.findById(fileSizeUpperBound); FileCountBySize newRecord = new FileCountBySize(fileSizeUpperBound, upperBoundCount[i]); - if(fileCountRecord == null){ + if (fileCountRecord == null) { Review comment: From the logic to calculate bins, the last bin covers the total count of all the files > maxFileSizeUpperBound. But, while writing to DB, the last bin's key is written as `maxFileSizeUpperBound^2`. In this case, the last bin upper bound will be written as 2PB which is wrong. Can we change this logic to have last bin as `Integer.MAX_VALUE`? And can you verify this in the unit test for API response as well?
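The DB-key concern raised here can be sketched concretely. Assuming a hypothetical bin count (42 is illustrative, not taken from the patch) with the final bin reserved for overflow, writing that bin under `Long.MAX_VALUE` (the value the reviewers later converge on, in place of `Integer.MAX_VALUE`) keeps its key from masquerading as a real power-of-two upper bound:

```java
// Hypothetical sketch of the keying fix discussed above; parameter names
// and the bin count are illustrative, not the actual Recon implementation.
public class BinKeySketch {

    // Key for bin i: 2^(10+i) for regular bins, Long.MAX_VALUE for the
    // final overflow bin so it cannot be mistaken for a real size bound.
    static long binKey(int i, int maxBinSize) {
        if (i == maxBinSize - 1) {
            return Long.MAX_VALUE; // overflow bin: "everything larger"
        }
        return 1L << (10 + i);     // same as (long) Math.pow(2, 10 + i)
    }

    public static void main(String[] args) {
        System.out.println(binKey(0, 42));  // 1024 (first bin, 1 KB bound)
        System.out.println(binKey(41, 42)); // Long.MAX_VALUE (overflow bin)
    }
}
```

Without the clamp, the loop's `Math.pow(2, 10 + i)` would emit a key one power of two above the configured maximum, which is exactly the misleading "2 PB" bound the reviewer flags.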
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311301854 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 0 : index - 10; Review comment: Can you add a comment as to why we need to subtract this value by 10?
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311301668 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) { LOG.error("Unexpected exception while updating key data : {} {}", updatedKey, e.getMessage()); return new ImmutablePair<>(getTaskName(), false); - } finally { -populateFileCountBySizeDB(); } + populateFileCountBySizeDB(); } LOG.info("Completed a 'process' run of FileSizeCountTask."); return new ImmutablePair<>(getTaskName(), true); } /** * Calculate the bin index based on size of the Key. + * index is calculated as the number of right shifts + * needed until dataSize becomes zero. * * @param dataSize Size of the key. * @return int bin index in upperBoundCount */ - private int calcBinIndex(long dataSize) { -if(dataSize >= maxFileSizeUpperBound) { - return Integer.MIN_VALUE; -} else if (dataSize > SIZE_512_TB) { - //given the small difference in 512TB and 512TB + 1B, index for both would - //return same, to differentiate specific condition added. - return maxBinSize - 1; -} -int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2)); -if(logValue < 10){ - return 0; -} else{ - return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10; + int calculateBinIndex(long dataSize) { +int index = 0; +while(dataSize != 0) { + dataSize >>= 1; + index += 1; } +return index < 10 ? 
0 : index - 10; } - private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{ -int index = calcBinIndex(omKeyInfo.getDataSize()); -if(index == Integer.MIN_VALUE) { - throw new IOException("File Size larger than permissible file size " - + maxFileSizeUpperBound +" bytes"); + void countFileSize(OmKeyInfo omKeyInfo) { +int index; +if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) { + index = maxBinSize - 1; +} else { + index = calculateBinIndex(omKeyInfo.getDataSize()); } upperBoundCount[index]++; } - private void populateFileCountBySizeDB() { + /** + * Populate DB with the counts of file sizes calculated + * using the dao. + * + */ + void populateFileCountBySizeDB() { for (int i = 0; i < upperBoundCount.length; i++) { long fileSizeUpperBound = (long) Math.pow(2, (10 + i)); FileCountBySize fileCountRecord = fileCountBySizeDao.findById(fileSizeUpperBound); FileCountBySize newRecord = new FileCountBySize(fileSizeUpperBound, upperBoundCount[i]); - if(fileCountRecord == null){ + if (fileCountRecord == null) { fileCountBySizeDao.insert(newRecord); - } else{ + } else { fileCountBySizeDao.update(newRecord); } } } private void updateUpperBoundCount(OmKeyInfo value, String operation) throws IOException { -int binIndex = calcBinIndex(value.getDataSize()); -if(binIndex == Integer.MIN_VALUE) { +int binIndex = calculateBinIndex(value.getDataSize()); +if (binIndex == Integer.MIN_VALUE) { Review comment: This is not required as `calculateBinIndex` will never return `Integer.MIN_VALUE`
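The two review threads above concern the same helper; a dependency-free sketch (hypothetical standalone class, with the 1 PB bound and bin count taken from the diff) shows both points: the index is the bit length of the size minus 10, because the first bin already covers everything up to 1 KB (2^10 bytes), and sizes at or above the upper bound are clamped into the last bin, so `calculateBinIndex` can indeed never return `Integer.MIN_VALUE`.

```java
public class FileSizeBinSketch {
    static final long MAX_FILE_SIZE_UPPER_BOUND = 1125899906842624L; // 1 PB = 2^50
    static final int MAX_BIN_SIZE = 42; // (50 - 10) + 2, as in setMaxBinSize()

    // Index = number of right shifts until dataSize is zero, minus 10:
    // bin 0 covers every file up to 1 KB (2^10 bytes), so the first ten
    // shifts do not open a new bin.
    static int calculateBinIndex(long dataSize) {
        int index = 0;
        while (dataSize != 0) {
            dataSize >>= 1;
            index += 1;
        }
        return index < 10 ? 0 : index - 10;
    }

    // Oversized keys fall into the last (overflow) bin instead of throwing,
    // so no Integer.MIN_VALUE sentinel is needed anywhere.
    static int binFor(long dataSize) {
        return dataSize >= MAX_FILE_SIZE_UPPER_BOUND
            ? MAX_BIN_SIZE - 1
            : calculateBinIndex(dataSize);
    }

    public static void main(String[] args) {
        System.out.println(binFor(500));            // bin 0 (< 1 KB)
        System.out.println(binFor(1024));           // bin 1 (exactly 1 KB)
        System.out.println(binFor(Long.MAX_VALUE)); // bin 41 (overflow)
    }
}
```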
[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311293167 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -65,6 +64,23 @@ public FileSizeCountTask(OMMetadataManager omMetadataManager, } catch (Exception e) { LOG.error("Unable to fetch Key Table updates ", e); } +upperBoundCount = new long[getMaxBinSize()]; + } + + protected long getOneKB() { +return ONE_KB; + } + + protected long getMaxFileSizeUpperBound() { +return maxFileSizeUpperBound; + } + + protected int getMaxBinSize() { Review comment: Can we make this method to take `fileSize` as an argument?
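One way the parameterized method suggested above could look (a sketch under assumed names, not the merged change); using the bit position instead of `Math.log` also sidesteps floating-point rounding at exact powers of two:

```java
public class MaxBinSizeSketch {
    // Bin count for a given upper bound: one bin per power of two from 1 KB
    // (2^10) up to the bound, plus one extra overflow bin for files above it.
    static int getMaxBinSize(long maxFileSizeUpperBound) {
        // floor(log2(n)) without floating point
        int log2 = 63 - Long.numberOfLeadingZeros(maxFileSizeUpperBound);
        return (log2 - 10) + 2;
    }

    public static void main(String[] args) {
        System.out.println(getMaxBinSize(1125899906842624L)); // 1 PB -> 42 bins
        System.out.println(getMaxBinSize(1024L));             // 1 KB -> 2 bins
    }
}
```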
[GitHub] [hadoop] jojochuang commented on a change in pull request #919: HADOOP-16158. DistCp to support checksum validation when copy blocks in parallel
jojochuang commented on a change in pull request #919: HADOOP-16158. DistCp to support checksum validation when copy blocks in parallel URL: https://github.com/apache/hadoop/pull/919#discussion_r311298021 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java ## @@ -583,6 +583,63 @@ public static boolean checksumsAreEqual(FileSystem sourceFS, Path source, sourceChecksum.equals(targetChecksum)); } + /** + * Utility to compare file lengths and checksums for source and target. + * + * @param sourceFS FileSystem for the source path. + * @param source The source path. + * @param sourceChecksum The checksum of the source file. If it is null we + * still need to retrieve it through sourceFS. + * @param targetFS FileSystem for the target path. + * @param target The target path. + * @param skipCrc The flag to indicate whether to skip checksums. + * @throws IOException if there's a mismatch in file lengths or checksums. + */ + public static void compareFileLengthsAndChecksums( + FileSystem sourceFS, Path source, FileChecksum sourceChecksum, + FileSystem targetFS, Path target, boolean skipCrc) throws IOException { +long srcLen = sourceFS.getFileStatus(source).getLen(); +long tgtLen = targetFS.getFileStatus(target).getLen(); +if (srcLen != tgtLen) { + throw new IOException( + "Mismatch in length of source:" + source + " (" + srcLen + + ") and target:" + target + " (" + tgtLen + ")"); +} + +//At this point, src & dest lengths are same. 
if length==0, we skip checksum +if ((srcLen != 0) && (!skipCrc)) { + if (!checksumsAreEqual(sourceFS, source, sourceChecksum, + targetFS, target)) { +StringBuilder errorMessage = +new StringBuilder("Checksum mismatch between ") +.append(source).append(" and ").append(target).append("."); +boolean addSkipHint = false; +String srcScheme = sourceFS.getScheme(); +String targetScheme = targetFS.getScheme(); +if (!srcScheme.equals(targetScheme) +&& !(srcScheme.contains("hdfs") && targetScheme.contains("hdfs"))) { + // the filesystems are different and they aren't both hdfs connectors + errorMessage.append("Source and destination filesystems are of" + + " different types\n") + .append("Their checksum algorithms may be incompatible"); + addSkipHint = true; +} else if (sourceFS.getFileStatus(source).getBlockSize() != +targetFS.getFileStatus(target).getBlockSize()) { + errorMessage.append(" Source and target differ in block-size.\n") + .append(" Use -pb to preserve block-sizes during copy."); + addSkipHint = true; +} +if (addSkipHint) { + errorMessage.append(" You can skip checksum-checks altogether " + + " with -skipcrccheck.\n") Review comment: Take a look at HDFS-13056. It is now possible to verify file level checksum via COMPOSITE_CRC (-Ddfs.checksum.combine.mode=COMPOSITE_CRC) when block size are different or when using different file systems.
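The control flow of the utility under review reduces to a dependency-free sketch (names and the unchecked exception are simplifications — the real `compareFileLengthsAndChecksums` throws `IOException` and takes `FileSystem`/`Path` arguments): lengths are compared first, and the checksum comparison is skipped for zero-length files or when `skipCrc` is set.

```java
public class LengthThenChecksumSketch {
    static void compare(long srcLen, long tgtLen, String srcSum, String tgtSum,
                        boolean skipCrc) {
        if (srcLen != tgtLen) {
            throw new IllegalStateException("Mismatch in length: source ("
                + srcLen + ") vs target (" + tgtLen + ")");
        }
        // Lengths match from here on; a zero-length file has nothing to checksum.
        if (srcLen != 0 && !skipCrc && !srcSum.equals(tgtSum)) {
            throw new IllegalStateException("Checksum mismatch");
        }
    }

    public static void main(String[] args) {
        compare(0, 0, "a", "b", false); // empty files: checksum step skipped
        compare(5, 5, "x", "y", true);  // skipCrc set: checksum step skipped
        System.out.println("ok");
    }
}
```

The review's pointer to HDFS-13056 (COMPOSITE_CRC) concerns only the checksum step; the length comparison above is unaffected by checksum mode.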
[GitHub] [hadoop] jojochuang commented on a change in pull request #919: HADOOP-16158. DistCp to support checksum validation when copy blocks in parallel
jojochuang commented on a change in pull request #919: HADOOP-16158. DistCp to support checksum validation when copy blocks in parallel URL: https://github.com/apache/hadoop/pull/919#discussion_r311296835 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java ## @@ -583,6 +583,63 @@ public static boolean checksumsAreEqual(FileSystem sourceFS, Path source, sourceChecksum.equals(targetChecksum)); } + /** + * Utility to compare file lengths and checksums for source and target. + * + * @param sourceFS FileSystem for the source path. + * @param source The source path. + * @param sourceChecksum The checksum of the source file. If it is null we + * still need to retrieve it through sourceFS. + * @param targetFS FileSystem for the target path. + * @param target The target path. + * @param skipCrc The flag to indicate whether to skip checksums. + * @throws IOException if there's a mismatch in file lengths or checksums. + */ + public static void compareFileLengthsAndChecksums( + FileSystem sourceFS, Path source, FileChecksum sourceChecksum, + FileSystem targetFS, Path target, boolean skipCrc) throws IOException { +long srcLen = sourceFS.getFileStatus(source).getLen(); +long tgtLen = targetFS.getFileStatus(target).getLen(); +if (srcLen != tgtLen) { + throw new IOException( + "Mismatch in length of source:" + source + " (" + srcLen + + ") and target:" + target + " (" + tgtLen + ")"); +} + +//At this point, src & dest lengths are same. 
if length==0, we skip checksum +if ((srcLen != 0) && (!skipCrc)) { + if (!checksumsAreEqual(sourceFS, source, sourceChecksum, + targetFS, target)) { +StringBuilder errorMessage = +new StringBuilder("Checksum mismatch between ") +.append(source).append(" and ").append(target).append("."); +boolean addSkipHint = false; +String srcScheme = sourceFS.getScheme(); +String targetScheme = targetFS.getScheme(); +if (!srcScheme.equals(targetScheme) +&& !(srcScheme.contains("hdfs") && targetScheme.contains("hdfs"))) { Review comment: is this line redundant? we want to log an error when both file system are different. Doesn't matter if src is hdfs or not.
[GitHub] [hadoop] xiaoyuyao opened a new pull request #1239: HDDS-1907. TestOzoneRpcClientWithRatis is failing with ACL errors. Co…
xiaoyuyao opened a new pull request #1239: HDDS-1907. TestOzoneRpcClientWithRatis is failing with ACL errors. Co… URL: https://github.com/apache/hadoop/pull/1239
[GitHub] [hadoop] adoroszlai commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
adoroszlai commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518859394 @bharatviswa504 please review when you have some time
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1234: HDDS-1917. Ignore failing test-cases in TestSecureOzoneRpcClient.
xiaoyuyao commented on a change in pull request #1234: HDDS-1917. Ignore failing test-cases in TestSecureOzoneRpcClient. URL: https://github.com/apache/hadoop/pull/1234#discussion_r311293111 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java ## @@ -2280,6 +2280,7 @@ public void testNativeAclsForVolume() throws Exception { validateOzoneAccessAcl(ozObj); } + @Ignore("This will be fixed when HA support is added to acl operations") Review comment: Instead of Ignore for all subclasses, Can we just ignore these ACL related test cases for Ratis/HA like below at the begin of TestOzoneRpcClientAbstract#testNativeAclsForKey/Volume/Bucket/Prefix? This way, we still have all the coverage of non-HA cases for ACL, which is the main feature of ozone-0.4.1. {code} assumeFalse("Remove this once ACL HA is supported", getClass().equals(TestOzoneRpcClientWithRatis.class)); {code}
[GitHub] [hadoop] adoroszlai commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
adoroszlai commented on issue #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518858894 /label ozone
[GitHub] [hadoop] adoroszlai opened a new pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
adoroszlai opened a new pull request #1238: HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky URL: https://github.com/apache/hadoop/pull/1238 ## What changes were proposed in this pull request? Currently `testDoubleBuffer` waits for a specific transaction count and checks `lastAppliedIndex`, which is updated last in `flushTransactions` on another thread. I guess this may be the cause of flakiness. By swapping the order in the test, we ensure that when the wait is over, all operations (transaction count increase, etc.) are done. https://issues.apache.org/jira/browse/HDDS-1921 ## How was this patch tested? Ran `TestOzoneManagerDoubleBufferWithOMResponse` a few times.
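The race described above is an ordering problem. The sketch below (hypothetical names, not the actual Ozone classes or the exact patch, which reorders the test side) illustrates the general invariant such a fix relies on: the value a waiter checks after waking must be written before the counter the waiter waits on is published.

```java
import java.util.concurrent.atomic.AtomicLong;

public class DoubleBufferOrderingSketch {
    volatile long lastAppliedIndex;
    final AtomicLong flushedTransactionCount = new AtomicLong();

    // Called from the flush thread: write the index first, then publish the
    // count. The volatile write plus the atomic add give any waiter that
    // observes the new count a happens-before edge to the index update.
    void flushTransactions(long newIndex, int batchSize) {
        lastAppliedIndex = newIndex;
        flushedTransactionCount.addAndGet(batchSize);
    }

    public static void main(String[] args) {
        DoubleBufferOrderingSketch s = new DoubleBufferOrderingSketch();
        s.flushTransactions(7L, 3);
        System.out.println(s.flushedTransactionCount.get() + " " + s.lastAppliedIndex);
    }
}
```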
[jira] [Commented] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
[ https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901502#comment-16901502 ] Hudson commented on HADOOP-16457: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17050 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17050/]) HADOOP-16457. Fixed Kerberos activation in ServiceAuthorizationManager. (eyang: rev 22430c10e2c41d7b5e4f0457eedaf5395b2b3c84) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestServiceAuthorization.java > Hadoop does not work with Kerberos config in hdfs-site.xml for simple security > -- > > Key: HADOOP-16457 > URL: https://issues.apache.org/jira/browse/HADOOP-16457 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Eric Yang >Assignee: Prabhu Joseph >Priority: Minor > Fix For: 3.3.0 > > Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch > > > When http filter initializers is setup to use StaticUserWebFilter, AuthFilter > is still setup. This prevents datanode to talk to namenode. 
> Error message in namenode logs: > {code} > 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter > initializers set : > org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer > 2019-07-24 16:06:26,212 WARN > SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: > Authorization failed for hdfs (auth:SIMPLE) for protocol=interface > org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only > accessible by dn/eyang-5.openstacklo...@example.com > {code} > Errors in datanode log: > {code} > 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000 > {code} > The logic in HADOOP-16354 always added AuthFilter regardless security is > enabled or not. This is incorrect. When simple security is chosen and using > StaticUserWebFilter. AutheFilter check should not be required for datanode > to communicate with namenode. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster
shwetayakkali commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster URL: https://github.com/apache/hadoop/pull/1146#discussion_r311286958 ## File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java ## @@ -0,0 +1,254 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.ozone.recon.tasks; + +import com.google.inject.Inject; +import org.apache.commons.lang3.tuple.ImmutablePair; +import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.ozone.om.OMMetadataManager; +import org.apache.hadoop.ozone.om.helpers.OmKeyInfo; +import org.apache.hadoop.utils.db.Table; +import org.apache.hadoop.utils.db.TableIterator; +import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao; +import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize; +import org.jooq.Configuration; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Iterator; +import java.util.List; + +/** + * Class to iterate over the OM DB and store the counts of existing/new + * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon + * fileSize DB. + */ +public class FileSizeCountTask extends ReconDBUpdateTask { + private static final Logger LOG = + LoggerFactory.getLogger(FileSizeCountTask.class); + + private int maxBinSize; + private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB + private long[] upperBoundCount = new long[maxBinSize]; + private long ONE_KB = 1024L; + private Collection tables = new ArrayList<>(); + private FileCountBySizeDao fileCountBySizeDao; + + @Inject + public FileSizeCountTask(OMMetadataManager omMetadataManager, + Configuration sqlConfiguration) { +super("FileSizeCountTask"); +try { + tables.add(omMetadataManager.getKeyTable().getName()); + fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration); +} catch (Exception e) { + LOG.error("Unable to fetch Key Table updates ", e); +} + } + + protected long getOneKB() { +return ONE_KB; + } + + protected long getMaxFileSizeUpperBound() { +return maxFileSizeUpperBound; + } + + protected int getMaxBinSize() { +return maxBinSize; + } + + /** + * Read the Keys from OM snapshot DB and calculate the upper bound of + * File 
Size it belongs to. + * + * @param omMetadataManager OM Metadata instance. + * @return Pair + */ + @Override + public Pair reprocess(OMMetadataManager omMetadataManager) { +LOG.info("Starting a 'reprocess' run of FileSizeCountTask."); + +fetchUpperBoundCount("reprocess"); + +Table omKeyInfoTable = omMetadataManager.getKeyTable(); +try (TableIterator> +keyIter = omKeyInfoTable.iterator()) { + while (keyIter.hasNext()) { +Table.KeyValue kv = keyIter.next(); +countFileSize(kv.getValue()); + } +} catch (IOException ioEx) { + LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx); + return new ImmutablePair<>(getTaskName(), false); +} finally { + populateFileCountBySizeDB(); +} + +LOG.info("Completed a 'reprocess' run of FileSizeCountTask."); +return new ImmutablePair<>(getTaskName(), true); + } + + void setMaxBinSize() { +maxBinSize = (int)(long) (Math.log(getMaxFileSizeUpperBound()) +/Math.log(2)) - 10; +maxBinSize += 2; // extra bin to add files > 1PB. + } + + void fetchUpperBoundCount(String type) { +setMaxBinSize(); +if (type.equals("process")) { + //update array with file size count from DB + List resultSet = fileCountBySizeDao.findAll(); + int index = 0; + if (resultSet != null) { +for (FileCountBySize row : resultSet) { + upperBoundCount[index] = row.getCount(); + index++; +} + } +} else { + upperBoundCount = new long[getMaxBinSize()];//initialize array +} + } + + @Override + protected Collection getTaskTables() { +return tables; + } + + /** + * Read the Keys from update events and u
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1219: HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl.
bharatviswa504 commented on a change in pull request #1219: HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl. URL: https://github.com/apache/hadoop/pull/1219#discussion_r311281687 ## File path: hadoop-hdds/docs/content/shell/BucketCommands.md ## @@ -26,7 +26,6 @@ Ozone shell supports the following bucket commands. * [delete](#delete) * [info](#info) * [list](#list) - * [update](#update) Review comment: Ya sure @xiaoyuyao. I will open a new Jira to address this too. We can let this change in to fix only remove CLI part.
[GitHub] [hadoop] smengcl opened a new pull request #1237: HDDS-1920. Place ozone.om.address config key default value in ozone-site.xml
smengcl opened a new pull request #1237: HDDS-1920. Place ozone.om.address config key default value in ozone-site.xml URL: https://github.com/apache/hadoop/pull/1237
[jira] [Updated] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
[ https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated HADOOP-16457: --- Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) I just committed this to trunk. Thank you [~Prabhu Joseph]. > Hadoop does not work with Kerberos config in hdfs-site.xml for simple security > -- > > Key: HADOOP-16457 > URL: https://issues.apache.org/jira/browse/HADOOP-16457 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Eric Yang >Assignee: Prabhu Joseph >Priority: Minor > Fix For: 3.3.0 > > Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch > > > When http filter initializers is setup to use StaticUserWebFilter, AuthFilter > is still setup. This prevents datanode to talk to namenode. > Error message in namenode logs: > {code} > 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter > initializers set : > org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer > 2019-07-24 16:06:26,212 WARN > SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: > Authorization failed for hdfs (auth:SIMPLE) for protocol=interface > org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only > accessible by dn/eyang-5.openstacklo...@example.com > {code} > Errors in datanode log: > {code} > 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000 > {code} > The logic in HADOOP-16354 always added AuthFilter regardless security is > enabled or not. This is incorrect. When simple security is chosen and using > StaticUserWebFilter. AutheFilter check should not be required for datanode > to communicate with namenode. 
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1219: HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl.
xiaoyuyao commented on a change in pull request #1219: HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl. URL: https://github.com/apache/hadoop/pull/1219#discussion_r311272554 ## File path: hadoop-hdds/docs/content/shell/BucketCommands.md ## @@ -26,7 +26,6 @@ Ozone shell supports the following bucket commands. * [delete](#delete) * [info](#info) * [list](#list) - * [update](#update) Review comment: Should we block the acl update part of BucketManagerImpl#setBucketProperty() as they now require a different permission (WRITE_ACL instead of WRITE)? I see there are few UT assumes setBucketProperty should be able to change acl without differentiation. We can fix his in follow up JIRA and use the current one just to remove the CLI.
[jira] [Commented] (HADOOP-16275) Upgrade Mockito to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-16275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901420#comment-16901420 ] Hudson commented on HADOOP-16275: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17049 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17049/]) HADOOP-16275. Upgrade Mockito to the latest version. Contributed by (weichiu: rev b77761b0e37703beb2c033029e4c0d5ad1dce794) * (edit) hadoop-project/pom.xml * (edit) LICENSE.txt > Upgrade Mockito to the latest version > - > > Key: HADOOP-16275 > URL: https://issues.apache.org/jira/browse/HADOOP-16275 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Labels: newbie > Fix For: 3.3.0 > > Attachments: HADOOP-16275.001.patch, HADOOP-16275.002.patch > > > HADOOP-14178 upgrade Mockito to 2.23.4. > Now the latest version is 2.27.0.
[jira] [Updated] (HADOOP-16275) Upgrade Mockito to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-16275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16275: - Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) Thanks all!
[GitHub] [hadoop] xiaoyuyao merged pull request #1228: HDDS-1901. Fix Ozone HTTP WebConsole Authentication. Contributed by X…
xiaoyuyao merged pull request #1228: HDDS-1901. Fix Ozone HTTP WebConsole Authentication. Contributed by X… URL: https://github.com/apache/hadoop/pull/1228
[GitHub] [hadoop] xiaoyuyao commented on issue #1228: HDDS-1901. Fix Ozone HTTP WebConsole Authentication. Contributed by X…
xiaoyuyao commented on issue #1228: HDDS-1901. Fix Ozone HTTP WebConsole Authentication. Contributed by X… URL: https://github.com/apache/hadoop/pull/1228#issuecomment-518803112 Thanks @arp7 for the review. I will merge this shortly.
[GitHub] [hadoop] mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#discussion_r311191416 ## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java ## @@ -674,30 +674,54 @@ public void notifyIndexUpdate(long term, long index) { if (cmdType == Type.WriteChunk || cmdType ==Type.PutSmallFile) { builder.setCreateContainerSet(createContainerSet); } + CompletableFuture applyTransactionFuture = + new CompletableFuture<>(); // Ensure the command gets executed in a separate thread than // stateMachineUpdater thread which is calling applyTransaction here. - CompletableFuture future = CompletableFuture - .supplyAsync(() -> runCommand(requestProto, builder.build()), + CompletableFuture future = + CompletableFuture.supplyAsync( + () -> runCommandGetResponse(requestProto, builder.build()), getCommandExecutor(requestProto)); - - future.thenAccept(m -> { + future.thenApply(r -> { if (trx.getServerRole() == RaftPeerRole.LEADER) { long startTime = (long) trx.getStateMachineContext(); metrics.incPipelineLatency(cmdType, Time.monotonicNowNanos() - startTime); } - -final Long previous = -applyTransactionCompletionMap -.put(index, trx.getLogEntry().getTerm()); -Preconditions.checkState(previous == null); -if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) { - metrics.incNumBytesCommittedCount( +if (r.getResult() != ContainerProtos.Result.SUCCESS) { + StorageContainerException sce = + new StorageContainerException(r.getMessage(), r.getResult()); + LOG.error(gid + ": ApplyTransaction failed: cmd " + r.getCmdType() + + " logIndex " + index + " Error message: " + r.getMessage() + + " Container Result: " + r.getResult()); + metrics.incNumApplyTransactionsFails(); + ratisServer.handleApplyTransactionFailure(gid, trx.getServerRole()); + // Since the applyTransaction now is completed exceptionally, + // before any further snapshot is taken , the exception will be + // caught in stateMachineUpdater in Ratis and ratis server will + // shutdown. + applyTransactionFuture.completeExceptionally(sce); +} else { + metrics.incNumBytesWrittenCount( requestProto.getWriteChunk().getChunkData().getLen()); + LOG.debug(gid + ": ApplyTransaction completed: cmd " + r.getCmdType() Review comment: If this is a success, then " Error message: " + r.getMessage() will not be the right thing to print here.
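The failure-propagation pattern under discussion, reduced to a self-contained sketch. The request/response types are simplified stand-ins for the Ozone protos (a boolean models r.getResult() == SUCCESS): the command runs asynchronously on an executor, and a failed result completes a separate future exceptionally so the caller (the stateMachineUpdater in the real code) observes the error instead of it being lost.

```java
import java.util.concurrent.CompletableFuture;

class ApplyTxSketch {
  // Simplified stand-in for applyTransaction: 'success' models whether the
  // container command returned ContainerProtos.Result.SUCCESS.
  static CompletableFuture<Long> applyTransaction(boolean success, long index) {
    CompletableFuture<Long> applyTransactionFuture = new CompletableFuture<>();
    // Run the command off the caller's thread, as in the patch.
    CompletableFuture.supplyAsync(() -> success).thenApply(ok -> {
      if (!ok) {
        // Complete exceptionally so the failure is not swallowed; in the
        // real code this is a StorageContainerException built from the
        // response, and Ratis shuts the server down before snapshotting.
        applyTransactionFuture.completeExceptionally(
            new IllegalStateException("ApplyTransaction failed at logIndex " + index));
      } else {
        applyTransactionFuture.complete(index);
      }
      return ok;
    });
    return applyTransactionFuture;
  }
}
```

Returning the separately-completed future (rather than the supplyAsync chain itself) is what lets the patch decide per-response whether the transaction counts as applied or as a fatal failure.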
[GitHub] [hadoop] mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#discussion_r311191008 ## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java ## @@ -674,30 +674,54 @@ public void notifyIndexUpdate(long term, long index) { if (cmdType == Type.WriteChunk || cmdType ==Type.PutSmallFile) { builder.setCreateContainerSet(createContainerSet); } + CompletableFuture applyTransactionFuture = + new CompletableFuture<>(); // Ensure the command gets executed in a separate thread than // stateMachineUpdater thread which is calling applyTransaction here. - CompletableFuture future = CompletableFuture - .supplyAsync(() -> runCommand(requestProto, builder.build()), + CompletableFuture future = + CompletableFuture.supplyAsync( + () -> runCommandGetResponse(requestProto, builder.build()), getCommandExecutor(requestProto)); - - future.thenAccept(m -> { + future.thenApply(r -> { if (trx.getServerRole() == RaftPeerRole.LEADER) { long startTime = (long) trx.getStateMachineContext(); metrics.incPipelineLatency(cmdType, Time.monotonicNowNanos() - startTime); } - -final Long previous = -applyTransactionCompletionMap -.put(index, trx.getLogEntry().getTerm()); -Preconditions.checkState(previous == null); -if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) { - metrics.incNumBytesCommittedCount( +if (r.getResult() != ContainerProtos.Result.SUCCESS) { + StorageContainerException sce = + new StorageContainerException(r.getMessage(), r.getResult()); + LOG.error(gid + ": ApplyTransaction failed: cmd " + r.getCmdType() + + " logIndex " + index + " Error message: " + r.getMessage() + + " Container Result: " + r.getResult()); + metrics.incNumApplyTransactionsFails(); + ratisServer.handleApplyTransactionFailure(gid, trx.getServerRole()); + // Since the applyTransaction now is completed exceptionally, + // before any further snapshot is taken , the exception will be + // caught in stateMachineUpdater in Ratis and ratis server will + // shutdown. + applyTransactionFuture.completeExceptionally(sce); +} else { + metrics.incNumBytesWrittenCount( requestProto.getWriteChunk().getChunkData().getLen()); + LOG.debug(gid + ": ApplyTransaction completed: cmd " + r.getCmdType() Review comment: Same here.
[GitHub] [hadoop] mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#discussion_r311181834 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java ## @@ -23,6 +23,11 @@ import org.apache.hadoop.hdds.conf.OzoneConfiguration; import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos; import org.apache.hadoop.hdds.protocol.proto.HddsProtos; +import org.apache.hadoop.hdds.scm.XceiverClientManager; +import org.apache.hadoop.hdds.scm.XceiverClientSpi; +import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException; Review comment: unused.
[GitHub] [hadoop] mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#discussion_r311179939 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java ## @@ -270,4 +279,73 @@ public void testUnhealthyContainer() throws Exception { Assert.assertEquals(ContainerProtos.Result.CONTAINER_UNHEALTHY, dispatcher.dispatch(request.build(), null).getResult()); } + + @Test + public void testAppyTransactionFailure() throws Exception { +OzoneOutputStream key = +objectStore.getVolume(volumeName).getBucket(bucketName) +.createKey("ratis", 1024, ReplicationType.RATIS, +ReplicationFactor.ONE, new HashMap<>()); +// First write and flush creates a container in the datanode +key.write("ratis".getBytes()); +key.flush(); +key.write("ratis".getBytes()); + +//get the name of a valid container +OmKeyArgs keyArgs = new OmKeyArgs.Builder().setVolumeName(volumeName). +setBucketName(bucketName).setType(HddsProtos.ReplicationType.RATIS) +.setFactor(HddsProtos.ReplicationFactor.ONE).setKeyName("ratis") +.build(); +KeyOutputStream groupOutputStream = (KeyOutputStream) key.getOutputStream(); +List locationInfoList = +groupOutputStream.getLocationInfoList(); +Assert.assertEquals(1, locationInfoList.size()); +OmKeyLocationInfo omKeyLocationInfo = locationInfoList.get(0); +ContainerData containerData = +cluster.getHddsDatanodes().get(0).getDatanodeStateMachine() +.getContainer().getContainerSet() +.getContainer(omKeyLocationInfo.getContainerID()) +.getContainerData(); +Assert.assertTrue(containerData instanceof KeyValueContainerData); +KeyValueContainerData keyValueContainerData = +(KeyValueContainerData) containerData; +key.close(); + +long containerID = omKeyLocationInfo.getContainerID(); +// delete the container db file +FileUtil.fullyDelete(new File(keyValueContainerData.getContainerPath())); +Pipeline pipeline = cluster.getStorageContainerLocationClient() +.getContainerWithPipeline(containerID).getPipeline(); +XceiverClientSpi client = xceiverClientManager.acquireClient(pipeline); +ContainerProtos.ContainerCommandRequestProto.Builder request = +ContainerProtos.ContainerCommandRequestProto.newBuilder(); +request.setDatanodeUuid(pipeline.getFirstNode().getUuidString()); +request.setCmdType(ContainerProtos.Type.CloseContainer); +request.setContainerID(containerID); +request.setCloseContainer( +ContainerProtos.CloseContainerRequestProto.getDefaultInstance()); +// close container transaction will fail over Ratis and will cause the raft +try { + client.sendCommand(request.build()); + Assert.fail("Expected exception not thrown"); +} catch (IOException e) { +} Review comment: What exception are we expecting here?
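One way to address the reviewer's question is to catch the expected type explicitly and assert on it, so an unrelated IOException cannot silently satisfy the test. The sendCommand stand-in below is hypothetical; the real call is client.sendCommand(request.build()), which presumably surfaces a StorageContainerException (an IOException subclass) when the container state is broken.

```java
import java.io.IOException;

class ExpectedExceptionSketch {
  // Hypothetical stand-in for client.sendCommand(...) failing because the
  // container's on-disk state was deleted before the CloseContainer command.
  static void sendCommand() throws IOException {
    throw new IOException("ContainerID does not exist");
  }

  // Fails if no exception is thrown, and returns the message so the caller
  // can assert on the specific failure, not just on "some IOException".
  static String intercept() {
    try {
      sendCommand();
      throw new AssertionError("Expected exception not thrown");
    } catch (IOException e) {
      return e.getMessage();
    }
  }
}
```

In the actual test, asserting on the exception type (or message) inside the catch block would document exactly which failure the empty catch (IOException e) {} is meant to accept.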
[GitHub] [hadoop] mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart.
mukul1987 commented on a change in pull request #1226: HDDS-1610. applyTransaction failure should not be lost on restart. URL: https://github.com/apache/hadoop/pull/1226#discussion_r311189155 ## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java ## @@ -674,30 +674,54 @@ public void notifyIndexUpdate(long term, long index) { if (cmdType == Type.WriteChunk || cmdType ==Type.PutSmallFile) { builder.setCreateContainerSet(createContainerSet); } + CompletableFuture applyTransactionFuture = + new CompletableFuture<>(); // Ensure the command gets executed in a separate thread than // stateMachineUpdater thread which is calling applyTransaction here. - CompletableFuture future = CompletableFuture - .supplyAsync(() -> runCommand(requestProto, builder.build()), + CompletableFuture future = + CompletableFuture.supplyAsync( + () -> runCommandGetResponse(requestProto, builder.build()), Review comment: Let's rename runCommandGetResponse and remove runCommand, as all the existing callers of the earlier runCommand can be removed.