[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
[ https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-13853: Attachment: HDFS-13853-HDFS-13891-02.patch
> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Ayush Saxena
> Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch, HDFS-13853-HDFS-13891-02.patch
>
> {code:java}
> // Create a new entry
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
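The snippet above is the root of the bug: the update path builds a brand-new entry from only the destinations supplied on the command line, so destinations and attributes already stored on the existing mount entry are silently dropped. A minimal, self-contained sketch of a merge-style update is shown below; the helper name is hypothetical and plain `java.util` maps stand in for the real `MountTable` record:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: start from the existing entry's destination map and
// merge the namespaces given on the command line into it, instead of
// constructing a fresh map. Destinations the admin did not mention survive.
public class MountEntryUpdateSketch {
    public static Map<String, String> merge(Map<String, String> existing,
                                            String[] nss, String dest) {
        // Copy the existing destinations rather than starting empty.
        Map<String, String> destMap = new LinkedHashMap<>(existing);
        for (String ns : nss) {
            destMap.put(ns, dest);  // overwrite only the namespaces given
        }
        return destMap;
    }

    public static void main(String[] args) {
        Map<String, String> existing = new LinkedHashMap<>();
        existing.put("ns0", "/old");
        existing.put("ns1", "/keep");
        Map<String, String> merged = merge(existing, new String[]{"ns0"}, "/new");
        // ns0 is updated while ns1 is preserved instead of being dropped
        System.out.println(merged);  // prints {ns0=/new, ns1=/keep}
    }
}
```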
[jira] [Commented] (HDFS-13912) RBF: Add methods to RouterAdmin to set order, read only, and chown
[ https://issues.apache.org/jira/browse/HDFS-13912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805667#comment-16805667 ] Hadoop QA commented on HDFS-13912:
--

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 37s | trunk passed |
| +1 | compile | 0m 27s | trunk passed |
| +1 | checkstyle | 0m 17s | trunk passed |
| +1 | mvnsite | 0m 31s | trunk passed |
| +1 | shadedclient | 10m 25s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 50s | trunk passed |
| +1 | javadoc | 0m 30s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 24s | the patch passed |
| +1 | compile | 0m 22s | the patch passed |
| +1 | javac | 0m 22s | the patch passed |
| -0 | checkstyle | 0m 15s | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) |
| +1 | mvnsite | 0m 27s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 54s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 55s | the patch passed |
| +1 | javadoc | 0m 32s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 15m 42s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 62m 38s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13912 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940009/HDFS-13912-02.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 54722398d16e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d9e9e56 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/26550/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26550/testReport/ |
| Max. process+thread count | 1364 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26550/console |
| Powered by | Apache Yetus |
[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
[ https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-13853: Assignee: Ayush Saxena (was: Dibyendu Karmakar) Status: Patch Available (was: Open)
Uploaded a patch covering the use case and the discussion at HDFS-13912. Please review.
> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Ayush Saxena
> Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch
>
> {code:java}
> // Create a new entry
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
[ https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-13853: Attachment: HDFS-13853-HDFS-13891-01.patch
> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch
>
> {code:java}
> // Create a new entry
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer
[ https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805622#comment-16805622 ] Anoop Sam John commented on HDFS-14355:
---
Please give me a day, Uma; I will have a look. I was halfway through reviewing an older version of the patch, and it seems a new one came in after another subtask was committed.
> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: caching, datanode
> Reporter: Feilong He
> Assignee: Feilong He
> Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, HDFS-14355.008.patch
>
> This task is to implement caching to persistent memory using pure {{java.nio.MappedByteBuffer}}, which could be useful in case native support isn't available or convenient in some environments or platforms.
[jira] [Work logged] (HDDS-1357) ozone s3 shell command has confusing subcommands
[ https://issues.apache.org/jira/browse/HDDS-1357?focusedWorklogId=220839&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220839 ] ASF GitHub Bot logged work on HDDS-1357: Author: ASF GitHub Bot Created on: 30/Mar/19 04:39 Start Date: 30/Mar/19 04:39 Worklog Time Spent: 10m Work Description: ajayydv commented on issue #663: HDDS-1357. ozone s3 shell command has confusing subcommands URL: https://github.com/apache/hadoop/pull/663#issuecomment-478206593
+1 with checkstyle and 1 NIT addressed.
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Issue Time Tracking --- Worklog Id: (was: 220839) Time Spent: 40m (was: 0.5h)
> ozone s3 shell command has confusing subcommands
>
> Key: HDDS-1357
> URL: https://issues.apache.org/jira/browse/HDDS-1357
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Let's check the potential subcommands of ozone sh:
> {code}
> [hadoop@om-0 keytabs]$ ozone sh
> Incomplete command
> Usage: ozone sh [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for Ozone object store
>   --verbose      More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help     Show this help message and exit.
>   -V, --version  Print version information and exit.
> Commands:
>   volume, vol  Volume specific operations
>   bucket       Bucket specific operations
>   key          Key specific operations
>   token        Token specific operations
> {code}
> This is fine, but for ozone s3:
> {code}
> [hadoop@om-0 keytabs]$ ozone s3
> Incomplete command
> Usage: ozone s3 [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for S3 specific operations
>   --verbose      More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help     Show this help message and exit.
>   -V, --version  Print version information and exit.
> Commands:
>   getsecret    Returns s3 secret for current user
>   path         Returns the ozone path for S3Bucket
>   volume, vol  Volume specific operations
>   bucket       Bucket specific operations
>   key          Key specific operations
>   token        Token specific operations
> {code}
> This list should contain only the getsecret/path commands and not the volume/bucket/key subcommands.
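The listings above make the intended behavior concrete: `ozone sh` should expose the generic object-store subcommands, while `ozone s3` should expose only its own. The following is a dependency-free illustration of that layout (not the actual picocli wiring Ozone uses; class and method names are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the desired command layout: each shell registers
// only the subcommands that belong to it, so "s3" no longer inherits the
// generic volume/bucket/key/token operations.
public class ShellLayoutSketch {
    static Map<String, String> subcommands(String shell) {
        Map<String, String> cmds = new LinkedHashMap<>();
        if (shell.equals("sh")) {
            cmds.put("volume", "Volume specific operations");
            cmds.put("bucket", "Bucket specific operations");
            cmds.put("key", "Key specific operations");
            cmds.put("token", "Token specific operations");
        } else if (shell.equals("s3")) {
            // only the S3-specific commands are registered here
            cmds.put("getsecret", "Returns s3 secret for current user");
            cmds.put("path", "Returns the ozone path for S3Bucket");
        }
        return cmds;
    }

    public static void main(String[] args) {
        System.out.println("ozone sh: " + subcommands("sh").keySet());
        System.out.println("ozone s3: " + subcommands("s3").keySet());
    }
}
```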
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220829&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220829 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270611467
## File path: hadoop-ozone/dist/src/main/compose/ozonefs/hadoopo3fs.robot ##
@@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation     Test ozone fs usage from Hdfs and Spark
+Library           OperatingSystem
+Library           String
+Resource          ../../smoketest/env-compose.robot
+Resource          ../../smoketest/commonlib.robot
+
+*** Variables ***
+${DATANODE_HOST}    datanode
+
+*** Keywords ***
+
+Test hadoop dfs
+    [arguments]    ${imagename}
Review comment: Since ${imagename} is different each time, shall we move it to the variables section? (i.e. key names will still not collide)
Issue Time Tracking --- Worklog Id: (was: 220829) Time Spent: 1h 10m (was: 1.5h)
> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193) > at > org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654) > at > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94) > at > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703) > at org.apache.hadoop.fs.FileSystem$Cache.get
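The `NoClassDefFoundError` above fires because `org.apache.hadoop.crypto.key.KeyProviderTokenIssuer` exists only in newer Hadoop versions, so an ozonefs jar that links against it cannot load under Hadoop 2.7. One common pattern behind a compatibility layer (a sketch, not the patch's actual mechanism) is to probe for the class at runtime before touching code that links to it:

```java
// Minimal sketch of runtime capability detection: check whether a class that
// only exists on newer classpaths is resolvable before loading code that
// links against it. The probed class name below is taken from the stack
// trace; the fallback branch is illustrative only.
public class CompatProbe {
    static boolean isClassAvailable(String name) {
        try {
            // initialize=false: resolve the class without running static init
            Class.forName(name, false, CompatProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String cls = "org.apache.hadoop.crypto.key.KeyProviderTokenIssuer";
        if (isClassAvailable(cls)) {
            System.out.println("Hadoop 3.x security classes present");
        } else {
            System.out.println("Falling back to a basic (2.7-compatible) adapter");
        }
    }
}
```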
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220835&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220835 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270611674
## File path: hadoop-ozone/dist/src/main/smoketest/createbucketenv.robot ##
@@ -0,0 +1,41 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation     Create bucket and volume for any other testings
+Library           OperatingSystem
+Resource          commonlib.robot
+Test Timeout      2 minute
+
+*** Variables ***
+${volume}    vol1
+${bucket}    bucket1
+
+*** Keywords ***
+Create volume
+    ${result} =    Execute    ozone sh volume create /${volume} --user hadoop --quota 100TB --root
+    Should not contain    ${result}    Failed
+    Should contain    ${result}    Creating Volume: ${volume}
+Create bucket
+    Execute    ozone sh bucket create /${volume}/${bucket}
Review comment: After this, shall we check that bucket creation succeeded by listing the bucket?
Issue Time Tracking --- Worklog Id: (was: 220835) Time Spent: 1h 20m (was: 1h 40m)
> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099) > at > org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193) > at > org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654) > at > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSys
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220837&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220837 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270612610 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java ## @@ -0,0 +1,371 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.HashMap;
+import java.util.Iterator;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenRenewer;
+
+import org.apache.commons.lang3.StringUtils;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Basic Implementation of the OzoneFileSystem calls.
+ *
+ * This is the minimal version which doesn't include any statistics.
+ *
+ * For full featured version use OzoneClientAdapterImpl.
+ */
+public class BasicOzoneClientAdapterImpl implements OzoneClientAdapter {
+
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicOzoneClientAdapterImpl.class);
+
+  private OzoneClient ozoneClient;
+  private ObjectStore objectStore;
+  private OzoneVolume volume;
+  private OzoneBucket bucket;
+  private ReplicationType replicationType;
+  private ReplicationFactor replicationFactor;
+  private boolean securityEnabled;
+
+  /**
+   * Create new OzoneClientAdapter implementation.
+   *
+   * @param volumeStr Name of the volume to use.
+   * @param bucketStr Name of the bucket to use
+   * @throws IOException In case of a problem.
+   */
+  public BasicOzoneClientAdapterImpl(String volumeStr, String bucketStr)
+      throws IOException {
+    this(createConf(), volumeStr, bucketStr);
+  }
+
+  private static OzoneConfiguration createConf() {
+    ClassLoader contextClassLoader =
+        Thread.currentThread().getContextClassLoader();
+    Thread.currentThread().setContextClassLoader(null);
+    OzoneConfiguration conf = new OzoneConfiguration();
+    Thread.currentThread().setContextClassLoader(contextClassLoader);
+    return conf;
+  }
+
+  public BasicOzoneClientAdapterImpl(OzoneConfiguration conf, String volumeStr,
+      String bucketStr) throws IOException {
+    this(null, -1, conf, volumeStr, bucketStr);
+  }
+
+  public BasicOzoneClientAdapterImpl(String omHost, int omPort,
+      Configuration hadoopConf, String volumeStr, String bucketStr)
+      throws IOException {
+
+    ClassLoader contextClassLoader =
+        Thread.currentThread().getContextClassLoader();
+    Thread.currentThread().setContextClassLoader(null);
+    OzoneConfiguration conf;
+    if (hadoopConf instanceof OzoneConfiguration) {
+      conf = (OzoneConfiguration) hadoopConf;
+    } el
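The `createConf()` shown in the diff clears the thread context class loader while constructing the `OzoneConfiguration` and restores it afterwards, so configuration loading does not pick up resources from an application class loader (for example Spark's). A self-contained sketch of that save/null/restore pattern, with a hypothetical generic helper name:

```java
import java.util.function.Supplier;

// Sketch of the pattern used by createConf() in the patch: clear the thread
// context class loader around an action, then always restore the original.
public class ContextClassLoaderSketch {
    static <T> T withNullContextClassLoader(Supplier<T> action) {
        Thread current = Thread.currentThread();
        ClassLoader saved = current.getContextClassLoader();
        current.setContextClassLoader(null);  // fall back to default lookup
        try {
            return action.get();
        } finally {
            current.setContextClassLoader(saved);  // restore unconditionally
        }
    }

    public static void main(String[] args) {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        // Inside the action the context class loader is null
        String inside = withNullContextClassLoader(
            () -> String.valueOf(Thread.currentThread().getContextClassLoader()));
        System.out.println("inside action: " + inside);  // prints "null"
        System.out.println("restored: "
            + (Thread.currentThread().getContextClassLoader() == before));
    }
}
```

The `finally` block matters: if construction throws, the caller's context class loader is still restored, which the straight-line version in the diff does not guarantee.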
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220831&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220831 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270611723
## File path: hadoop-ozone/dist/src/main/smoketest/env-compose.robot ##
@@ -13,4 +13,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-org.apache.hadoop.fs.ozone.OzoneFileSystem
+*** Settings ***
+Resource          commonlib.robot
Review comment: Shall we add a documentation line?
Issue Time Tracking --- Worklog Id: (was: 220831) Time Spent: 1.5h (was: 1h 20m)
> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more: > {code} > 2019-03-25 09:50:08 INFO StateStoreCoordinatorRef:54 - Registered > StateStoreCoordinator endpoint > Exception in thread "main" java.lang.NoClassDefFoundError: > org/apache/hadoop/crypto/key/KeyProviderTokenIssuer > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:763) > at > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) > at java.net.URLClassLoader.access$100(URLClassLoader.java:74) > at java.net.URLClassLoader$1.run(URLClassLoader.java:369) > at java.net.URLClassLoader$1.run(URLClassLoader.java:363) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:362) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:348) > at > org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134) > at > org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099) > at > org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193) > at > org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654) > at > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94) > at > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373) > at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295) > at > org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45) > at > 
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332) > at > org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223) > at > org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211) > at > org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715) > at > org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757) > at > org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724) > at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.jav
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220828&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220828 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270611057 ## File path: hadoop-ozone/dist/dev-support/bin/dist-layout-stitching ## @@ -114,6 +114,7 @@ run cp "${ROOT}/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore cp -r "${ROOT}/hadoop-hdds/docs/target/classes/docs" ./ #Copy docker compose files -run cp -p -R "${ROOT}/hadoop-ozone/dist/src/main/compose" . Review comment: Is this required for this patch? Issue Time Tracking --- Worklog Id: (was: 220828) Time Spent: 1h 10m (was: 1h) > OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security > classes > - > > Key: HDDS-1333 > URL: https://issues.apache.org/jira/browse/HDDS-1333 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > The current ozonefs compatibility layer is broken by: HDDS-1299. 
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>         at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
>         at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:348)
>         at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
>         at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
>         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>         at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
>         at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
>         at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
>         at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
>         at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
>         at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
>         at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
>         at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(Nati
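The NoClassDefFoundError above arises because a hadoop 3 type (KeyProviderTokenIssuer) is referenced while running against a hadoop 2.7 classpath. One common mitigation in compatibility layers (a hedged sketch of the general technique, not the actual HDDS-1333 patch) is to probe for a class reflectively before entering code that needs it:

```java
public class ClassProbe {
    // Returns true when the named class can be loaded on the current
    // classpath; `false` for initialize avoids running static initializers.
    static boolean isClassAvailable(String name) {
        try {
            Class.forName(name, false, ClassProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isClassAvailable("java.util.ArrayList")); // prints true on any JRE
        // On a hadoop 2.7 classpath this would print false:
        System.out.println(isClassAvailable("org.apache.hadoop.crypto.key.KeyProviderTokenIssuer"));
    }
}
```

Gating the hadoop-3-only code paths behind such a probe is one way a filesystem adapter can stay loadable on both hadoop lines.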
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220827&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220827 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270612227 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java ## @@ -0,0 +1,371 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.fs.ozone; + +import java.io.IOException; +import java.io.InputStream; +import java.net.URI; +import java.util.HashMap; +import java.util.Iterator; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.crypto.key.KeyProvider; +import org.apache.hadoop.hdds.client.ReplicationFactor; +import org.apache.hadoop.hdds.client.ReplicationType; +import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.hdds.security.x509.SecurityConfig; +import org.apache.hadoop.io.Text; +import org.apache.hadoop.ozone.OzoneConfigKeys; +import org.apache.hadoop.ozone.client.ObjectStore; +import org.apache.hadoop.ozone.client.OzoneBucket; +import org.apache.hadoop.ozone.client.OzoneClient; +import org.apache.hadoop.ozone.client.OzoneClientFactory; +import org.apache.hadoop.ozone.client.OzoneKey; +import org.apache.hadoop.ozone.client.OzoneVolume; +import org.apache.hadoop.ozone.client.io.OzoneOutputStream; +import org.apache.hadoop.ozone.security.OzoneTokenIdentifier; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.security.token.TokenRenewer; + +import org.apache.commons.lang3.StringUtils; +import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Basic Implementation of the OzoneFileSystem calls. + * + * This is the minimal version which doesn't include any statistics. + * + * For full featured version use OzoneClientAdapterImpl. 
+ */ +public class BasicOzoneClientAdapterImpl implements OzoneClientAdapter { + + static final Logger LOG = + LoggerFactory.getLogger(BasicOzoneClientAdapterImpl.class); + + private OzoneClient ozoneClient; + private ObjectStore objectStore; + private OzoneVolume volume; + private OzoneBucket bucket; + private ReplicationType replicationType; + private ReplicationFactor replicationFactor; + private boolean securityEnabled; + + /** + * Create new OzoneClientAdapter implementation. + * + * @param volumeStr Name of the volume to use. + * @param bucketStr Name of the bucket to use + * @throws IOException In case of a problem. + */ + public BasicOzoneClientAdapterImpl(String volumeStr, String bucketStr) + throws IOException { +this(createConf(), volumeStr, bucketStr); + } + + private static OzoneConfiguration createConf() { +ClassLoader contextClassLoader = +Thread.currentThread().getContextClassLoader(); +Thread.currentThread().setContextClassLoader(null); +OzoneConfiguration conf = new OzoneConfiguration(); +Thread.currentThread().setContextClassLoader(contextClassLoader); +return conf; + } + + public BasicOzoneClientAdapterImpl(OzoneConfiguration conf, String volumeStr, + String bucketStr) + throws IOException { +this(null, -1, conf, volumeStr, bucketStr); + } + + public BasicOzoneClientAdapterImpl(String omHost, int omPort, + Configuration hadoopConf, String volumeStr, String bucketStr) + throws IOException { + +ClassLoader contextClassLoader = Review comment: shall we reuse createConf by passing hadoop conf, if absent we can initialize Ozoneconf using default constructor. This is an automated message from the Apache Git
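The `createConf()` shown above saves the thread context classloader, clears it before constructing `OzoneConfiguration` (so configuration resource loading does not pick up classes from an incompatible application classpath), then restores it. The pattern in isolation, generalized with try/finally; the helper name is illustrative, not from the patch:

```java
import java.util.function.Supplier;

// Hedged sketch of the save/clear/restore context-classloader pattern
// used by createConf() in BasicOzoneClientAdapterImpl.
public class ContextClassLoaderUtil {
    public static <T> T withoutContextClassLoader(Supplier<T> work) {
        Thread t = Thread.currentThread();
        ClassLoader saved = t.getContextClassLoader();
        t.setContextClassLoader(null); // fall back to the system classloader
        try {
            return work.get();         // e.g. new OzoneConfiguration()
        } finally {
            t.setContextClassLoader(saved); // restore even if work throws
        }
    }

    public static void main(String[] args) {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        System.out.println(withoutContextClassLoader(() -> "loaded"));                  // prints loaded
        System.out.println(before == Thread.currentThread().getContextClassLoader());   // prints true
    }
}
```

The try/finally variant also addresses the reviewer's reuse suggestion: one helper can wrap both the no-arg and the Configuration-taking constructors.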
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220833 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270611099 ## File path: hadoop-ozone/dist/pom.xml ## @@ -120,6 +120,28 @@ + +maven-resources-plugin +3.1.0 + + +copy-resources +compile + + copy-resources + + + ${basedir}/target/compose + + + src/main/compose + true Review comment: Are filtering any files from compose dir? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220833) Time Spent: 1.5h (was: 1h 10m) > OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security > classes > - > > Key: HDDS-1333 > URL: https://issues.apache.org/jira/browse/HDDS-1333 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > The current ozonefs compatibility layer is broken by: HDDS-1299. 
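The pom.xml excerpt quoted above lost its XML tags in extraction; the surviving values line up with a standard maven-resources-plugin execution. A hedged reconstruction (element nesting assumed from the plugin's usual schema, not recovered from the original patch):

```xml
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <id>copy-resources</id>
      <phase>compile</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${basedir}/target/compose</outputDirectory>
        <resources>
          <resource>
            <directory>src/main/compose</directory>
            <!-- filtering=true substitutes ${...} placeholders at copy time;
                 this is what the reviewer's question is probing -->
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```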
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220836 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270612708 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java ## (this comment quotes the same BasicOzoneClientAdapterImpl.java excerpt shown earlier; the review remark itself was truncated)
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220830&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220830 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270612456 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java ## (this comment quotes the same BasicOzoneClientAdapterImpl.java excerpt shown earlier; the review remark itself was truncated)
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220832 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270612301 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java ## (this comment quotes the same BasicOzoneClientAdapterImpl.java excerpt shown earlier; the review remark itself was truncated)
[jira] [Work logged] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes
[ https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=220834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220834 ] ASF GitHub Bot logged work on HDDS-1333: Author: ASF GitHub Bot Created on: 30/Mar/19 04:33 Start Date: 30/Mar/19 04:33 Worklog Time Spent: 10m Work Description: ajayydv commented on pull request #653: HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes URL: https://github.com/apache/hadoop/pull/653#discussion_r270611312 ## File path: hadoop-ozone/dist/src/main/compose/ozonefs/docker-compose.yaml ## @@ -49,21 +49,53 @@ services: environment: ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION command: ["/opt/hadoop/bin/ozone","scm"] - hadoop3: + hadoop32: Review comment: Shall we separate the hadoop 2 and 3 compose files first class (i.e create two separate compose dirs for them)? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220834) Time Spent: 1h 40m (was: 1.5h) > OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security > classes > - > > Key: HDDS-1333 > URL: https://issues.apache.org/jira/browse/HDDS-1333 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > The current ozonefs compatibility layer is broken by: HDDS-1299. 
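The reviewer's "first class" split (one compose dir per hadoop line, rather than extra services like `hadoop32` in one file) might look roughly like the sketch below; the directory names, service names, and image tags here are assumptions for illustration, not taken from the patch:

```yaml
# hadoop-ozone/dist/src/main/compose/ozonefs-hadoop2/docker-compose.yaml (assumed path)
services:
  hadoop27:
    image: flokkr/hadoop:2.7.7      # assumed image tag
    env_file: ./docker-config

# hadoop-ozone/dist/src/main/compose/ozonefs-hadoop3/docker-compose.yaml (assumed path)
services:
  hadoop32:
    image: flokkr/hadoop:3.2.0      # assumed image tag
    env_file: ./docker-config
```

Separate dirs let each hadoop line carry its own docker-config and acceptance test without conditional service definitions.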
[jira] [Commented] (HDFS-14400) Namenode ExpiredHeartbeats metric
[ https://issues.apache.org/jira/browse/HDFS-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805604#comment-16805604 ]
Hadoop QA commented on HDFS-14400:
--
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 11s | trunk passed |
| +1 | compile | 0m 56s | trunk passed |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 2s | trunk passed |
| +1 | shadedclient | 11m 48s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 3s | trunk passed |
| +1 | javadoc | 0m 48s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 1s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| +1 | checkstyle | 0m 40s | the patch passed |
| +1 | mvnsite | 1m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 11s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 11s | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 0m 46s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 82m 4s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 134m 53s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStats.expiredHeartbeats; locked 50% of time. Unsynchronized access at DatanodeStats.java:[line 108] |
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.server.datanode.TestBPOfferService |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14400 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964259/HDFS-14400-001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a981af366cfd 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d9e9e56 |
| maven | version: Apache Maven 3.3.9 |
| D
[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode
[ https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220819 ]
ASF GitHub Bot logged work on HDDS-1255:
Author: ASF GitHub Bot
Created on: 30/Mar/19 03:07
Start Date: 30/Mar/19 03:07
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-478200729

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 0 | Docker mode activated. |
| -1 | patch | 6 | https://github.com/apache/hadoop/pull/632 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/632 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/10/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 220819)
Time Spent: 6.5h (was: 6h 20m)

> Refactor ozone acceptance test to allow run in secure mode
> ----------------------------------------------------------
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Major
> Labels: pull-request-available
> Time Spent: 6.5h
> Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM
[ https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805598#comment-16805598 ]
Yiqun Lin commented on HDDS-1189:
-
The latest patch looks great now, thanks [~swagle]. Hi [~avijayan], would you mind doing a double check for this?

> Recon Aggregate DB schema and ORM
> ---------------------------------
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Affects Versions: 0.5.0
> Reporter: Siddharth Wagle
> Assignee: Siddharth Wagle
> Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, HDDS-1189.03.patch, HDDS-1189.04.patch
>
> _Objectives_
> - Define V1 of the db schema for the Recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two main reasons: a) a powerful DSL for querying that abstracts out SQL dialects; b) it allows seamless code-to-schema and schema-to-code transitions, critical for creating DDL through the code and unit testing across versions of the application
> - Add an e2e unit test suite for Recon entities, created based on the design doc
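The code-to-schema rationale above can be made concrete with a toy sketch. This is illustrative only: the table and column names below are hypothetical, not the actual Recon V1 schema, and jOOQ expresses the same idea through its generated classes and DSL rather than hand-rolled strings.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of "schema as code": the table definition lives in code and the
// DDL is generated from it, so schema and code cannot drift apart.
class SchemaAsCode {
  private final String table;
  // LinkedHashMap preserves column declaration order in the emitted DDL.
  private final Map<String, String> columns = new LinkedHashMap<>();

  SchemaAsCode(String table) {
    this.table = table;
  }

  SchemaAsCode column(String name, String sqlType) {
    columns.put(name, sqlType);
    return this;
  }

  /** Generate CREATE TABLE DDL from the in-code definition. */
  String toDdl() {
    String cols = columns.entrySet().stream()
        .map(e -> e.getKey() + " " + e.getValue())
        .collect(Collectors.joining(", "));
    return "CREATE TABLE " + table + " (" + cols + ")";
  }
}
```

For example, `new SchemaAsCode("cluster_growth").column("ts", "BIGINT").column("datanode_count", "INT").toDdl()` yields `CREATE TABLE cluster_growth (ts BIGINT, datanode_count INT)`; jOOQ's DSL plays the role of this toy generator while also abstracting dialect differences.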
[jira] [Commented] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805590#comment-16805590 ]
Hadoop QA commented on HDDS-1288:
-
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 8m 30s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 6m 0s | trunk passed |
| +1 | compile | 2m 49s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| +1 | mvnsite | 0m 0s | trunk passed |
| +1 | shadedclient | 12m 37s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 11s | trunk passed |
| +1 | javadoc | 1m 46s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 5m 45s | the patch passed |
| +1 | compile | 3m 1s | the patch passed |
| +1 | javac | 3m 1s | the patch passed |
| +1 | checkstyle | 0m 47s | the patch passed |
| +1 | mvnsite | 0m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 42s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 31s | the patch passed |
| +1 | javadoc | 1m 59s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 2m 20s | hadoop-hdds in the patch failed. |
| -1 | unit | 11m 45s | hadoop-ozone in the patch failed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
| | | 80m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
| | hadoop.hdds.scm.pipeline.TestNodeFailure |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2613/artifact/out/Dockerfile |
| JIRA Issue | HDDS-1288 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964258/HDDS-1288.02.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 803017134a0f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d9e9e56 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2613/artifact/out/patch-unit-hadoop-hdds.txt
[jira] [Updated] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations
[ https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ayush Saxena updated HDFS-14316:
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
Status: Resolved (was: Patch Available)

Committed. Thanx [~elgoiri] for the contribution.

> RBF: Support unavailable subclusters for mount points with multiple destinations
> --------------------------------------------------------------------------------
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14316-HDFS-13891.000.patch, HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, HDFS-14316-HDFS-13891.007.patch, HDFS-14316-HDFS-13891.008.patch, HDFS-14316-HDFS-13891.009.patch, HDFS-14316-HDFS-13891.010.patch, HDFS-14316-HDFS-13891.011.patch, HDFS-14316-HDFS-13891.012.patch, HDFS-14316-HDFS-13891.013.patch, HDFS-14316-HDFS-13891.014.patch, HDFS-14316-HDFS-13891.015.patch
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail writes when the destination subcluster is down. We need an option to allow writing in other subclusters when one is down.
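The option described in the issue, writing to another subcluster when the preferred one is down, can be illustrated with a self-contained sketch. The class and method names here are hypothetical; the actual Router code operates at the RPC layer against mount-table state rather than as a simple loop.

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical illustration of the fallback idea: try the preferred
// destination first, and fall back to the remaining destinations when
// a subcluster is unavailable (signalled here by a RuntimeException).
class FallbackWriter {
  static <T> T writeWithFallback(List<String> destinations,
      Function<String, T> write) {
    RuntimeException last = null;
    for (String dest : destinations) {
      try {
        // Success on the first healthy subcluster ends the loop.
        return write.apply(dest);
      } catch (RuntimeException e) {
        // Subcluster down: remember the failure and try the next one.
        last = e;
      }
    }
    if (last != null) {
      throw last;
    }
    throw new IllegalStateException("no destinations given");
  }
}
```

The design choice mirrored here is that the write only fails when every destination fails, which is exactly the behavior the mount-point option enables.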
[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations
[ https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805585#comment-16805585 ]
Ayush Saxena commented on HDFS-14316:
-
+1, Committing Shortly!!!
[jira] [Updated] (HDFS-14400) Namenode ExpiredHeartbeats metric
[ https://issues.apache.org/jira/browse/HDFS-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Karthik Palanisamy updated HDFS-14400:
--
Attachment: HDFS-14400-001.patch
Status: Patch Available (was: Open)

> Namenode ExpiredHeartbeats metric
> ---------------------------------
>
> Key: HDFS-14400
> URL: https://issues.apache.org/jira/browse/HDFS-14400
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: Karthik Palanisamy
> Assignee: Karthik Palanisamy
> Priority: Minor
> Attachments: HDFS-14400-001.patch
>
> Noticed an incorrect value in the ExpiredHeartbeats metric under namenode JMX. We increment the ExpiredHeartbeats count when a datanode is dead, but we miss decrementing it when the datanode comes back alive.
> {code}
> { "name" : "Hadoop:service=NameNode,name=FSNamesystem", "modelerType" : "FSNamesystem", "tag.Context" : "dfs", "tag.TotalSyncTimes" : "7 ", "tag.HAState" : "active", ... "ExpiredHeartbeats" : 2, ... }
> {code}
[jira] [Created] (HDFS-14400) Namenode ExpiredHeartbeats metric
Karthik Palanisamy created HDFS-14400:
-
Summary: Namenode ExpiredHeartbeats metric
Key: HDFS-14400
URL: https://issues.apache.org/jira/browse/HDFS-14400
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs
Affects Versions: 3.1.2
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy
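The missed-decrement bug in HDFS-14400 suggests treating ExpiredHeartbeats as a gauge rather than a monotonic counter. A minimal sketch, with illustrative names that are not the actual DatanodeStats fields; an AtomicLong also avoids the mixed synchronized/unsynchronized access pattern that FindBugs flagged on the QA run for this patch.

```java
import java.util.concurrent.atomic.AtomicLong;

// Gauge-style tracker: incremented when a datanode's heartbeat expires,
// and decremented when that datanode later re-registers as alive, so the
// metric reflects the *current* number of expired datanodes.
class ExpiredHeartbeatsTracker {
  private final AtomicLong expiredHeartbeats = new AtomicLong();

  /** Called when a datanode misses its heartbeat window and is marked dead. */
  void onHeartbeatExpired() {
    expiredHeartbeats.incrementAndGet();
  }

  /** Called when a formerly-dead datanode comes back alive. */
  void onDatanodeRevived() {
    expiredHeartbeats.decrementAndGet();
  }

  long getExpiredHeartbeats() {
    return expiredHeartbeats.get();
  }
}
```

With this shape, two expiries followed by one revival leave the gauge at 1 instead of sticking at 2 as the JMX snapshot in the report shows.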
[jira] [Commented] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805572#comment-16805572 ]
Siddharth Wagle commented on HDDS-1288:
---
02 => whitespace fix.

> SCM - Failing test on trunk that waits for HB report processing
> ---------------------------------------------------------------
>
> Key: HDDS-1288
> URL: https://issues.apache.org/jira/browse/HDDS-1288
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: SCM
> Affects Versions: 0.5.0
> Reporter: Siddharth Wagle
> Assignee: Siddharth Wagle
> Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1288.01.patch, HDDS-1288.02.patch
>
> Test failing due to its dependence on Thread.sleep, expecting the heartbeat to be processed in time.
> {code}
> Error Message
> Expected exactly one metric for name HealthyNodes expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: Expected exactly one metric for name HealthyNodes expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:275)
> at org.apache.hadoop.test.MetricsAsserts.getIntGauge(MetricsAsserts.java:157)
> at org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:151)
> at org.apache.hadoop.ozone.scm.node.TestSCMNodeMetrics.testNodeCountAndInfoMetricsReported(TestSCMNodeMetrics.java:147)
> {code}
[jira] [Updated] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Siddharth Wagle updated HDDS-1288:
--
Attachment: HDDS-1288.02.patch
[jira] [Commented] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805564#comment-16805564 ]
Hadoop QA commented on HDDS-1288:
-
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 39s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 6m 25s | trunk passed |
| +1 | compile | 3m 1s | trunk passed |
| +1 | checkstyle | 0m 51s | trunk passed |
| +1 | mvnsite | 0m 0s | trunk passed |
| +1 | shadedclient | 14m 39s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 28s | trunk passed |
| +1 | javadoc | 1m 58s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 6m 11s | the patch passed |
| +1 | compile | 3m 4s | the patch passed |
| +1 | javac | 3m 4s | the patch passed |
| +1 | checkstyle | 0m 55s | the patch passed |
| +1 | mvnsite | 0m 0s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 12m 3s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 46s | the patch passed |
| +1 | javadoc | 2m 19s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 3m 57s | hadoop-hdds in the patch failed. |
| -1 | unit | 17m 6s | hadoop-ozone in the patch failed. |
| +1 | asflicense | 0m 41s | The patch does not generate ASF License warnings. |
| | | 86m 13s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
| | hadoop.ozone.container.TestContainerReplication |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2612/artifact/out/Dockerfile |
| JIRA Issue | HDDS-1288 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964254/HDDS-1288.01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 26bfd3cd4284 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d9e9e56 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| whitespace | https://bu
[jira] [Updated] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Siddharth Wagle updated HDDS-1288:
--
Status: Patch Available (was: Open)

Removed Thread.sleep and triggered node report processing directly on the SCMNodeManager. [~bharatviswa] Could you please take a look?
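The patch fixes the flakiness by triggering report processing directly instead of sleeping. Another common way to remove a fixed Thread.sleep from such a test is condition polling; below is a minimal helper in the spirit of Hadoop's GenericTestUtils.waitFor (an illustrative sketch, not the actual patch).

```java
import java.util.function.BooleanSupplier;

// Poll a condition with a short interval and an overall deadline, so the
// test waits only as long as needed and is not sensitive to how quickly
// the heartbeat/report happens to be processed on a loaded build host.
class WaitFor {
  static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!check.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new AssertionError("Timed out waiting for condition");
      }
      Thread.sleep(intervalMs);
    }
  }
}
```

A test would then call `WaitFor.waitFor(() -> metrics.healthyNodes() == 1, 100, 30000)` (names illustrative) rather than sleeping a fixed amount and asserting immediately.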
[jira] [Updated] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Siddharth Wagle updated HDDS-1288:
--
Attachment: HDDS-1288.01.patch
[jira] [Updated] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Siddharth Wagle updated HDDS-1288:
--
Attachment: (was: HDDS-1288.01.patch)
[jira] [Updated] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing
[ https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Siddharth Wagle updated HDDS-1288:
--
Attachment: HDDS-1288.01.patch
[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations
[ https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805492#comment-16805492 ] Íñigo Goiri commented on HDFS-14316: The latest patch takes care of the failed unit tests. Let me know if there's anything else. > RBF: Support unavailable subclusters for mount points with multiple > destinations > > > Key: HDFS-14316 > URL: https://issues.apache.org/jira/browse/HDFS-14316 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-14316-HDFS-13891.000.patch, > HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, > HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, > HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, > HDFS-14316-HDFS-13891.007.patch, HDFS-14316-HDFS-13891.008.patch, > HDFS-14316-HDFS-13891.009.patch, HDFS-14316-HDFS-13891.010.patch, > HDFS-14316-HDFS-13891.011.patch, HDFS-14316-HDFS-13891.012.patch, > HDFS-14316-HDFS-13891.013.patch, HDFS-14316-HDFS-13891.014.patch, > HDFS-14316-HDFS-13891.015.patch > > > Currently mount points with multiple destinations (e.g., HASH_ALL) fail > writes when the destination subcluster is down. We need an option to allow > writing in other subclusters when one is down. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
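The fallback behavior this issue proposes — attempt the write on one destination subcluster and, if that subcluster is down, retry on the remaining ones instead of failing — can be sketched generically as follows. This is a plain-Java illustration of the semantics only, not the Router's actual code; `writeWithFallback` and the `Function`-based writer are hypothetical names.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FallbackWriteSketch {

    // Attempt the write on each destination subcluster in order; if one is
    // unavailable (its writer throws), fall back to the next rather than
    // failing the whole operation. Hypothetical sketch of the semantics
    // described in HDFS-14316, not actual Router code.
    public static <T> T writeWithFallback(List<String> subclusters,
                                          Function<String, T> writer)
            throws IOException {
        IOException last = null;
        for (String ns : subclusters) {
            try {
                return writer.apply(ns);
            } catch (RuntimeException e) {
                last = new IOException("Write failed on " + ns, e);
            }
        }
        throw (last != null)
                ? last
                : new IOException("No destination subclusters configured");
    }

    public static void main(String[] args) throws IOException {
        // First subcluster is "down"; the write lands on the second.
        String result = writeWithFallback(Arrays.asList("ns0", "ns1"), ns -> {
            if (ns.equals("ns0")) {
                throw new RuntimeException(ns + " unavailable");
            }
            return "written to " + ns;
        });
        System.out.println(result);
    }
}
```

The trade-off this implies for HASH_ALL-style mount points is that a write may land on a subcluster other than the one the hash selected, so reads must be prepared to consult the other destinations — which is why it is offered as an option rather than the default.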
[jira] [Work logged] (HDDS-1260) Create Recon Server lifecyle integration with Ozone.
[ https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=220756&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220756 ] ASF GitHub Bot logged work on HDDS-1260: Author: ASF GitHub Bot Created on: 29/Mar/19 21:42 Start Date: 29/Mar/19 21:42 Worklog Time Spent: 10m Work Description: vivekratnavel commented on issue #643: HDDS-1260. Create Recon Server lifecycle integration with Ozone. URL: https://github.com/apache/hadoop/pull/643#issuecomment-478158874 Robot tests will be added later as part of - https://issues.apache.org/jira/browse/HDDS-1261 Tested build manually and it succeeds. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220756) Time Spent: 2h 20m (was: 2h 10m) > Create Recon Server lifecyle integration with Ozone. > > > Key: HDDS-1260 > URL: https://issues.apache.org/jira/browse/HDDS-1260 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Reporter: Aravindan Vijayan >Assignee: Vivek Ratnavel Subramanian >Priority: Critical > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > * Create the lifecycle scripts (start/stop) for Recon Server along with Shell > interface like the other components. > * Verify configurations are being picked up by Recon Server on startup. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14399) Backport HDFS-10536 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805454#comment-16805454 ] Chao Sun commented on HDFS-14399: - Failed tests are passing on my laptop. [~vagarychen]: can you take a look at this? Thanks. > Backport HDFS-10536 to branch-2 > --- > > Key: HDFS-14399 > URL: https://issues.apache.org/jira/browse/HDFS-14399 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14399-branch-2.000.patch > > > As multi-SBN feature is already backported to branch-2, this is a follow-up > to backport HDFS-10536. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM
[ https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805452#comment-16805452 ] Hadoop QA commented on HDDS-1189: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 53s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 11s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 44s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 26s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}106m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager | | | hadoop.ozone.ozShell.TestOzoneShell | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2611/artifact/out/Dockerfile | | JIRA Issue | HDDS-1189 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964241/HDDS-1189.04.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findb
[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations
[ https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805439#comment-16805439 ] Hadoop QA commented on HDFS-14316: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} HDFS-13891 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 57s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 52s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} HDFS-13891 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 37s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}158m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14316 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964233/HDFS-14316-HDFS-13891.015.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml | | uname | Linux 9d3c6041f
[jira] [Work logged] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?focusedWorklogId=220719&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220719 ] ASF GitHub Bot logged work on HDDS-1358: Author: ASF GitHub Bot Created on: 29/Mar/19 20:32 Start Date: 29/Mar/19 20:32 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #668: HDDS-1358 : Recon Server REST API not working as expected. URL: https://github.com/apache/hadoop/pull/668#issuecomment-478140329 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 49 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 12 | Maven dependency ordering for branch | | +1 | mvninstall | 1151 | trunk passed | | +1 | compile | 101 | trunk passed | | +1 | checkstyle | 27 | trunk passed | | +1 | mvnsite | 53 | trunk passed | | +1 | shadedclient | 798 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 76 | trunk passed | | +1 | javadoc | 39 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 12 | Maven dependency ordering for patch | | +1 | mvninstall | 65 | the patch passed | | +1 | compile | 99 | the patch passed | | +1 | javac | 99 | the patch passed | | +1 | checkstyle | 25 | the patch passed | | +1 | mvnsite | 47 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 4 | The patch has no ill-formed XML file. | | +1 | shadedclient | 811 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 84 | the patch passed | | +1 | javadoc | 36 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 39 | s3gateway in the patch passed. | | +1 | unit | 43 | ozone-recon in the patch passed. 
| | +1 | asflicense | 27 | The patch does not generate ASF License warnings. | | | | 3672 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-668/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/668 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 9e6bd80af775 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 56f1e13 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-668/2/testReport/ | | Max. process+thread count | 305 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/s3gateway hadoop-ozone/ozone-recon U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-668/2/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220719) Time Spent: 0.5h (was: 20m) > Recon Server REST API not working as expected. 
> -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch, HDDS-1358-001.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?focusedWorklogId=220717&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220717 ] ASF GitHub Bot logged work on HDDS-1358: Author: ASF GitHub Bot Created on: 29/Mar/19 20:28 Start Date: 29/Mar/19 20:28 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #668: HDDS-1358 : Recon Server REST API not working as expected. URL: https://github.com/apache/hadoop/pull/668#issuecomment-478139227 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 49 | Maven dependency ordering for branch | | +1 | mvninstall | 1175 | trunk passed | | +1 | compile | 100 | trunk passed | | +1 | checkstyle | 27 | trunk passed | | +1 | mvnsite | 61 | trunk passed | | +1 | shadedclient | 789 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 72 | trunk passed | | +1 | javadoc | 40 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 20 | Maven dependency ordering for patch | | +1 | mvninstall | 61 | the patch passed | | +1 | compile | 109 | the patch passed | | +1 | javac | 109 | the patch passed | | +1 | checkstyle | 26 | the patch passed | | +1 | mvnsite | 53 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 3 | The patch has no ill-formed XML file. | | +1 | shadedclient | 784 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 86 | the patch passed | | +1 | javadoc | 36 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 36 | s3gateway in the patch passed. | | +1 | unit | 38 | ozone-recon in the patch passed. 
| | +1 | asflicense | 29 | The patch does not generate ASF License warnings. | | | | 3703 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-668/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/668 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 9d53a6099cc6 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 56f1e13 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-668/1/testReport/ | | Max. process+thread count | 305 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/s3gateway hadoop-ozone/ozone-recon U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-668/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220717) Time Spent: 20m (was: 10m) > Recon Server REST API not working as expected. 
> -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch, HDDS-1358-001.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1330) Add a docker compose for Ozone deployment with Recon.
[ https://issues.apache.org/jira/browse/HDDS-1330?focusedWorklogId=220708&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220708 ] ASF GitHub Bot logged work on HDDS-1330: Author: ASF GitHub Bot Created on: 29/Mar/19 20:14 Start Date: 29/Mar/19 20:14 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #669: HDDS-1330 : Add a docker compose for Ozone deployment with Recon. URL: https://github.com/apache/hadoop/pull/669#issuecomment-478135290 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 23 | Docker mode activated. | ||| _ Prechecks _ | | 0 | yamllint | 0 | yamllint was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 981 | trunk passed | | +1 | compile | 23 | trunk passed | | +1 | mvnsite | 22 | trunk passed | | +1 | shadedclient | 641 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 20 | trunk passed | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 23 | dist in the patch failed. | | +1 | compile | 17 | the patch passed | | +1 | javac | 16 | the patch passed | | +1 | mvnsite | 24 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | shelldocs | 12 | There were no new shelldocs issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 734 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 19 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 21 | dist in the patch passed. | | +1 | asflicense | 27 | The patch does not generate ASF License warnings. 
| | | | 2722 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-669/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/669 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient yamllint shellcheck shelldocs | | uname | Linux 7e09f34a905a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 56f1e13 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-669/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-669/1/testReport/ | | Max. process+thread count | 422 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-669/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220708) Time Spent: 20m (was: 10m) > Add a docker compose for Ozone deployment with Recon. 
> - > > Key: HDDS-1330 > URL: https://issues.apache.org/jira/browse/HDDS-1330 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: HDDS-1330-000.patch > > Time Spent: 20m > Remaining Estimate: 0h > > * Add a docker compose for Ozone deployment with Recon. > * Test out Recon container key service. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1330) Add a docker compose for Ozone deployment with Recon.
[ https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805412#comment-16805412 ] Hadoop QA commented on HDDS-1330: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue} 0m 0s{color} | {color:blue} yamllint was not available. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 29s{color} | {color:green} hadoop-hdds in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 15s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2610/artifact/out/Dockerfile | | JIRA Issue | HDDS-1330 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964238/HDDS-1330-000.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient yamllint shellcheck shelldocs | | uname | Linux 01d71d2ae073 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 56f1e13 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2610/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2610/testReport/ | | Max
[jira] [Updated] (HDDS-1337) Handle GroupMismatchException in OzoneClient
[ https://issues.apache.org/jira/browse/HDDS-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDDS-1337: --- Priority: Blocker (was: Major) > Handle GroupMismatchException in OzoneClient > --- > > Key: HDDS-1337 > URL: https://issues.apache.org/jira/browse/HDDS-1337 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Client >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Blocker > Labels: Blocker > Fix For: 0.4.0 > > Attachments: HDDS-1337.000.patch, HDDS-1337.001.patch > > > If a pipeline gets destroyed, the Ozone client may hit a > GroupMismatchException from Ratis. In such cases, the client should exclude > the failed pipeline and retry the write on a different block. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
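The retry policy the issue asks for — exclude the pipeline that raised the group-mismatch error, then allocate the write on another pipeline — can be sketched as below. This is an illustrative sketch only, not the actual OzoneClient or Ratis code; all class and method names here are hypothetical.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PipelineRetrySketch {

    // Stand-in for Ratis' GroupMismatchException; carries the failed pipeline id.
    static class GroupMismatchException extends RuntimeException {
        final String pipelineId;
        GroupMismatchException(String pipelineId) { this.pipelineId = pipelineId; }
    }

    interface BlockWriter {
        // Writes data to a block on the given pipeline; may throw GroupMismatchException.
        void write(String pipelineId, byte[] data);
    }

    // Try candidate pipelines in order, permanently excluding any that
    // fail with a group mismatch, until one write succeeds.
    static String writeWithExclusion(List<String> pipelines, byte[] data, BlockWriter writer) {
        Set<String> excluded = new HashSet<>();
        for (String p : pipelines) {
            if (excluded.contains(p)) continue;
            try {
                writer.write(p, data);
                return p;                       // success on this pipeline
            } catch (GroupMismatchException e) {
                excluded.add(e.pipelineId);     // never retry this pipeline
            }
        }
        throw new IllegalStateException("no healthy pipeline left");
    }

    public static void main(String[] args) {
        // The first pipeline was destroyed; the write should land on the second.
        BlockWriter w = (p, d) -> {
            if (p.equals("p1")) throw new GroupMismatchException(p);
        };
        System.out.println(writeWithExclusion(List.of("p1", "p2"), new byte[0], w));
    }
}
```

The key design point, per the issue, is that a mismatched pipeline is excluded for the remainder of the write rather than retried, since its Raft group no longer matches the client's view.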
[jira] [Commented] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805399#comment-16805399 ] Hadoop QA commented on HDDS-1358: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 26s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 57s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2609/artifact/out/Dockerfile | | JIRA Issue | HDDS-1358 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964236/HDDS-1358-001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname |
[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.
[ https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=220693&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220693 ] ASF GitHub Bot logged work on HDDS-1260: Author: ASF GitHub Bot Created on: 29/Mar/19 19:53 Start Date: 29/Mar/19 19:53 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #643: HDDS-1260. Create Recon Server lifecycle integration with Ozone. URL: https://github.com/apache/hadoop/pull/643#issuecomment-478129064 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 27 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 47 | Maven dependency ordering for branch | | +1 | mvninstall | 970 | trunk passed | | +1 | compile | 103 | trunk passed | | +1 | checkstyle | 26 | trunk passed | | +1 | mvnsite | 95 | trunk passed | | +1 | shadedclient | 625 | branch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/dist | | +1 | findbugs | 93 | trunk passed | | +1 | javadoc | 77 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 15 | Maven dependency ordering for patch | | -1 | mvninstall | 18 | dist in the patch failed. | | +1 | compile | 90 | the patch passed | | +1 | javac | 90 | the patch passed | | +1 | checkstyle | 22 | the patch passed | | +1 | mvnsite | 72 | the patch passed | | +1 | shellcheck | 24 | There were no new shellcheck issues. | | +1 | shelldocs | 11 | There were no new shelldocs issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 2 | The patch has no ill-formed XML file. 
| | +1 | shadedclient | 695 | patch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/dist | | +1 | findbugs | 106 | the patch passed | | +1 | javadoc | 68 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 34 | common in the patch passed. | | +1 | unit | 32 | ozone-recon in the patch passed. | | +1 | unit | 20 | dist in the patch passed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. | | | | 3521 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-643/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/643 | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall shadedclient xml findbugs checkstyle | | uname | Linux 69eb151f6f53 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 56f1e13 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-643/8/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-643/8/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/common hadoop-ozone/ozone-recon hadoop-ozone/dist U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-643/8/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220693) Time Spent: 2h 10m (was: 2h) > Create Recon Server lifecycle integration with Ozone. > > > Key: HDDS-1260 > URL: https://issues.apache.org/jira/browse/HDDS-1260 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >
[jira] [Commented] (HDDS-1312) Add more unit tests to verify BlockOutputStream functionalities
[ https://issues.apache.org/jira/browse/HDDS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805380#comment-16805380 ] Hadoop QA commented on HDDS-1312: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 10s{color} | {color:red} hadoop-hdds in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 14s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 76m 53s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.om.TestOzoneManagerHA | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2608/artifact/out/Dockerfile | | JIRA Issue | HDDS-1312 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964234/HDDS-1312.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 18ec8d2c515d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linu
[jira] [Commented] (HDFS-14399) Backport HDFS-10536 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805377#comment-16805377 ] Hadoop QA commented on HDFS-14399: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 58s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} branch-2 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 8s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:da67579 | | JIRA Issue | HDFS-14399 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964220/HDFS-14399-branch-2.000.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f788bb9dec56 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer
[ https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805371#comment-16805371 ] Uma Maheswara Rao G commented on HDFS-14355: Thanks [~PhiloHe] for providing the patch. It mostly looks good to me. Any further comments from others? Otherwise I will move ahead with the commit. Thanks > Implement HDFS cache on SCM by using pure java mapped byte buffer > - > > Key: HDFS-14355 > URL: https://issues.apache.org/jira/browse/HDFS-14355 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: caching, datanode >Reporter: Feilong He >Assignee: Feilong He >Priority: Major > Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, > HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, > HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, > HDFS-14355.008.patch > > > This task is to implement caching to persistent memory using pure > {{java.nio.MappedByteBuffer}}, which could be useful when native support > isn't available or convenient in some environments or platforms. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
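The pure-Java approach the issue describes — mapping a block file into (persistent) memory with `java.nio.MappedByteBuffer` instead of native code — can be illustrated with a minimal sketch. The class and method names below are hypothetical and not taken from the HDFS-14355 patches; only the `FileChannel.map` / `MappedByteBuffer` usage reflects the technique being discussed.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedBlockCache {

    // Map the file at 'blockFile' into memory and eagerly fault its pages in,
    // so later reads are served from memory rather than disk.
    static MappedByteBuffer cacheBlock(Path blockFile) throws IOException {
        try (FileChannel ch = FileChannel.open(blockFile, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            buf.load();  // best-effort hint to load every page now
            return buf;  // the mapping stays valid after the channel is closed
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("block", ".dat");
        Files.write(tmp, new byte[] {1, 2, 3, 4});
        MappedByteBuffer cached = cacheBlock(tmp);
        System.out.println(cached.capacity());
        System.out.println(cached.get(0));
        Files.delete(tmp);
    }
}
```

On a persistent-memory device exposed as a DAX-mounted filesystem, the same `map` call addresses the PMem directly, which is why a pure-Java buffer can substitute for native PMDK support in environments where the latter is unavailable.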
[jira] [Work logged] (HDDS-1330) Add a docker compose for Ozone deployment with Recon.
[ https://issues.apache.org/jira/browse/HDDS-1330?focusedWorklogId=220682&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220682 ] ASF GitHub Bot logged work on HDDS-1330: Author: ASF GitHub Bot Created on: 29/Mar/19 19:28 Start Date: 29/Mar/19 19:28 Worklog Time Spent: 10m Work Description: avijayanhwx commented on pull request #669: HDDS-1330 : Add a docker compose for Ozone deployment with Recon. URL: https://github.com/apache/hadoop/pull/669 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220682) Time Spent: 10m Remaining Estimate: 0h > Add a docker compose for Ozone deployment with Recon. > - > > Key: HDDS-1330 > URL: https://issues.apache.org/jira/browse/HDDS-1330 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: HDDS-1330-000.patch > > Time Spent: 10m > Remaining Estimate: 0h > > * Add a docker compose for Ozone deployment with Recon. > * Test out Recon container key service. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1330) Add a docker compose for Ozone deployment with Recon.
[ https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-1330: - Labels: pull-request-available (was: ) > Add a docker compose for Ozone deployment with Recon. > - > > Key: HDDS-1330 > URL: https://issues.apache.org/jira/browse/HDDS-1330 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: HDDS-1330-000.patch > > > * Add a docker compose for Ozone deployment with Recon. > * Test out Recon container key service. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1189) Recon Aggregate DB schema and ORM
[ https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-1189: -- Attachment: HDDS-1189.04.patch > Recon Aggregate DB schema and ORM > - > > Key: HDDS-1189 > URL: https://issues.apache.org/jira/browse/HDDS-1189 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, > HDDS-1189.03.patch, HDDS-1189.04.patch > > > _Objectives_ > - Define V1 of the db schema for the Recon service > - The current proposal is to use jOOQ as the ORM for SQL interaction, for two > main reasons: a) a powerful querying DSL that abstracts out SQL dialects, > b) seamless code-to-schema and schema-to-code transitions, critical for > creating DDL through code and for unit testing across versions of the > application. > - Add an e2e unit test suite for Recon entities, created based on the design doc -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM
[ https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805369#comment-16805369 ] Siddharth Wagle commented on HDDS-1189: --- 04 => fixed the ozone-default.xml and whitespace errors. > Recon Aggregate DB schema and ORM > - > > Key: HDDS-1189 > URL: https://issues.apache.org/jira/browse/HDDS-1189 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, > HDDS-1189.03.patch, HDDS-1189.04.patch > > > _Objectives_ > - Define V1 of the db schema for the Recon service > - The current proposal is to use jOOQ as the ORM for SQL interaction, for two > main reasons: a) a powerful querying DSL that abstracts out SQL dialects, > b) seamless code-to-schema and schema-to-code transitions, critical for > creating DDL through code and for unit testing across versions of the > application. > - Add an e2e unit test suite for Recon entities, created based on the design doc -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-1358: - Labels: pull-request-available (was: ) > Recon Server REST API not working as expected. > -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch, HDDS-1358-001.patch > > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?focusedWorklogId=220681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220681 ] ASF GitHub Bot logged work on HDDS-1358: Author: ASF GitHub Bot Created on: 29/Mar/19 19:25 Start Date: 29/Mar/19 19:25 Worklog Time Spent: 10m Work Description: avijayanhwx commented on pull request #668: HDDS-1358 : Recon Server REST API not working as expected. URL: https://github.com/apache/hadoop/pull/668 - Fixed the Guice Jersey-hk2 integration. - Added blocks to KeyMetadata - Minor fixes/improvements. Manually tested on single node cluster. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220681) Time Spent: 10m Remaining Estimate: 0h > Recon Server REST API not working as expected. > -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch, HDDS-1358-001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1067) freon run on client gets hung when two of the datanodes are down in 3 datanode cluster
[ https://issues.apache.org/jira/browse/HDDS-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805353#comment-16805353 ] Hadoop QA commented on HDDS-1067: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange} 0m 1s{color} | {color:orange} Error running pylint. Please check pylint stderr files. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange} 0m 2s{color} | {color:orange} Error running pylint. Please check pylint stderr files. {color} | | {color:green}+1{color} | {color:green} pylint {color} | {color:green} 0m 3s{color} | {color:green} There were no new pylint issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 34s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 15s{color} | {color:green} hadoop-ozone in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2606/artifact/out/Dockerfile | | JIRA Issue | HDDS-1067 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964223/HDDS-1067.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient pylint | | uname | Linux 68f55547eb02 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 56f1e13 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | pylint | https://builds.apache.org/job/PreCommit-HDDS-Build/2606/artifact/out/branch-p
[jira] [Comment Edited] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM
[ https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805347#comment-16805347 ] Bharat Viswanadham edited comment on HDDS-1347 at 3/29/19 7:05 PM: --- Will create a new Jira to handle getDelegationToken(). I think making this work requires client- and OM-side changes, as we use ipaddress,port as the token service name; with HA we need to change the dtService. We also need to take care of how the getRemoteUser() call will work with HA when the request is passed to a non-leader OM. was (Author: bharatviswa): Will create a new Jira to handle getDelegationToken() I think to make this work it requires client and OM end changes. As the token service name we use ipaddress,port. When HA we need to change the dtService. And also with the current approach hot getRemoteUser() call will work with HA, when the request is passed to non-leader OM and how that works also need to taken care. > In OM HA getS3Secret call Should happen only leader OM > -- > > Key: HDDS-1347 > URL: https://issues.apache.org/jira/browse/HDDS-1347 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > In Om HA getS3Secret should happen only leader OM. > > > The reason is similar to initiateMultipartUpload. For more info refer > HDDS-1319 > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM
[ https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-1347: - Description: In Om HA getS3Secret should happen only leader OM. The reason is similar to initiateMultipartUpload. For more info refer HDDS-1319 was: In Om HA getS3Secret should happen only leader OM. The reason is similar to initiateMultipartUpload. For more info refer HDDS-1319 > In OM HA getS3Secret call Should happen only leader OM > -- > > Key: HDDS-1347 > URL: https://issues.apache.org/jira/browse/HDDS-1347 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > In Om HA getS3Secret should happen only leader OM. > > > The reason is similar to initiateMultipartUpload. For more info refer > HDDS-1319 > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1359) In OM HA getDelegation call Should happen only leader OM
[ https://issues.apache.org/jira/browse/HDDS-1359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-1359: - Description: In Om HA getDelegationToken should happen only leader OM. The reason is similar to initiateMultipartUpload. For more info refer HDDS-1319 was: In Om HA getS3Secret should happen only leader OM. The reason is similar to initiateMultipartUpload. For more info refer HDDS-1319 > In OM HA getDelegation call Should happen only leader OM > - > > Key: HDDS-1359 > URL: https://issues.apache.org/jira/browse/HDDS-1359 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > In Om HA getDelegationToken should happen only leader OM. > > The reason is similar to initiateMultipartUpload. For more info refer > HDDS-1319 > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM
[ https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805347#comment-16805347 ] Bharat Viswanadham commented on HDDS-1347: -- Will create a new Jira to handle getDelegationToken(). I think making this work requires client- and OM-side changes, as we use ipaddress,port as the token service name; with HA we need to change the dtService. We also need to take care of how the getRemoteUser() call will work with HA when the request is passed to a non-leader OM. > In OM HA getS3Secret call Should happen only leader OM > -- > > Key: HDDS-1347 > URL: https://issues.apache.org/jira/browse/HDDS-1347 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > In Om HA getS3Secret should happen only leader OM. > > The reason is similar to initiateMultipartUpload. For more info refer > HDDS-1319 > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-1359) In OM HA getDelegation call Should happen only leader OM
Bharat Viswanadham created HDDS-1359: Summary: In OM HA getDelegation call Should happen only leader OM Key: HDDS-1359 URL: https://issues.apache.org/jira/browse/HDDS-1359 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham In Om HA getS3Secret should happen only leader OM. The reason is similar to initiateMultipartUpload. For more info refer HDDS-1319 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM
[ https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-1347: - Summary: In OM HA getS3Secret call Should happen only leader OM (was: In OM HA getS3Secret and createDelegationToken call Should happen only leader OM) > In OM HA getS3Secret call Should happen only leader OM > -- > > Key: HDDS-1347 > URL: https://issues.apache.org/jira/browse/HDDS-1347 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > In Om HA getS3Secret should happen only leader OM. > > The reason is similar to initiateMultipartUpload. For more info refer > HDDS-1319 > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1347) In OM HA getS3Secret and createDelegationToken call Should happen only leader OM
[ https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-1347: - Description: In Om HA getS3Secret should happen only leader OM. The reason is similar to initiateMultipartUpload. For more info refer HDDS-1319 was: In Om HA getS3Secret and createDelegationToken should happen only leader OM. The reason is similar to initiateMultipartUpload. For more info refer HDDS-1319 > In OM HA getS3Secret and createDelegationToken call Should happen only leader > OM > > > Key: HDDS-1347 > URL: https://issues.apache.org/jira/browse/HDDS-1347 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > In Om HA getS3Secret should happen only leader OM. > > The reason is similar to initiateMultipartUpload. For more info refer > HDDS-1319 > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run
[ https://issues.apache.org/jira/browse/HDDS-1211?focusedWorklogId=220663&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220663 ] ASF GitHub Bot logged work on HDDS-1211: Author: ASF GitHub Bot Created on: 29/Mar/19 18:52 Start Date: 29/Mar/19 18:52 Worklog Time Spent: 10m Work Description: arp7 commented on pull request #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run URL: https://github.com/apache/hadoop/pull/543#discussion_r270536031 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestScmChillMode.java ## @@ -225,36 +210,36 @@ public void testIsScmInChillModeAndForceExit() throws Exception { } - @Test(timeout=300_000) + @Test(timeout = 300_000) public void testSCMChillMode() throws Exception { -MiniOzoneCluster.Builder clusterBuilder = MiniOzoneCluster.newBuilder(conf) -.setHbInterval(1000) -.setNumDatanodes(3) Review comment: Looks like the test was previously using 3 DNs and now it will use just 1 DN. Correct? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220663) Time Spent: 1.5h (was: 1h 20m) > Test SCMChillMode failing randomly in Jenkins run > - > > Key: HDDS-1211 > URL: https://issues.apache.org/jira/browse/HDDS-1211 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available, pushed-to-craterlake > Time Spent: 1.5h > Remaining Estimate: 0h > > java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at > java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) > at > java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) > at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at > org.apache.hadoop.ozone.om.TestScmChillMode.testSCMChillMode(TestScmChillMode.java:286) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run
[ https://issues.apache.org/jira/browse/HDDS-1211?focusedWorklogId=220662&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220662 ] ASF GitHub Bot logged work on HDDS-1211: Author: ASF GitHub Bot Created on: 29/Mar/19 18:52 Start Date: 29/Mar/19 18:52 Worklog Time Spent: 10m Work Description: arp7 commented on pull request #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run URL: https://github.com/apache/hadoop/pull/543#discussion_r270535305 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestScmChillMode.java ## @@ -158,25 +154,21 @@ public void testChillModeOperations() throws Exception { cluster.stop(); Review comment: Let's move the `cluster.stop()` call to an `@After` method. Actually it looks like you already have cluster.shutdown(). Why do we need a stop call here? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220662) Time Spent: 1h 20m (was: 1h 10m) > Test SCMChillMode failing randomly in Jenkins run > - > > Key: HDDS-1211 > URL: https://issues.apache.org/jira/browse/HDDS-1211 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available, pushed-to-craterlake > Time Spent: 1h 20m > Remaining Estimate: 0h > > java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at > java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) > at > java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) > at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) at > org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at > org.apache.hadoop.ozone.om.TestScmChillMode.testSCMChillMode(TestScmChillMode.java:286) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon
[ https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=220659&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220659 ] ASF GitHub Bot logged work on HDDS-1340: Author: ASF GitHub Bot Created on: 29/Mar/19 18:49 Start Date: 29/Mar/19 18:49 Worklog Time Spent: 10m Work Description: avijayanhwx commented on issue #648: HDDS-1340. Add List Containers API for Recon URL: https://github.com/apache/hadoop/pull/648#issuecomment-478110153 LGTM +1. Please test the API with the changes from HDDS-1260 and HDDS-1358 and then commit it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220659) Time Spent: 2h 40m (was: 2.5h) > Add List Containers API for Recon > - > > Key: HDDS-1340 > URL: https://issues.apache.org/jira/browse/HDDS-1340 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Reporter: Vivek Ratnavel Subramanian >Assignee: Vivek Ratnavel Subramanian >Priority: Major > Labels: pull-request-available > Time Spent: 2h 40m > Remaining Estimate: 0h > > Recon server should support "/containers" API that lists all the containers -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1330) Add a docker compose for Ozone deployment with Recon.
[ https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDDS-1330: Attachment: HDDS-1330-000.patch Status: Patch Available (was: In Progress) While doing the docker setup, I found some issues in the Recon Service layer that are being addressed through HDDS-1358. > Add a docker compose for Ozone deployment with Recon. > - > > Key: HDDS-1330 > URL: https://issues.apache.org/jira/browse/HDDS-1330 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1330-000.patch > > > * Add a docker compose for Ozone deployment with Recon. > * Test out Recon container key service. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run
[ https://issues.apache.org/jira/browse/HDDS-1211?focusedWorklogId=220655&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220655 ] ASF GitHub Bot logged work on HDDS-1211: Author: ASF GitHub Bot Created on: 29/Mar/19 18:40 Start Date: 29/Mar/19 18:40 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run URL: https://github.com/apache/hadoop/pull/543#issuecomment-478107339 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 27 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 970 | trunk passed | | +1 | compile | 29 | trunk passed | | +1 | checkstyle | 22 | trunk passed | | +1 | mvnsite | 32 | trunk passed | | +1 | shadedclient | 708 | branch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test | | +1 | findbugs | 0 | trunk passed | | +1 | javadoc | 19 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 32 | the patch passed | | +1 | compile | 25 | the patch passed | | +1 | javac | 25 | the patch passed | | +1 | checkstyle | 16 | the patch passed | | +1 | mvnsite | 27 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 736 | patch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test | | +1 | findbugs | 0 | the patch passed | | +1 | javadoc | 17 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 622 | integration-test in the patch failed. | | +1 | asflicense | 32 | The patch does not generate ASF License warnings. 
| | | | 3392 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-543/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/543 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3a35cf21a324 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 7dc0ecc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/5/artifact/out/patch-unit-hadoop-ozone_integration-test.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/5/testReport/ | | Max. process+thread count | 4284 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/5/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220655) Time Spent: 1h 10m (was: 1h) > Test SCMChillMode failing randomly in Jenkins run > - > > Key: HDDS-1211 > URL: https://issues.apache.org/jira/browse/HDDS-1211 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available, pushed-to-craterlake > Time Spent: 1h 10m > Remaining Estimate: 0h > > java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at > java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) > at > java.util.concurrent.SynchronousQueue$TransferStack.transfer(Synchronou
[jira] [Updated] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDDS-1358: Attachment: HDDS-1358-001.patch Status: Patch Available (was: Open) Rebased patch on trunk. > Recon Server REST API not working as expected. > -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch, HDDS-1358-001.patch > > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDDS-1358: Status: Open (was: Patch Available) > Recon Server REST API not working as expected. > -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch, HDDS-1358-001.patch > > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10687) Federation Membership State Store internal API
[ https://issues.apache.org/jira/browse/HDFS-10687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805313#comment-16805313 ] Fengnan Li commented on HDFS-10687: --- [~jakace] [~elgoiri] I am trying to understand the EXPIRE process for MembershipState in the Router; please correct me if my understanding is wrong. * MembershipState and RouterState always seem to use caching through CachedRecordStore, loading their data from the actual StateStoreDriver. * StateStoreDriver doesn't actually run threads to directly update a membership's state to EXPIRED; instead, whenever loadCache is called, the caching layer checks for expiration and updates the state store. Thanks! > Federation Membership State Store internal API > -- > > Key: HDFS-10687 > URL: https://issues.apache.org/jira/browse/HDFS-10687 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Íñigo Goiri >Assignee: Jason Kace >Priority: Major > Fix For: 2.9.0, 3.0.0 > > Attachments: HDFS-10467-HDFS-10687-001.patch, > HDFS-10687-HDFS-10467-002.patch, HDFS-10687-HDFS-10467-003.patch, > HDFS-10687-HDFS-10467-004.patch, HDFS-10687-HDFS-10467-005.patch, > HDFS-10687-HDFS-10467-006.patch, HDFS-10687-HDFS-10467-007.patch, > HDFS-10687-HDFS-10467-008.patch > > > The Federation Membership State encapsulates the information about the > Namenodes of each sub-cluster that are participating in Federation. The > information includes addresses for RPC, Web. This information is stored in > the State Store and later used by the Router to find data in the federation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
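The expire-on-cache-refresh pattern described in the comment above can be sketched roughly as follows. This is a minimal illustration of the pattern only, under the assumption that the comment's reading is correct; all class, field, and method names here are hypothetical, not the actual RBF classes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: no background thread expires records. Instead, each
// periodic loadCache() pass compares a record's last heartbeat against the
// expiry window and flips stale records to EXPIRED as a side effect.
class MembershipRecord {
    String state = "ACTIVE";
    final long lastHeartbeatMs;
    MembershipRecord(long lastHeartbeatMs) { this.lastHeartbeatMs = lastHeartbeatMs; }
}

public class CachedRecordStoreSketch {
    static final long EXPIRY_MS = 5_000;
    final List<MembershipRecord> cache = new ArrayList<>();

    // Refreshes the local cache from the driver's records; anything past the
    // expiry window is marked EXPIRED here (a real store would also write
    // that state change back through the driver).
    void loadCache(List<MembershipRecord> fromDriver, long nowMs) {
        cache.clear();
        for (MembershipRecord r : fromDriver) {
            if (nowMs - r.lastHeartbeatMs > EXPIRY_MS) {
                r.state = "EXPIRED";
            }
            cache.add(r);
        }
    }

    public static void main(String[] args) {
        CachedRecordStoreSketch store = new CachedRecordStoreSketch();
        List<MembershipRecord> driverRecords = new ArrayList<>();
        driverRecords.add(new MembershipRecord(0));      // stale heartbeat
        driverRecords.add(new MembershipRecord(9_000));  // fresh heartbeat
        store.loadCache(driverRecords, 10_000);
        System.out.println(store.cache.get(0).state + " " + store.cache.get(1).state);
        // prints "EXPIRED ACTIVE"
    }
}
```

If that reading is right, a consequence worth noting is that a record only transitions to EXPIRED when some caller triggers a cache refresh, so expiry latency is bounded by the cache refresh interval rather than by the expiry window alone.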
[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf
[ https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805308#comment-16805308 ] Doroszlai, Attila commented on HDDS-1351: - Thank you [~ajayydv] and [~xyao] for the review, and [~xyao] for committing it. > NoClassDefFoundError when running ozone genconf > --- > > Key: HDDS-1351 > URL: https://issues.apache.org/jira/browse/HDDS-1351 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: build >Affects Versions: 0.4.0 >Reporter: Doroszlai, Attila >Assignee: Doroszlai, Attila >Priority: Major > Labels: pull-request-available > Fix For: 0.4.0 > > Attachments: HDDS-1351.001.patch, HDDS-1351.002.patch > > Time Spent: 2.5h > Remaining Estimate: 0h > > {{ozone genconf}} fails due to incomplete classpath. > Steps to reproduce: > # [build and run > Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker] > # run {{ozone genconf}} in one of the containers: > {code} > $ ozone genconf /tmp > Exception in thread "main" java.lang.NoClassDefFoundError: > com/sun/xml/bind/v2/model/annotation/AnnotationReader > at java.lang.ClassLoader.defineClass1(Native Method) > ... 
> at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) > at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) > at > org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50) > at picocli.CommandLine.execute(CommandLine.java:919) > ... > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68) > Caused by: java.lang.ClassNotFoundException: > com.sun.xml.bind.v2.model.annotation.AnnotationReader > at java.net.URLClassLoader.findClass(URLClassLoader.java:382) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ... 36 more > {code} > {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the > {{hadoop-ozone-tools}} classpath. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
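Since {{AnnotationReader}} lives in the jaxb-core jar, the fix amounts to getting that jar onto the {{hadoop-ozone-tools}} runtime classpath. A hypothetical sketch of the kind of Maven dependency declaration involved (the actual committed patch may wire up the classpath differently, and the version is assumed to be managed elsewhere):

```xml
<!-- Hypothetical sketch: declare jaxb-core explicitly so it ends up on
     the tool's runtime classpath; version assumed to come from
     dependencyManagement in a parent POM. -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-core</artifactId>
</dependency>
```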
[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf
[ https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220649 ] ASF GitHub Bot logged work on HDDS-1351: Author: ASF GitHub Bot Created on: 29/Mar/19 18:24 Start Date: 29/Mar/19 18:24 Worklog Time Spent: 10m Work Description: adoroszlai commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf URL: https://github.com/apache/hadoop/pull/660#issuecomment-478101907 Closing, since identical to `trunk` PR. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220649) Time Spent: 2h 20m (was: 2h 10m) > NoClassDefFoundError when running ozone genconf > --- > > Key: HDDS-1351 > URL: https://issues.apache.org/jira/browse/HDDS-1351 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: build >Affects Versions: 0.4.0 >Reporter: Doroszlai, Attila >Assignee: Doroszlai, Attila >Priority: Major > Labels: pull-request-available > Fix For: 0.4.0 > > Attachments: HDDS-1351.001.patch, HDDS-1351.002.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > {{ozone genconf}} fails due to incomplete classpath. > Steps to reproduce: > # [build and run > Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker] > # run {{ozone genconf}} in one of the containers: > {code} > $ ozone genconf /tmp > Exception in thread "main" java.lang.NoClassDefFoundError: > com/sun/xml/bind/v2/model/annotation/AnnotationReader > at java.lang.ClassLoader.defineClass1(Native Method) > ... 
> at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) > at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) > at > org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50) > at picocli.CommandLine.execute(CommandLine.java:919) > ... > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68) > Caused by: java.lang.ClassNotFoundException: > com.sun.xml.bind.v2.model.annotation.AnnotationReader > at java.net.URLClassLoader.findClass(URLClassLoader.java:382) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ... 36 more > {code} > {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the > {{hadoop-ozone-tools}} classpath. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf
[ https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220650&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220650 ] ASF GitHub Bot logged work on HDDS-1351: Author: ASF GitHub Bot Created on: 29/Mar/19 18:24 Start Date: 29/Mar/19 18:24 Worklog Time Spent: 10m Work Description: adoroszlai commented on pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf URL: https://github.com/apache/hadoop/pull/660 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220650) Time Spent: 2.5h (was: 2h 20m) > NoClassDefFoundError when running ozone genconf > --- > > Key: HDDS-1351 > URL: https://issues.apache.org/jira/browse/HDDS-1351 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: build >Affects Versions: 0.4.0 >Reporter: Doroszlai, Attila >Assignee: Doroszlai, Attila >Priority: Major > Labels: pull-request-available > Fix For: 0.4.0 > > Attachments: HDDS-1351.001.patch, HDDS-1351.002.patch > > Time Spent: 2.5h > Remaining Estimate: 0h > > {{ozone genconf}} fails due to incomplete classpath. > Steps to reproduce: > # [build and run > Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker] > # run {{ozone genconf}} in one of the containers: > {code} > $ ozone genconf /tmp > Exception in thread "main" java.lang.NoClassDefFoundError: > com/sun/xml/bind/v2/model/annotation/AnnotationReader > at java.lang.ClassLoader.defineClass1(Native Method) > ... 
> at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) > at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) > at > org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73) > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50) > at picocli.CommandLine.execute(CommandLine.java:919) > ... > at > org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68) > Caused by: java.lang.ClassNotFoundException: > com.sun.xml.bind.v2.model.annotation.AnnotationReader > at java.net.URLClassLoader.findClass(URLClassLoader.java:382) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ... 36 more > {code} > {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the > {{hadoop-ozone-tools}} classpath. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805294#comment-16805294 ] Hadoop QA commented on HDDS-1358: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HDDS-1358 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDDS-1358 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964232/HDDS-1358-000.patch | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2607/console | | Powered by | Apache Yetus 0.10.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Recon Server REST API not working as expected. > -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch > > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer
[ https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805291#comment-16805291 ] Hadoop QA commented on HDFS-14355: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 3s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}129m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14355 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964212/HDFS-14355.008.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux eb789d38b149 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7dc0ecc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26545/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26545/testReport/ | | Max. process+thread count | 5309 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
[jira] [Commented] (HDFS-14394) Add -std=c99 / -std=gnu99 to libhdfs compile flags
[ https://issues.apache.org/jira/browse/HDFS-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805292#comment-16805292 ] Eric Yang commented on HDFS-14394: -- What do people think about refactoring the Hadoop-hdfs-native-client project into several sub-projects? This would make it easier to pass C_FLAGS to the individual sub-projects to set the std flags. The cmake-maven-plugin could also give better control over building architecture-specific binaries than hadoop-maven-plugins does. > Add -std=c99 / -std=gnu99 to libhdfs compile flags > -- > > Key: HDFS-14394 > URL: https://issues.apache.org/jira/browse/HDFS-14394 > Project: Hadoop HDFS > Issue Type: Task > Components: hdfs-client, libhdfs, native >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > > libhdfs compilation currently does not enforce a minimum required C version. > As of today, the libhdfs build on Hadoop QA works, but when built on a > machine with an outdated gcc / cc version where C89 is the default, > compilation fails due to errors such as: > {code} > /build/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.c:106:5: > error: ‘for’ loop initial declarations are only allowed in C99 mode > for (int i = 0; i < numCachedClasses; i++) { > ^ > /build/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.c:106:5: > note: use option -std=c99 or -std=gnu99 to compile your code > {code} > We should add the -std=c99 / -std=gnu99 flags to libhdfs compilation so that > we can enforce C99 as the minimum required version. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
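For reference, a minimal sketch of how C99 could be enforced in a CMake build. The variables used here are standard CMake (3.1+); where exactly this would live in the libhdfs CMakeLists, and whether the project would prefer the flag-based form, is an assumption:

```cmake
# Hypothetical sketch: require C99 so `for (int i = 0; ...)` style
# declarations compile even where the toolchain defaults to C89.
set(CMAKE_C_STANDARD 99)
set(CMAKE_C_STANDARD_REQUIRED ON)

# Alternative, flag-based form (gnu99 if GNU extensions are needed):
# set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c99")
```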
[jira] [Commented] (HDDS-1337) HandleGroupMismatchException in OzoneClient
[ https://issues.apache.org/jira/browse/HDDS-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805286#comment-16805286 ] Shashikant Banerjee commented on HDDS-1337: --- Patch v1 is rebased on top of HDDS-1312 and Ratis-511. > HandleGroupMismatchException in OzoneClient > --- > > Key: HDDS-1337 > URL: https://issues.apache.org/jira/browse/HDDS-1337 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Client >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: Blocker > Fix For: 0.4.0 > > Attachments: HDDS-1337.000.patch, HDDS-1337.001.patch > > > If a pipeline gets destroyed, the ozone client may hit > GroupMismatchException from Ratis. In such cases, the client should exclude > the pipeline and retry the write to a different block. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
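The exclude-and-retry behaviour described in the issue can be sketched as follows. This is a hypothetical illustration, not the actual Ozone client code — the class and interface names are invented for the example:

```java
import java.util.*;

// Hypothetical sketch of exclude-and-retry: on a group-mismatch
// failure, remember the bad pipeline and allocate the next block on a
// pipeline that avoids it. Names do not match the real Ozone client.
public class ExcludeAndRetrySketch {

    // Stand-in for Ratis' GroupMismatchException.
    static class GroupMismatchException extends Exception {
        final String pipelineId;
        GroupMismatchException(String pipelineId) { this.pipelineId = pipelineId; }
    }

    interface BlockAllocator {
        // Returns the pipeline chosen for a new block, avoiding excluded ones.
        String allocate(Set<String> excludedPipelines);
    }

    interface BlockWriter {
        void write(String pipelineId) throws GroupMismatchException;
    }

    static String writeWithRetry(BlockAllocator allocator, BlockWriter writer,
                                 int maxAttempts) throws Exception {
        Set<String> excluded = new HashSet<>();
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            String pipeline = allocator.allocate(excluded);
            try {
                writer.write(pipeline);
                return pipeline;              // write succeeded on this pipeline
            } catch (GroupMismatchException e) {
                excluded.add(e.pipelineId);   // never pick this pipeline again
            }
        }
        throw new Exception("write failed after " + maxAttempts + " attempts");
    }
}
```

The key design point is that the exclusion list is carried across retries, so a destroyed pipeline is tried at most once rather than being re-selected on every attempt.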
[jira] [Updated] (HDDS-1312) Add more unit tests to verify BlockOutputStream functionalities
[ https://issues.apache.org/jira/browse/HDDS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDDS-1312: -- Attachment: HDDS-1312.003.patch > Add more unit tests to verify BlockOutputStream functionalities > --- > > Key: HDDS-1312 > URL: https://issues.apache.org/jira/browse/HDDS-1312 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Affects Versions: 0.4.0 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Blocker > Attachments: HDDS-1312.000.patch, HDDS-1312.001.patch, > HDDS-1312.002.patch, HDDS-1312.003.patch > > > This jira aims to add more unit test coverage for BlockOutputStream > functionalities. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1337) HandleGroupMismatchException in OzoneClient
[ https://issues.apache.org/jira/browse/HDDS-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDDS-1337: -- Attachment: HDDS-1337.001.patch > HandleGroupMismatchException in OzoneClient > --- > > Key: HDDS-1337 > URL: https://issues.apache.org/jira/browse/HDDS-1337 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Client >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Labels: Blocker > Fix For: 0.4.0 > > Attachments: HDDS-1337.000.patch, HDDS-1337.001.patch > > > If a pipeline gets destroyed, the ozone client may hit > GroupMismatchException from Ratis. In such cases, the client should exclude > the pipeline and retry the write to a different block. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1312) Add more unit tests to verify BlockOutputStream functionalities
[ https://issues.apache.org/jira/browse/HDDS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805282#comment-16805282 ] Shashikant Banerjee commented on HDDS-1312: --- patch v3 addresses the unit test failure. > Add more unit tests to verify BlockOutputStream functionalities > --- > > Key: HDDS-1312 > URL: https://issues.apache.org/jira/browse/HDDS-1312 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Affects Versions: 0.4.0 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Blocker > Attachments: HDDS-1312.000.patch, HDDS-1312.001.patch, > HDDS-1312.002.patch, HDDS-1312.003.patch > > > This jira aims to add more unit test coverage for BlockOutputStream > functionalities. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations
[ https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-14316: --- Attachment: HDFS-14316-HDFS-13891.015.patch > RBF: Support unavailable subclusters for mount points with multiple > destinations > > > Key: HDFS-14316 > URL: https://issues.apache.org/jira/browse/HDFS-14316 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-14316-HDFS-13891.000.patch, > HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, > HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, > HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, > HDFS-14316-HDFS-13891.007.patch, HDFS-14316-HDFS-13891.008.patch, > HDFS-14316-HDFS-13891.009.patch, HDFS-14316-HDFS-13891.010.patch, > HDFS-14316-HDFS-13891.011.patch, HDFS-14316-HDFS-13891.012.patch, > HDFS-14316-HDFS-13891.013.patch, HDFS-14316-HDFS-13891.014.patch, > HDFS-14316-HDFS-13891.015.patch > > > Currently mount points with multiple destinations (e.g., HASH_ALL) fail > writes when the destination subcluster is down. We need an option to allow > writing in other subclusters when one is down. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1358) Recon Server REST API not working as expected.
[ https://issues.apache.org/jira/browse/HDDS-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated HDDS-1358: Attachment: HDDS-1358-000.patch Status: Patch Available (was: Open) * Fixed the Guice Jersey-hk2 integration. * Added blocks to KeyMetadata * Minor fixes/improvements. > Recon Server REST API not working as expected. > -- > > Key: HDDS-1358 > URL: https://issues.apache.org/jira/browse/HDDS-1358 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 0.5.0 > > Attachments: HDDS-1358-000.patch > > > Guice Jetty integration that is being used for Recon Server API layer is not > working as expected. Fixing that in this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1067) freon run on client gets hung when two of the datanodes are down in 3 datanode cluster
[ https://issues.apache.org/jira/browse/HDDS-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805260#comment-16805260 ] Nilotpal Nandi commented on HDDS-1067: -- Thanks [~shashikant]. I have uploaded the patch > freon run on client gets hung when two of the datanodes are down in 3 > datanode cluster > -- > > Key: HDDS-1067 > URL: https://issues.apache.org/jira/browse/HDDS-1067 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Reporter: Nilotpal Nandi >Assignee: Nilotpal Nandi >Priority: Major > Attachments: HDDS-1067.001.patch, stack_file.txt > > > steps taken : > > # created 3 node docker cluster. > # wrote a key > # created partition such that 2 out of 3 datanodes cannot communicate with > any other node. > # Third datanode can communicate with scm, om and the client. > # ran freon to write key > Observation : > - > freon run is hung. There is no timeout. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1067) freon run on client gets hung when two of the datanodes are down in 3 datanode cluster
[ https://issues.apache.org/jira/browse/HDDS-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nilotpal Nandi updated HDDS-1067: - Status: Patch Available (was: Open) > freon run on client gets hung when two of the datanodes are down in 3 > datanode cluster > -- > > Key: HDDS-1067 > URL: https://issues.apache.org/jira/browse/HDDS-1067 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Reporter: Nilotpal Nandi >Assignee: Nilotpal Nandi >Priority: Major > Attachments: HDDS-1067.001.patch, stack_file.txt > > > steps taken : > > # created 3 node docker cluster. > # wrote a key > # created partition such that 2 out of 3 datanodes cannot communicate with > any other node. > # Third datanode can communicate with scm, om and the client. > # ran freon to write key > Observation : > - > freon run is hung. There is no timeout. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-1067) freon run on client gets hung when two of the datanodes are down in 3 datanode cluster
[ https://issues.apache.org/jira/browse/HDDS-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nilotpal Nandi updated HDDS-1067: - Attachment: HDDS-1067.001.patch > freon run on client gets hung when two of the datanodes are down in 3 > datanode cluster > -- > > Key: HDDS-1067 > URL: https://issues.apache.org/jira/browse/HDDS-1067 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Reporter: Nilotpal Nandi >Assignee: Nilotpal Nandi >Priority: Major > Attachments: HDDS-1067.001.patch, stack_file.txt > > > steps taken : > > # created 3 node docker cluster. > # wrote a key > # created partition such that 2 out of 3 datanodes cannot communicate with > any other node. > # Third datanode can communicate with scm, om and the client. > # ran freon to write key > Observation : > - > freon run is hung. There is no timeout. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode
[ https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220641&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220641 ] ASF GitHub Bot logged work on HDDS-1255: Author: ASF GitHub Bot Created on: 29/Mar/19 17:54 Start Date: 29/Mar/19 17:54 Worklog Time Spent: 10m Work Description: elek commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#issuecomment-478092146 Thanks the update @ajayydv. +1 if jenkins is passed... This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220641) Time Spent: 6h 20m (was: 6h 10m) > Refactor ozone acceptance test to allow run in secure mode > -- > > Key: HDDS-1255 > URL: https://issues.apache.org/jira/browse/HDDS-1255 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: pull-request-available > Time Spent: 6h 20m > Remaining Estimate: 0h > > Refactor ozone acceptance test to allow run in secure mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode
[ https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220640&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220640 ] ASF GitHub Bot logged work on HDDS-1255: Author: ASF GitHub Bot Created on: 29/Mar/19 17:53 Start Date: 29/Mar/19 17:53 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#issuecomment-478091889 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 24 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1099 | trunk passed | | +1 | compile | 64 | trunk passed | | +1 | mvnsite | 27 | trunk passed | | +1 | shadedclient | 733 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 21 | trunk passed | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 18 | dist in the patch failed. | | +1 | compile | 18 | the patch passed | | +1 | javac | 18 | the patch passed | | +1 | mvnsite | 20 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | shelldocs | 14 | There were no new shelldocs issues. | | -1 | whitespace | 0 | The patch 3 line(s) with tabs. | | +1 | shadedclient | 799 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 17 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 21 | dist in the patch passed. | | +1 | asflicense | 28 | The patch does not generate ASF License warnings. 
| | | | 3043 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-632/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/632 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient shellcheck shelldocs | | uname | Linux 3ee5ce841a10 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 7dc0ecc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/9/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/9/artifact/out/whitespace-tabs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/9/testReport/ | | Max. process+thread count | 341 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/9/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220640) Time Spent: 6h 10m (was: 6h) > Refactor ozone acceptance test to allow run in secure mode > -- > > Key: HDDS-1255 > URL: https://issues.apache.org/jira/browse/HDDS-1255 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: pull-request-available > Time Spent: 6h 10m > Remaining Estimate: 0h > > Refactor ozone acceptance test to allow run in secure mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
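The "-1 whitespace" vote in the Yetus report above means the patch introduced tab characters. A quick way to spot such lines locally before submitting a patch (a minimal sketch; the sample diff below is fabricated for illustration and stands in for the real patch file) is:

```shell
#!/bin/sh
# Write a tiny sample unified diff (placeholder content, not the real patch).
printf '+++ b/test.sh\n+good line\n+\tbad line with tab\n' > /tmp/sample.diff

# Find added lines ("+...") that contain a literal tab character.
TAB="$(printf '\t')"
grep -n "^+.*${TAB}" /tmp/sample.diff
```

This prints each offending line with its line number, which is the same information the CI's whitespace-tabs report links to.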
[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode
[ https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220637&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220637 ] ASF GitHub Bot logged work on HDDS-1255: Author: ASF GitHub Bot Created on: 29/Mar/19 17:53 Start Date: 29/Mar/19 17:53 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270516859 ## File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot ## @@ -13,9 +13,15 @@ # See the License for the specific language governing permissions and # limitations under the License. -*** Keywords *** +*** Settings *** +Library OperatingSystem +Library String +Library BuiltIn +*** Variables *** Review comment: whitespace:tabs in line This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220637) Time Spent: 5h 40m (was: 5.5h) > Refactor ozone acceptance test to allow run in secure mode > -- > > Key: HDDS-1255 > URL: https://issues.apache.org/jira/browse/HDDS-1255 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: pull-request-available > Time Spent: 5h 40m > Remaining Estimate: 0h > > Refactor ozone acceptance test to allow run in secure mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode
[ https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220639&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220639 ] ASF GitHub Bot logged work on HDDS-1255: Author: ASF GitHub Bot Created on: 29/Mar/19 17:53 Start Date: 29/Mar/19 17:53 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270516876 ## File path: hadoop-ozone/dist/src/main/smoketest/test.sh ## @@ -69,6 +69,12 @@ execute_tests(){ echo " Output dir:$DIR/$RESULT_DIR" echo " Command to rerun: ./test.sh --keep --env $COMPOSE_DIR $TESTS" echo "-" + if [ ${COMPOSE_DIR} == "ozonesecure" ]; then + SECURITY_ENABLED="true" + else + SECURITY_ENABLED="false" + fi Review comment: whitespace:tabs in line This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220639) Time Spent: 6h (was: 5h 50m) > Refactor ozone acceptance test to allow run in secure mode > -- > > Key: HDDS-1255 > URL: https://issues.apache.org/jira/browse/HDDS-1255 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: pull-request-available > Time Spent: 6h > Remaining Estimate: 0h > > Refactor ozone acceptance test to allow run in secure mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
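The test.sh snippet flagged above sets SECURITY_ENABLED with tab-indented lines and a bash-only `==` comparison inside `[ ]`. A space-indented, POSIX-sh equivalent (a sketch of the same logic, not the committed code; the COMPOSE_DIR value is a placeholder) would be:

```shell
#!/bin/sh
# Placeholder value standing in for the directory chosen by test.sh.
COMPOSE_DIR="ozonesecure"

# POSIX string comparison: quote the variable and use a single "=".
if [ "${COMPOSE_DIR}" = "ozonesecure" ]; then
  SECURITY_ENABLED="true"
else
  SECURITY_ENABLED="false"
fi

echo "SECURITY_ENABLED=${SECURITY_ENABLED}"
```

Indenting with spaces avoids the whitespace-tabs warning, and the quoted `=` test also works under dash and other strictly POSIX shells.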
[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode
[ https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220638&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220638 ] ASF GitHub Bot logged work on HDDS-1255: Author: ASF GitHub Bot Created on: 29/Mar/19 17:53 Start Date: 29/Mar/19 17:53 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270516870 ## File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-s3.robot ## @@ -0,0 +1,44 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +*** Settings *** +Documentation Smoke test to start cluster with docker-compose environments. 
+Library OperatingSystem +Library String +Library BuiltIn +Resource../commonlib.robot +Resource../s3/commonawslib.robot + +*** Variables *** +${ENDPOINT_URL} http://s3g:9878 + +*** Keywords *** +Setup volume names +${random}Generate Random String 2 [NUMBERS] +Set Suite Variable ${volume1}fstest${random} +Set Suite Variable ${volume2}fstest2${random} + +*** Test Cases *** +Secure S3 test Success +Run Keyword Setup s3 tests +${output} = Execute aws s3api --endpoint-url ${ENDPOINT_URL} create-bucket --bucket bucket-test123 +${output} = Execute aws s3api --endpoint-url ${ENDPOINT_URL} list-buckets +Should contain ${output} bucket-test123 + +Secure S3 test Failure Review comment: whitespace:tabs in line This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220638) Time Spent: 5h 50m (was: 5h 40m) > Refactor ozone acceptance test to allow run in secure mode > -- > > Key: HDDS-1255 > URL: https://issues.apache.org/jira/browse/HDDS-1255 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: pull-request-available > Time Spent: 5h 50m > Remaining Estimate: 0h > > Refactor ozone acceptance test to allow run in secure mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run
[ https://issues.apache.org/jira/browse/HDDS-1211?focusedWorklogId=220631&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220631 ] ASF GitHub Bot logged work on HDDS-1211: Author: ASF GitHub Bot Created on: 29/Mar/19 17:40 Start Date: 29/Mar/19 17:40 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run URL: https://github.com/apache/hadoop/pull/543#issuecomment-478087431 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 26 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 982 | trunk passed | | +1 | compile | 30 | trunk passed | | +1 | checkstyle | 22 | trunk passed | | +1 | mvnsite | 32 | trunk passed | | +1 | shadedclient | 711 | branch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test | | +1 | findbugs | 0 | trunk passed | | +1 | javadoc | 19 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 31 | the patch passed | | +1 | compile | 25 | the patch passed | | +1 | javac | 25 | the patch passed | | -0 | checkstyle | 15 | hadoop-ozone/integration-test: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 | mvnsite | 26 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 737 | patch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test | | +1 | findbugs | 0 | the patch passed | | +1 | javadoc | 17 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 622 | integration-test in the patch failed. 
| | +1 | asflicense | 31 | The patch does not generate ASF License warnings. | | | | 3408 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-543/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/543 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 1ac92dceb2d4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 7dc0ecc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/3/artifact/out/diff-checkstyle-hadoop-ozone_integration-test.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/3/artifact/out/patch-unit-hadoop-ozone_integration-test.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/3/testReport/ | | Max. process+thread count | 4412 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/3/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 220631) Time Spent: 1h (was: 50m) > Test SCMChillMode failing randomly in Jenkins run > - > > Key: HDDS-1211 > URL: https://issues.apache.org/jira/browse/HDDS-1211 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available, pushed-to-craterlake > Time Spent: 1h > Remaining Estimate: 0h > > java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks
[jira] [Commented] (HDFS-14397) Backport HADOOP-15684 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805244#comment-16805244 ] Chao Sun commented on HDFS-14397: - Test fails because this needs HDFS-10536. Filed HDFS-14399 and will upload a new patch after that is resolved. > Backport HADOOP-15684 to branch-2 > - > > Key: HDFS-14397 > URL: https://issues.apache.org/jira/browse/HDFS-14397 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Minor > Attachments: HDFS-14397-branch-2.000.patch > > > As multi-SBN feature is already backported to branch-2, this is a follow-up > to backport HADOOP-15684.
[jira] [Work logged] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run
[ https://issues.apache.org/jira/browse/HDDS-1211?focusedWorklogId=220630&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220630 ] ASF GitHub Bot logged work on HDDS-1211: Author: ASF GitHub Bot Created on: 29/Mar/19 17:38 Start Date: 29/Mar/19 17:38 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on issue #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run URL: https://github.com/apache/hadoop/pull/543#issuecomment-478086839 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 25 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 986 | trunk passed | | -1 | compile | 30 | integration-test in trunk failed. | | +1 | checkstyle | 21 | trunk passed | | -1 | mvnsite | 31 | integration-test in trunk failed. | | +1 | shadedclient | 726 | branch has no errors when building and testing our client artifacts. | | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test | | +1 | findbugs | 0 | trunk passed | | +1 | javadoc | 19 | trunk passed | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 25 | integration-test in the patch failed. | | -1 | compile | 24 | integration-test in the patch failed. | | -1 | javac | 24 | integration-test in the patch failed. | | -0 | checkstyle | 16 | hadoop-ozone/integration-test: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | -1 | mvnsite | 26 | integration-test in the patch failed. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 732 | patch has no errors when building and testing our client artifacts. 
| | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test | | +1 | findbugs | 0 | the patch passed | | +1 | javadoc | 18 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 27 | integration-test in the patch failed. | | +1 | asflicense | 28 | The patch does not generate ASF License warnings. | | | | 2819 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/543 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6b9cbf5f585c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 7dc0ecc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/branch-compile-hadoop-ozone_integration-test.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/patch-compile-hadoop-ozone_integration-test.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/patch-compile-hadoop-ozone_integration-test.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/diff-checkstyle-hadoop-ozone_integration-test.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt | | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/artifact/out/patch-unit-hadoop-ozone_integration-test.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-543/4/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go
[jira] [Updated] (HDFS-14399) Backport HDFS-10536 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-14399: Status: Patch Available (was: Open) > Backport HDFS-10536 to branch-2 > --- > > Key: HDFS-14399 > URL: https://issues.apache.org/jira/browse/HDFS-14399 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14399-branch-2.000.patch > > > As multi-SBN feature is already backported to branch-2, this is a follow-up > to backport HDFS-10536.
[jira] [Updated] (HDFS-14399) Backport HDFS-10536 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-14399: Description: As multi-SBN feature is already backported to branch-2, this is a follow-up to backport HDFS-10536. was: As multi-SBN feature is already backported to branch-2, this is a follow-up to backport HADOOP-10536. > Backport HDFS-10536 to branch-2 > --- > > Key: HDFS-14399 > URL: https://issues.apache.org/jira/browse/HDFS-14399 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14399-branch-2.000.patch > > > As multi-SBN feature is already backported to branch-2, this is a follow-up > to backport HDFS-10536.
[jira] [Updated] (HDFS-14399) Backport HDFS-10536 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-14399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-14399: Attachment: HDFS-14399-branch-2.000.patch > Backport HDFS-10536 to branch-2 > --- > > Key: HDFS-14399 > URL: https://issues.apache.org/jira/browse/HDFS-14399 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Critical > Attachments: HDFS-14399-branch-2.000.patch > > > As multi-SBN feature is already backported to branch-2, this is a follow-up > to backport HDFS-10536.
[jira] [Created] (HDFS-14399) Backport HDFS-10536 to branch-2
Chao Sun created HDFS-14399: --- Summary: Backport HDFS-10536 to branch-2 Key: HDFS-14399 URL: https://issues.apache.org/jira/browse/HDFS-14399 Project: Hadoop HDFS Issue Type: Bug Reporter: Chao Sun Assignee: Chao Sun As multi-SBN feature is already backported to branch-2, this is a follow-up to backport HADOOP-10536.
[jira] [Comment Edited] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805226#comment-16805226 ] Bharat Viswanadham edited comment on HDDS-1300 at 3/29/19 5:31 PM: --- Hi [~ljain] But createFile internally calls openKey, and that internally calls allocateBlock. So, this createFile should happen only leader. {quote}Thanks for reviewing the patch! v8 patch removes the allocateBlock call in createFile function. The allocateBlock call can be added in a followup jira. {quote} Not sure what you mean here? (Because, internally openKey calls allocateBlock) And I see this Jira got committed. So we want to open a new Jira and address this? Or now when createFile is called, we pass length zero, so because of this, we don't call allocateBlock? was (Author: bharatviswa): Hi [~ljain] But createFile internally calls openKey, and that internally calls allocateBlock. So, this createFile should happen only leader. {quote}Thanks for reviewing the patch! v8 patch removes the allocateBlock call in createFile function. The allocateBlock call can be added in a followup jira. {quote} Not sure what you mean here? (Because, internally openKey calls allocateBlock) And I see this Jira got committed. So we want to open a new Jira and address this? > Optimize non-recursive ozone filesystem apis > > > Key: HDDS-1300 > URL: https://issues.apache.org/jira/browse/HDDS-1300 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Filesystem, Ozone Manager >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1300.001.patch, HDDS-1300.002.patch, > HDDS-1300.003.patch, HDDS-1300.004.patch, HDDS-1300.005.patch, > HDDS-1300.006.patch, HDDS-1300.007.patch, HDDS-1300.008.patch > > > This Jira aims to optimise non recursive apis in ozone file system. 
The Jira > would add support for such apis in Ozone manager in order to reduce the > number of rpc calls to Ozone Manager.
[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis
[ https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16805226#comment-16805226 ] Bharat Viswanadham commented on HDDS-1300: -- Hi [~ljain] But createFile internally calls openKey, and that internally calls allocateBlock. So, this createFile should happen only on the leader. {quote}Thanks for reviewing the patch! v8 patch removes the allocateBlock call in createFile function. The allocateBlock call can be added in a followup jira. {quote} Not sure what you mean here? (Because, internally openKey calls allocateBlock) And I see this Jira got committed. So do we want to open a new Jira and address this? > Optimize non-recursive ozone filesystem apis > > > Key: HDDS-1300 > URL: https://issues.apache.org/jira/browse/HDDS-1300 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Filesystem, Ozone Manager >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1300.001.patch, HDDS-1300.002.patch, > HDDS-1300.003.patch, HDDS-1300.004.patch, HDDS-1300.005.patch, > HDDS-1300.006.patch, HDDS-1300.007.patch, HDDS-1300.008.patch > > > This Jira aims to optimise non recursive apis in ozone file system. The Jira > would add support for such apis in Ozone manager in order to reduce the > number of rpc calls to Ozone Manager.