Issues related to usage of the Welford method in org.apache.hadoop.metrics2.util.SampleStat
Hi Team, In SampleStat, I found that the Welford method is used to calculate the variance in the following method: public SampleStat add(long nSamples, double x). I don't understand the meaning of the parameter *x*. Does it mean the sum of the n samples? If so, I think the new mean calculated by a1 = a0 + (x - a0) / numSamples is not correct; it should be a1 = a0 + (x - nSamples * a0) / numSamples. However, there would then be no way to calculate the new variance. Could you help me understand the equations? Thanks very much in advance. Best regards, Yanghong Zhong yangzh...@ebay.com
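For reference, in Welford's formulation *x* is a single new sample, not a sum: the running mean and the running sum of squared deviations are updated together from that one value. A minimal self-contained sketch of the textbook algorithm (class and method names here are illustrative, not the actual org.apache.hadoop.metrics2.util.SampleStat API):

```java
/**
 * Minimal sketch of Welford's online mean/variance algorithm.
 * Illustrative only -- not the actual SampleStat code.
 */
public class WelfordSketch {
    private long n = 0;       // samples seen so far
    private double mean = 0;  // running mean (a1 in the email's notation)
    private double m2 = 0;    // running sum of squared deviations from the mean

    /** Add one sample x; the update a1 = a0 + (x - a0) / n only makes sense if x is one value. */
    public void add(double x) {
        n++;
        double delta = x - mean;    // x - a0
        mean += delta / n;          // a1 = a0 + (x - a0) / n
        m2 += delta * (x - mean);   // uses deviations from both the old and the new mean
    }

    public double mean() { return mean; }

    /** Sample variance (n - 1 denominator); NaN until two samples exist. */
    public double variance() { return n > 1 ? m2 / (n - 1) : Double.NaN; }

    public static void main(String[] args) {
        WelfordSketch s = new WelfordSketch();
        for (double x : new double[] {1, 2, 3, 4}) {
            s.add(x);
        }
        System.out.println(s.mean());     // 2.5
        System.out.println(s.variance()); // 5/3 ~= 1.6667
    }
}
```

Note that the questioner's algebra is right on its own terms: if x really were the sum of nSamples values, the mean update would indeed need the (x - nSamples * a0) / numSamples form, and the variance could not be updated from the sum alone, since that also requires the individual values (or their sum of squares).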
[jira] [Resolved] (HADOOP-13424) namenode connection timeout in cluster with 65 machines
[ https://issues.apache.org/jira/browse/HADOOP-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal resolved HADOOP-13424. Resolution: Invalid [~wanglaichao] Jira is not a support channel. Please use u...@hadoop.apache.org. > namenode connection timeout in cluster with 65 machines > -- > > Key: HADOOP-13424 > URL: https://issues.apache.org/jira/browse/HADOOP-13424 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.4.1 > Environment: hadoop 2.4.1 >Reporter: wanglaichao > > Before, our cluster had 50 nodes and it ran OK. Recently we added 15 nodes, and it > always reports errors with connection timeouts. Who can help me, thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Closed] (HADOOP-13424) namenode connection timeout in cluster with 65 machines
[ https://issues.apache.org/jira/browse/HADOOP-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal closed HADOOP-13424. -- > namenode connection timeout in cluster with 65 machines > -- > > Key: HADOOP-13424 > URL: https://issues.apache.org/jira/browse/HADOOP-13424 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.4.1 > Environment: hadoop 2.4.1 >Reporter: wanglaichao > > Before, our cluster had 50 nodes and it ran OK. Recently we added 15 nodes, and it > always reports errors with connection timeouts. Who can help me, thanks.
[jira] [Created] (HADOOP-13424) namenode connection timeout in cluster with 65 machines
wanglaichao created HADOOP-13424: Summary: namenode connection timeout in cluster with 65 machines Key: HADOOP-13424 URL: https://issues.apache.org/jira/browse/HADOOP-13424 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 2.4.1 Environment: hadoop 2.4.1 Reporter: wanglaichao Before, our cluster had 50 nodes and it ran OK. Recently we added 15 nodes, and it always reports errors with connection timeouts. Who can help me, thanks.
[jira] [Commented] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393103#comment-15393103 ] Hadoop QA commented on HADOOP-13032: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 10 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 7s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 
35s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 33s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 26s{color} | {color:red} hadoop-mapreduce-client-core in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 26s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s{color} | {color:red} hadoop-openstack in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} hadoop-azure-datalake in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 16s{color} | {color:green} root generated 0 new + 701 unchanged - 8 fixed = 701 total (was 709) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 49s{color} | {color:orange} root: The patch generated 48 new + 1389 unchanged - 40 fixed = 1437 total (was 1429) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 31s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 23s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} |
[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token
[ https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393018#comment-15393018 ] Hadoop QA commented on HADOOP-13381: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 9s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12820067/HADOOP-13381.03.patch | | JIRA Issue | HADOOP-13381 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5427676dfd36 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d383bfd | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10081/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10081/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > KMS clients running in the same JVM should use updated KMS Delegation Token > --- > >
[jira] [Commented] (HADOOP-13369) [umbrella] Fix javadoc warnings by JDK8 on trunk
[ https://issues.apache.org/jira/browse/HADOOP-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392998#comment-15392998 ] Tsuyoshi Ozawa commented on HADOOP-13369: - cool! Thanks for your help. > [umbrella] Fix javadoc warnings by JDK8 on trunk > > > Key: HADOOP-13369 > URL: https://issues.apache.org/jira/browse/HADOOP-13369 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Tsuyoshi Ozawa > Labels: newbie > > After migrating to JDK8, lots of warnings show up. We should fix them overall. > {quote} > [WARNING] ^[WARNING] > /home/ubuntu/hadoopdev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/ZKClient.java:53: > warning: no description for @throws > ... > [WARNING] > /home/ubuntu/hadoopdev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/LeveldbIterator.java:53: > warning: no @param for options > {quote}
[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token
[ https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392948#comment-15392948 ] Xiao Chen commented on HADOOP-13381: Patch 3 to make findbugs happy. [~asuresh], could you please take a look at this when you have a chance? I feel this is the better way to fix the issue: if there's a DT present, then the underlying user doesn't matter, and we doAs the DT's UGI to use it. Otherwise, we keep the existing behavior. Thanks in advance. > KMS clients running in the same JVM should use updated KMS Delegation Token > --- > > Key: HADOOP-13381 > URL: https://issues.apache.org/jira/browse/HADOOP-13381 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HADOOP-13381.01.patch, HADOOP-13381.02.patch, > HADOOP-13381.03.patch > > > When {{/tmp}} is set up as an EZ, one may experience YARN log aggregation > failure after the very first KMS token is expired. The MR job itself runs > fine though. > When this happens, YARN NodeManager's log will show > {{AuthenticationException}} with {{token is expired}} / {{token can't be > found in cache}}, depending on whether the expired token is removed by the > background or not.
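The "doAs the DT's UGI" approach described in the comment follows the standard JAAS pattern that Hadoop's UserGroupInformation wraps. A stand-alone illustration using plain javax.security.auth (the Subject below merely stands in for the delegation token's UGI; this is not the actual patch code):

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosPrincipal;

public class DoAsSketch {
    public static void main(String[] args) {
        // Stand-in for the DT's UGI: a Subject carrying the token owner's principal.
        Subject tokenOwner = new Subject();
        tokenOwner.getPrincipals().add(new KerberosPrincipal("alice@EXAMPLE.COM"));

        // Execute the request under the token owner's identity, regardless of
        // which user the surrounding code happens to be running as.
        String who = Subject.doAs(tokenOwner, (PrivilegedAction<String>) () ->
                tokenOwner.getPrincipals().iterator().next().getName());
        System.out.println("ran as " + who); // ran as alice@EXAMPLE.COM
    }
}
```

The point of the pattern is exactly what the comment states: when a delegation token exists, the identity attached to the token (not the process's current user) is the one the request runs as.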
[jira] [Updated] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token
[ https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13381: --- Attachment: HADOOP-13381.03.patch > KMS clients running in the same JVM should use updated KMS Delegation Token > --- > > Key: HADOOP-13381 > URL: https://issues.apache.org/jira/browse/HADOOP-13381 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HADOOP-13381.01.patch, HADOOP-13381.02.patch, > HADOOP-13381.03.patch > > > When {{/tmp}} is set up as an EZ, one may experience YARN log aggregation > failure after the very first KMS token is expired. The MR job itself runs > fine though. > When this happens, YARN NodeManager's log will show > {{AuthenticationException}} with {{token is expired}} / {{token can't be > found in cache}}, depending on whether the expired token is removed by the > background or not.
[jira] [Commented] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects
[ https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392937#comment-15392937 ] Gour Saha commented on HADOOP-13382: [~mattf] this change in branch-2 breaks the Apache Slider project, which depends on hadoop-common. It is an incompatible change. > remove unneeded commons-httpclient dependencies from POM files in Hadoop and > sub-projects > - > > Key: HADOOP-13382 > URL: https://issues.apache.org/jira/browse/HADOOP-13382 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.8.0 >Reporter: Matt Foley >Assignee: Matt Foley > Attachments: HADOOP-13382-branch-2.000.patch, > HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch > > > In branch-2.8 and later, the patches for various child and related bugs > listed in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, > HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of > "commons-httpclient" from Hadoop and its sub-projects (except for > hadoop-tools/hadoop-openstack; see HADOOP-11614). > However, after incorporating these patches, "commons-httpclient" is still > listed as a dependency in these POM files: > * hadoop-project/pom.xml > * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml > We wish to remove these, but since commons-httpclient is still used in many > files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to > * hadoop-tools/hadoop-openstack/pom.xml > (We'll add a note to HADOOP-11614 to undo this when commons-httpclient is > removed from hadoop-openstack.) > In 2.8, this was mostly done by HADOOP-12552, but the version info formerly > inherited from hadoop-project/pom.xml also needs to be added, so that is in > the branch-2.8 version of the patch.
[jira] [Updated] (HADOOP-13423) Run JDiff on trunk for Hadoop-Common and analyze results
[ https://issues.apache.org/jira/browse/HADOOP-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HADOOP-13423: Description: We need to run JDiff and make sure the first 3.0.0 alpha release doesn't include unnecessary API-incompatible changes. > Run JDiff on trunk for Hadoop-Common and analyze results > > > Key: HADOOP-13423 > URL: https://issues.apache.org/jira/browse/HADOOP-13423 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Blocker > > We need to run JDiff and make sure the first 3.0.0 alpha release doesn't > include unnecessary API-incompatible changes.
[jira] [Created] (HADOOP-13423) Run JDiff on trunk for Hadoop-Common and analyze results
Wangda Tan created HADOOP-13423: --- Summary: Run JDiff on trunk for Hadoop-Common and analyze results Key: HADOOP-13423 URL: https://issues.apache.org/jira/browse/HADOOP-13423 Project: Hadoop Common Issue Type: Bug Reporter: Wangda Tan Assignee: Wangda Tan Priority: Blocker
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392892#comment-15392892 ] Chris Nauroth commented on HADOOP-13345: [~ajfabbri] and [~eddyxu], thank you for sharing your work. I see a lot of commonality between the 2 efforts so far. I also explored something like the Layered FileSystem vs. Pluggable Metadata trade-off that you described. Specifically, I had an earlier prototype (not attached) that was a {{FilterFileSystem}}, with the intent that you could layer it over any other {{FileSystem}} implementation. I abandoned this idea when I got into implementation and found that I was going to need to coordinate more directly with the S3A logic in a way that wasn't amenable to overriding {{FileSystem}} methods. For example, I needed special case logic around {{createFakeDirectoryIfNecessary}}. It looks like you came to the same conclusion in your patch. The main difference I see is that my work focused more on consistency, with the S3 bucket still treated as source of truth, and your work focused more on performance. I hadn't tried anything with the DynamoDB lookup completely short-circuiting the S3 lookup. I think we can reconcile this though. Like you said, we can support configurable policies for different use cases. For example, if a user is willing to commit to performing all access through S3A and no external tools, then I expect it's safe for them to turn on a more aggressive caching policy that satisfies all metadata lookups from DynamoDB. Alternatively, there can be a fix-up tool like you described. This might fold into HADOOP-13311, where I proposed a new shell entry point for S3A-specific administration commands. Another interesting example in this area is GCS, which has something like the policies we are describing in terms of their {{DirectoryListCache}}. This includes an implementation like the in-memory one included in your patch. 
https://github.com/GoogleCloudPlatform/bigdata-interop/blob/1447da82f2bded2ac8493b07797a5c2483b70497/gcsio/src/main/java/com/google/cloud/hadoop/gcsio/InMemoryDirectoryListCache.java The JavaDocs advertise it as providing consistency within a single process. Like you said, there is no cache coherence across processes. [HADOOP-12876|https://issues.apache.org/jira/browse/HADOOP-12876] is slightly related. Azure Data Lake has implemented an in-memory {{FileStatus}} cache (patch not yet available). When this idea was suggested, I raised the concern about cache coherence, but system testing with that caching enabled has gone well. That's a good sign that the cache coherence problem might not cause much harm to applications in practice. I had been thinking the HADOOP-12876 work could eventually be refactored to hadoop-common for any {{FileSystem}} to use, effectively becoming something like the "dentry cache" of Hadoop. I had been thinking this could happen independent of S3Guard. We can explore further if that makes sense, or if it's really beneficial to push the caching lower into S3A itself. (Some of the internal S3 listing calls don't map exactly to {{FileSystem}} method calls.) To summarize though, I see more commonality than difference, so I'd like to proceed with collaborating on this. I'd start by creating a feature branch and folding all of the information into a shared design doc. Please let me know your thoughts. > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. 
The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model.
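The configurable-policy trade-off discussed above can be illustrated with a toy model (all names hypothetical; real S3Guard work would sit inside the S3A listing code with a DynamoDB-backed store): either S3 remains the source of truth and the metadata store only supplies entries an eventually consistent listing may miss, or the metadata store short-circuits the S3 call entirely.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

/** Toy model of the two listing policies discussed; not actual S3Guard code. */
public class ListingPolicySketch {
    enum Policy { S3_AUTHORITATIVE, METADATA_AUTHORITATIVE }

    private final Map<String, List<String>> metadataStore = new HashMap<>();
    private final Map<String, List<String>> s3 = new HashMap<>();
    private final Policy policy;

    ListingPolicySketch(Policy policy) { this.policy = policy; }

    void recordInS3(String dir, List<String> names) { s3.put(dir, names); }
    void recordInMetadataStore(String dir, List<String> names) { metadataStore.put(dir, names); }

    List<String> list(String dir) {
        if (policy == Policy.METADATA_AUTHORITATIVE && metadataStore.containsKey(dir)) {
            // Aggressive caching: skip the eventually consistent S3 listing.
            return metadataStore.get(dir);
        }
        // S3 stays the source of truth; union in metadata-store entries that a
        // stale S3 listing may not show yet.
        TreeSet<String> merged = new TreeSet<>(s3.getOrDefault(dir, List.of()));
        merged.addAll(metadataStore.getOrDefault(dir, List.of()));
        return new ArrayList<>(merged);
    }
}
```

The METADATA_AUTHORITATIVE branch matches the "performance" focus (DynamoDB lookup short-circuits S3), while the merge branch matches the "consistency" focus with the bucket as source of truth.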
[jira] [Comment Edited] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392860#comment-15392860 ] Mingliang Liu edited comment on HADOOP-13032 at 7/25/16 11:43 PM: -- The v0 patch is the first effort of refactoring the {{FileSystem$Statistics}} class to use the newly added {{StorageStatistics}}. Specifically, it: * Defines a new class {{ThreadLocalFsStorageStatistics}} as the thread-local implementation to aggregate stats. This will also be used in [HADOOP-13031], which is to separate the distance-specific rack-aware read bytes logic from {{FileSystemStorageStatistics}} to a new {{DFSRackAwareStorageStatistics}} as it's DFS-specific. * Makes {{FileSystemStorageStatistics}} use the new {{ThreadLocalFsStorageStatistics}} class instead of delegating all operations to {{FileSystem$Statistics}} * Removes the deprecated {{FileSystem$Statistics}} class * Updates all the usages of {{FileSystem$Statistics}} to use {{FileSystemStorageStatistics}} TODO: # This patch is large. We should split it into smaller ones. I plan to first separate the first two items above into a new JIRA, along with new unit tests # The MapReduce part that uses {{FileSystemStorageStatistics}} should also consider the rack-aware read bytes. This should borrow code from [MAPREDUCE-6660] was (Author: liuml07): The v0 patch is the first effort of refactoring the {{FileSystem$Statistics}} class to use the newly added {{StorageStatistics}}. Specifically, it: * Defines a new class {{ThreadLocalFsStorageStatistics}} as the thread-local implementation to aggregate stats. 
This will be used in [HADOOP-13031] * Makes {{FileSystemStorageStatistics}} use the new {{ThreadLocalFsStorageStatistics}} class instead of delegating all operations to {{FileSystem$Statistics}} * Removes the deprecated {{FileSystem$Statistics}} class * Updates all the usages of {{FileSystem$Statistics}} to use {{FileSystemStorageStatistics}} TODO: # This patch is large. We should split it into smaller ones. I plan to first separate the first two items above into a new JIRA, along with new unit tests # The MapReduce part that uses {{FileSystemStorageStatistics}} should also consider the rack-aware read bytes. This should borrow code from [MAPREDUCE-6660] > Refactor FileSystem$Statistics to use StorageStatistics > --- > > Key: HADOOP-13032 > URL: https://issues.apache.org/jira/browse/HADOOP-13032 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13032.000.patch > > > [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. > This jira is to track the effort of moving the {{Statistics}} class out of > {{FileSystem}}, and make it use that new interface. > We should keep the thread local implementation. Benefits are: > # they could be used in both {{FileContext}} and {{FileSystem}} > # unified stats data structure > # shorter source code > Please note this will be a backwards-incompatible change.
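The thread-local aggregation idea behind the proposed {{ThreadLocalFsStorageStatistics}} can be sketched in isolation (hypothetical names; this is not the attached patch): each writer thread updates only its own counters, and a reader sums across all registered per-thread instances, accepting the same best-effort snapshot semantics that {{FileSystem$Statistics}} has today.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative thread-local statistics aggregator; not the actual patch code. */
public class ThreadLocalStatsSketch {
    /** Per-thread counters; each instance is mutated by exactly one thread. */
    private static final class Counters { volatile long bytesRead; }

    private final Set<Counters> allCounters = ConcurrentHashMap.newKeySet();
    private final ThreadLocal<Counters> local = ThreadLocal.withInitial(() -> {
        Counters c = new Counters();
        allCounters.add(c);  // register so a reader can aggregate later
        return c;
    });

    /** Contention-free fast path: touches only the calling thread's counter. */
    public void addBytesRead(long n) { local.get().bytesRead += n; }

    /** Best-effort aggregate across threads, as in FileSystem$Statistics today. */
    public long totalBytesRead() {
        long sum = 0;
        for (Counters c : allCounters) {
            sum += c.bytesRead;
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadLocalStatsSketch stats = new ThreadLocalStatsSketch();
        Thread t = new Thread(() -> stats.addBytesRead(10));
        t.start();
        t.join();
        stats.addBytesRead(32);  // main thread's own counter
        System.out.println(stats.totalBytesRead()); // 42
    }
}
```

The design choice is the usual one for hot-path metrics: writes stay unsynchronized and thread-confined, while readers pay the cost of iterating the registered instances and may observe a slightly stale sum.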
[jira] [Updated] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13032: --- Affects Version/s: (was: 3.0.0-alpha1) Target Version/s: 3.0.0-alpha2 Status: Patch Available (was: Open) The v0 patch is the first effort of refactoring the {{FileSystem$Statistics}} class to use the newly added {{StorageStatistics}}. Specifically, it: * Defines a new class {{ThreadLocalFsStorageStatistics}} as the thread-local implementation to aggregate stats. This will be used in [HADOOP-13031] * Makes {{FileSystemStorageStatistics}} use the new {{ThreadLocalFsStorageStatistics}} class instead of delegating all operations to {{FileSystem$Statistics}} * Removes the deprecated {{FileSystem$Statistics}} class * Updates all the usages of {{FileSystem$Statistics}} to use {{FileSystemStorageStatistics}} TODO: # This patch is large. We should split it into smaller ones. I plan to first separate the first two items above into a new JIRA, along with new unit tests # The MapReduce part that uses {{FileSystemStorageStatistics}} should also consider the rack-aware read bytes. This should borrow code from [MAPREDUCE-6660] > Refactor FileSystem$Statistics to use StorageStatistics > --- > > Key: HADOOP-13032 > URL: https://issues.apache.org/jira/browse/HADOOP-13032 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13032.000.patch > > > [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. > This jira is to track the effort of moving the {{Statistics}} class out of > {{FileSystem}}, and make it use that new interface. > We should keep the thread local implementation. Benefits are: > # they could be used in both {{FileContext}} and {{FileSystem}} > # unified stats data structure > # shorter source code > Please note this will be a backwards-incompatible change. 
[jira] [Commented] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392852#comment-15392852 ] Hadoop QA commented on HADOOP-13422: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 4 new + 10 unchanged - 0 fixed = 14 total (was 10) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 3 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 21s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12820041/HADOOP-13422.patch | | JIRA Issue | HADOOP-13422 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 208b135301e2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 703fdf8 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10079/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/10079/artifact/patchprocess/whitespace-tabs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10079/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10079/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process >
[jira] [Commented] (HADOOP-11601) Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files
[ https://issues.apache.org/jira/browse/HADOOP-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392839#comment-15392839 ] Masatake Iwasaki commented on HADOOP-11601: --- I ran {{TestHDFSContractCreate}}, {{TestS3NContractCreate}} and {{TestS3AContractCreate}} with the patch and succeeded. > Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty > files > --- > > Key: HADOOP-11601 > URL: https://issues.apache.org/jira/browse/HADOOP-11601 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, test >Affects Versions: 2.6.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-11601-001.patch, HADOOP-11601-002.patch, > HADOOP-11601-003.patch, HADOOP-11601-004.patch, HADOOP-11601-005.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > HADOOP-11584 has shown that the contract tests are not validating that > {{FileStatus.getBlocksize()}} must be >0 for any analytics jobs to partition > workload correctly. > Clarify in text and add test to do this. Test MUST be designed to work > against eventually consistent filesystems where {{getFileStatus()}} may not > be immediately visible, by retrying operation if FS declares it is an object > store. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392830#comment-15392830 ] Arun Suresh commented on HADOOP-13422: -- Thanks for the patch, [~sershe]. It looks straightforward, and I understand unit testing this is non-trivial. Can you also modify {{ZKSignerSecretProvider}} to include this fix as well, since ZKDTSM and the signerSecretProvider are generally used in conjunction? > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process > --- > > Key: HADOOP-13422 > URL: https://issues.apache.org/jira/browse/HADOOP-13422 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13422.patch > > > There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are > not available yet in a stable ZK version and there's no timeline for > availability, so for now it would help to make SM aware of other users of the > global config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13032: --- Attachment: HADOOP-13032.000.patch > Refactor FileSystem$Statistics to use StorageStatistics > --- > > Key: HADOOP-13032 > URL: https://issues.apache.org/jira/browse/HADOOP-13032 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13032.000.patch > > > [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. > This jira is to track the effort of moving the {{Statistics}} class out of > {{FileSystem}}, and make it use that new interface. > We should keep the thread local implementation. Benefits are: > # they could be used in both {{FileContext}} and {{FileSystem}} > # unified stats data structure > # shorter source code > Please note this will be a backwards-incompatible change. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392787#comment-15392787 ] Arun Suresh commented on HADOOP-13422: -- Sure, will take a look later today... > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process > --- > > Key: HADOOP-13422 > URL: https://issues.apache.org/jira/browse/HADOOP-13422 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13422.patch > > > There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are > not available yet in a stable ZK version and there's no timeline for > availability, so for now it would help to make SM aware of other users of the > global config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13422: --- Target Version/s: 2.8.0 > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process > --- > > Key: HADOOP-13422 > URL: https://issues.apache.org/jira/browse/HADOOP-13422 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13422.patch > > > There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are > not available yet in a stable ZK version and there's no timeline for > availability, so for now it would help to make SM aware of other users of the > global config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392771#comment-15392771 ] Chris Nauroth commented on HADOOP-13422: {code} private final javax.security.auth.login.Configuration baseConfig = javax.security.auth.login.Configuration .getConfiguration(); {code} I expect pre-commit will flag a nitpick about indentation and line length exceeding 80 characters here, so we'll need one more patch revision. I'm in favor of the approach though. This will help avoid some bugs until we can implement a long-term fix that makes use of ZOOKEEPER-2139. There is already similar working code in Hive. (See the {{LlapZookeeperRegistryImpl}} class.) I know Sergey was able to demonstrate that this fix works through manual testing. [~asuresh], are you interested in reviewing this? I'll give it some time before I consider committing. > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process > --- > > Key: HADOOP-13422 > URL: https://issues.apache.org/jira/browse/HADOOP-13422 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13422.patch > > > There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are > not available yet in a stable ZK version and there's no timeline for > availability, so for now it would help to make SM aware of other users of the > global config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
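The core idea behind the snippet Chris quotes is that the secret manager's JAAS {{Configuration}} should answer only for its own login entry and fall back to the previously installed global config for everything else, so other ZK users in the same process keep working. A minimal pure-JDK sketch of that delegation pattern (class and method names here are illustrative, not the actual patch):

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

/**
 * Illustrative sketch (not the HADOOP-13422 patch itself): capture the JAAS
 * Configuration that was installed before ours, serve our own entries, and
 * delegate any other entry name to the captured base config.
 */
public class DelegatingJaasConfig extends Configuration {
  private final Configuration base; // config captured before we override it
  private final Map<String, AppConfigurationEntry[]> own = new HashMap<>();

  public DelegatingJaasConfig(Configuration base) {
    this.base = base;
  }

  /** Register an entry this config owns, e.g. the ZK "Client" section. */
  public void put(String name, AppConfigurationEntry[] entries) {
    own.put(name, entries);
  }

  @Override
  public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
    AppConfigurationEntry[] entries = own.get(name);
    if (entries != null) {
      return entries;                 // our entry wins for names we own
    }
    // anything else falls through to whatever was configured before us
    return base != null ? base.getAppConfigurationEntry(name) : null;
  }
}
```

The wrapper would then be installed via {{Configuration.setConfiguration(...)}} after capturing the base with {{Configuration.getConfiguration()}}, as in the quoted field initializer.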
[jira] [Updated] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-13422: -- Attachment: HADOOP-13422.patch The initial patch. > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process > --- > > Key: HADOOP-13422 > URL: https://issues.apache.org/jira/browse/HADOOP-13422 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13422.patch > > > There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are > not available yet in a stable ZK version and there's no timeline for > availability, so for now it would help to make SM aware of other users of the > global config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-13422: -- Status: Patch Available (was: Open) > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process > --- > > Key: HADOOP-13422 > URL: https://issues.apache.org/jira/browse/HADOOP-13422 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13422.patch > > > There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are > not available yet in a stable ZK version and there's no timeline for > availability, so for now it would help to make SM aware of other users of the > global config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
[ https://issues.apache.org/jira/browse/HADOOP-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-13422: -- Description: There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are not available yet in a stable ZK version and there's no timeline for availability, so for now it would help to make SM aware of other users of the global config. (was: There's a race where old config ) > ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK > users in process > --- > > Key: HADOOP-13422 > URL: https://issues.apache.org/jira/browse/HADOOP-13422 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > There's a race in the globals. The non-global APIs from ZOOKEEPER-2139 are > not available yet in a stable ZK version and there's no timeline for > availability, so for now it would help to make SM aware of other users of the > global config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392755#comment-15392755 ] Chris Nauroth commented on HADOOP-13081: Hello [~sershe]. Some feedback on the current patch: # On 6/11, I commented asking if it's feasible to add a unit test to {{TestUserGroupInformation}}. What are your thoughts? # FindBugs flagged that it's a method named {{clone}}, but it doesn't implement {{Cloneable}} and doesn't follow the traditional recipe for a {{clone}} method (e.g. calling the superclass). Maybe rename the method to something like {{copySubject}}? # Could you add to the JavaDocs describing the intent of this method? You could describe how it supports adding different credentials to different UGI instances without forcing them all to re-authenticate through Kerberos. # Checkstyle flagged that there were a few lines longer than 80 characters. > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
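For reference, the JDK already supports the copy half of this idea: {{javax.security.auth.Subject}} has a constructor that copies the principal and credential *sets* of an existing Subject into a new one, so tokens added to the copy do not appear in the original. A hedged sketch (the {{copySubject}} name echoes the rename suggested in the review comments and is purely illustrative; wiring the copy into a UGI would go through something like {{UserGroupInformation.getUGIFromSubject}}):

```java
import javax.security.auth.Subject;

/**
 * Illustrative sketch, not the actual patch: build a writable copy of a
 * logged-in Subject. The Subject constructor copies the sets, so adding a
 * token to the copy's private credentials leaves the original untouched;
 * the contained credential objects themselves (e.g. the Kerberos TGT) are
 * shared rather than cloned, so no re-authentication is needed.
 */
public class SubjectCopy {
  public static Subject copySubject(Subject source) {
    return new Subject(false,                 // false => copy stays writable
        source.getPrincipals(),
        source.getPublicCredentials(),
        source.getPrivateCredentials());
  }
}
```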
[jira] [Created] (HADOOP-13422) ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process
Sergey Shelukhin created HADOOP-13422: - Summary: ZKDelegationTokenSecretManager JaasConfig does not work well with other ZK users in process Key: HADOOP-13422 URL: https://issues.apache.org/jira/browse/HADOOP-13422 Project: Hadoop Common Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin There's a race where old config -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-12020) Support AWS S3 reduced redundancy storage class
[ https://issues.apache.org/jira/browse/HADOOP-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154736#comment-15154736 ] Steven K. Wong edited comment on HADOOP-12020 at 7/25/16 9:54 PM: -- While adding support for storage class, the AWS SDK version should also be bumped up in order to support the latest storage class, Standard Infrequent Access, which requires SDK 1.10.19 or later. was (Author: slider): While adding support for storage class, the AWS SDK version should also be bumped up in order to support the latest storage class, Standard Infrequent Access. > Support AWS S3 reduced redundancy storage class > --- > > Key: HADOOP-12020 > URL: https://issues.apache.org/jira/browse/HADOOP-12020 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 2.7.0 > Environment: Hadoop on AWS >Reporter: Yann Landrin-Schweitzer > > Amazon S3 uses, by default, the NORMAL_STORAGE class for s3 objects. > This offers, according to Amazon's material, 99.999999999% reliability. > For many applications, however, the 99.99% reliability offered by the > REDUCED_REDUNDANCY storage class is amply sufficient, and comes with a > significant cost saving. > HDFS, when using the legacy s3n protocol, or the new s3a scheme, should > support overriding the default storage class of created s3 objects so that > users can take advantage of this cost benefit. > This would require minor changes of the s3n and s3a drivers, using > a configuration property fs.s3n.storage.class to override the default storage > when desirable. > This override could be implemented in Jets3tNativeFileSystemStore with: > S3Object object = new S3Object(key); > ... > if(storageClass!=null) object.setStorageClass(storageClass); > It would take a more complex form in s3a, e.g.
setting: > InitiateMultipartUploadRequest initiateMPURequest = > new InitiateMultipartUploadRequest(bucket, key, om); > if(storageClass !=null ) { > initiateMPURequest = > initiateMPURequest.withStorageClass(storageClass); > } > and similar statements in various places. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
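As the description sketches, the override would be configuration-driven. A hypothetical core-site.xml snippet using the property name proposed in this issue ({{fs.s3n.storage.class}} is only a proposal here, not a shipped Hadoop key, and the accepted values would depend on the driver):

```xml
<!-- Hypothetical: fs.s3n.storage.class is the property proposed in this
     issue, not an existing Hadoop configuration key. -->
<property>
  <name>fs.s3n.storage.class</name>
  <value>REDUCED_REDUNDANCY</value>
</property>
```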
[jira] [Created] (HADOOP-13421) Switch to v2 of the S3 List Objects API in S3A
Steven K. Wong created HADOOP-13421: --- Summary: Switch to v2 of the S3 List Objects API in S3A Key: HADOOP-13421 URL: https://issues.apache.org/jira/browse/HADOOP-13421 Project: Hadoop Common Issue Type: Improvement Components: fs/s3 Affects Versions: 2.8.0 Reporter: Steven K. Wong Priority: Minor Unlike [version 1|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html] of the S3 List Objects API, [version 2|http://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html] by default does not fetch object owner information, which S3A doesn't need anyway. By switching to v2, there will be less data to transfer/process. Methods in S3AFileSystem that use this API include: * getFileStatus(Path) * innerDelete(Path, boolean) * innerListStatus(Path) * innerRename(Path, Path) Requires AWS SDK 1.10.75 or later. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392570#comment-15392570 ] Sergey Shelukhin commented on HADOOP-13081: --- ping? We are doing this via reflection in Hive now, in certain scenarios, and it appears to work as intended. > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. > getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories
[ https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392535#comment-15392535 ] Abdullah Yousufi commented on HADOOP-13208: --- Hey [~ste...@apache.org], is there a reason this change only applies to listFiles() and not listStatus()? > S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the > pseudo-tree of directories > > > Key: HADOOP-13208 > URL: https://issues.apache.org/jira/browse/HADOOP-13208 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13208-branch-2-001.patch, > HADOOP-13208-branch-2-007.patch, HADOOP-13208-branch-2-008.patch, > HADOOP-13208-branch-2-009.patch, HADOOP-13208-branch-2-010.patch, > HADOOP-13208-branch-2-011.patch, HADOOP-13208-branch-2-012.patch, > HADOOP-13208-branch-2-017.patch, HADOOP-13208-branch-2-018.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > A major cost in split calculation against object stores turns out be listing > the directory tree itself. That's because against S3, it takes S3A two HEADs > and two lists to list the content of any directory path (2 HEADs + 1 list for > getFileStatus(); the next list to query the contents). > Listing a directory could be improved slightly by combining the final two > listings. However, a listing of a directory tree will still be > O(directories). In contrast, a recursive {{listFiles()}} operation should be > implementable by a bulk listing of all descendant paths; one List operation > per thousand descendants. > As the result of this call is an iterator, the ongoing listing can be > implemented within the iterator itself -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13420) Hadoop standalone instance exits during starting an MR job with ExpiredTokenRemover error in log (after some time & after few jobs executed successfully)
Kiran Miryala created HADOOP-13420: -- Summary: Hadoop standalone instance exits during starting an MR job with ExpiredTokenRemover error in log (after some time & after few jobs executed successfully) Key: HADOOP-13420 URL: https://issues.apache.org/jira/browse/HADOOP-13420 Project: Hadoop Common Issue Type: Bug Environment: Ubuntu Desktop 16 LTS, jdk1.8.92 & Hadoop 2.7.2 Reporter: Kiran Miryala Hadoop/HDFS & Yarn processes exit (all jps daemons) and the user is thrown out of the session while it is running an MR job, after some interval (i.e. after a few jobs completed successfully). Error in log file: 2016-07-23 17:56:16,258 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted Log file: /usr/local/hadoop/logs/yarn-hduser-resourcemanager-KMUbLptp.log 2016-07-23 17:56:14,044 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1469316920580_0007_01_02 Container Transitioned from ACQUIRED to RUNNING 2016-07-23 17:56:14,663 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate of application :application_1469316920580_0007 2016-07-23 17:56:16,201 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: RECEIVED SIGNAL 15: SIGTERM 2016-07-23 17:56:16,258 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 2016-07-23 17:56:16,259 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8088 2016-07-23 17:56:16,284 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: RECEIVED SIGNAL 15: SIGTERM 2016-07-23 17:56:16,360 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032 2016-07-23 17:56:16,361 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8032 2016-07-23 17:56:16,361 INFO
org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2016-07-23 17:56:16,362 INFO org.apache.hadoop.ipc.Server: Stopping server on 8033 This error happens only after the below terminal output entry: 16/07/23 17:56:13 INFO mapreduce.Job: map 0% reduce 0% Environment: Ubuntu Desktop 16 LTS, jdk1.8.92 & Hadoop 2.7.2 Steps to reproduce: (1) log in (on terminal) as the dedicated user hduser (a sudo user) using command: su hduser (2) start hadoop daemons using commands: start-dfs.sh & start-yarn.sh (3) I can see all processes (4) A few MR jobs completed successfully. Can try submitting the same job after about 10-15 min. (5) The user is thrown out & lands in the regular desktop user session. I think it could be some timeout; it works normally again if I restart my machine & start over. I get the same error if I follow the same steps in the same terminal session. I would appreciate it if somebody who has encountered this issue could comment. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper
[ https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392375#comment-15392375 ] Lei (Eddy) Xu commented on HADOOP-12928: [~ozawa] Thanks for helping with this. Let me know if you need anything. > Update netty to 3.10.5.Final to sync with zookeeper > --- > > Key: HADOOP-12928 > URL: https://issues.apache.org/jira/browse/HADOOP-12928 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.7.2 >Reporter: Hendy Irawan >Assignee: Lei (Eddy) Xu > Attachments: HADOOP-12928-branch-2.00.patch, HADOOP-12928.01.patch, > HADOOP-12928.02.patch, HDFS-12928.00.patch > > > Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper > 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927 > Pull request: https://github.com/apache/hadoop/pull/85 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
[ https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392309#comment-15392309 ] Steve Loughran commented on HADOOP-13164: - You need a story for error handling here. Bear in mind that removeKey(nonexistentpath) will fail at the AWS SDK layer; your code will need to handle that. The code you cut did that by swallowing the exceptions. I'd expect that to continue. I've been thinking we can do more than just delete with an async thread; you can do parent dir creation on a delete operation, validation in a create() call that there is no parent directory that is actually a file (this could be launched in the create(), the result awaited on/checked in the close()/first PUT). That argues for having an executor that takes a queue of actions pushed down, of which dir deletion is only one. We'd need queue length as another metric; actually a count of # of fake directory delete calls made and actual deletes executed. That'd be something that the tests can use. I'd like to see a way to test this. Especially the shutdown process. I'm also wondering whether we can create new sequences of operations which could lose data. Something like {code} touch("/path/1/2/3/4/5/6") delete("/path/1/2/3/4/5/6") echo("/path", "important text") {code} If that recursive delete hasn't completed before that echo operation happens, data gets lost. Thinking about this some more, I really worry about the async behaviour.
Maybe we should try to optimise the sync one as a single removeKeys on all the parents. Again, we could do a scale test to play with the options here, to measure what makes the lowest # of calls, and the time it takes. > Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories > > > Key: HADOOP-13164 > URL: https://issues.apache.org/jira/browse/HADOOP-13164 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13164.branch-2.WIP.patch > > > https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224 > deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename > and on outputstream close() to purge any fake directories. Depending on the > nesting in the folder structure, it might take a lot longer time as it > invokes getFileStatus multiple times. Instead, it should be able to break > out of the loop once a non-empty directory is encountered. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
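To make the batching idea concrete: the set of candidate fake-directory keys for an object is just its chain of ancestors, which can be computed in one pass and handed to a single bulk delete (removeKeys) instead of probing level by level with getFileStatus. A hypothetical helper, not S3A's actual code:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of the batched alternative: collect every ancestor
 * "fake directory" key of an object key in one pass, so they can all be
 * passed to one bulk DeleteObjects/removeKeys call. S3A marks an empty
 * directory with a zero-byte object whose key ends in "/".
 */
public class FakeDirKeys {
  public static List<String> ancestorDirKeys(String key) {
    List<String> keys = new ArrayList<>();
    int slash = key.lastIndexOf('/');
    while (slash > 0) {
      key = key.substring(0, slash);   // step up to the parent "directory"
      keys.add(key + "/");             // candidate fake-directory marker key
      slash = key.lastIndexOf('/');
    }
    return keys;
  }
}
```

Deleting nonexistent keys is harmless in a bulk delete, which sidesteps the per-level existence checks; the error-handling story above still applies to the request as a whole.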
[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism
[ https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392307#comment-15392307 ] Hadoop QA commented on HADOOP-13252: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 23s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 27s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 24s{color} | {color:orange} root: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch 2 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 28s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | |
[jira] [Commented] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash
[ https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392261#comment-15392261 ] Steve Loughran commented on HADOOP-10511: - marking as a duplicate of HADOOP-3733; the fix that went in there is common code, as are the tests and the effort done to try and reduce the printing of secrets. > s3n:// incorrectly handles URLs with secret keys that contain a slash > - > > Key: HADOOP-10511 > URL: https://issues.apache.org/jira/browse/HADOOP-10511 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.6.0 >Reporter: Daniel Darabos > Labels: BB2015-05-TBR > Attachments: HADOOP-10511-002.patch, HADOOP-10511.patch > > > This is similar to HADOOP-3733, but happens on s3n:// instead of s3://. > Essentially if I have a path like "s3n://key:pass%2fw...@example.com/test", > it will under certain circumstances be replaced with "s3n://key:pass/test" > which then causes "Invalid hostname in URI" exceptions. > I have a unit test and a fix for this. I'll make a pull request in a moment. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10511) s3n:// incorrectly handles URLs with secret keys that contain a slash
[ https://issues.apache.org/jira/browse/HADOOP-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-10511: Resolution: Duplicate Status: Resolved (was: Patch Available) > s3n:// incorrectly handles URLs with secret keys that contain a slash > - > > Key: HADOOP-10511 > URL: https://issues.apache.org/jira/browse/HADOOP-10511 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.6.0 >Reporter: Daniel Darabos > Labels: BB2015-05-TBR > Attachments: HADOOP-10511-002.patch, HADOOP-10511.patch > > > This is similar to HADOOP-3733, but happens on s3n:// instead of s3://. > Essentially if I have a path like "s3n://key:pass%2fw...@example.com/test", > it will under certain circumstances be replaced with "s3n://key:pass/test" > which then causes "Invalid hostname in URI" exceptions. > I have a unit test and a fix for this. I'll make a pull request in a moment. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13221) s3a create() doesn't check for a parent path being a file
[ https://issues.apache.org/jira/browse/HADOOP-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392252#comment-15392252 ] Steve Loughran commented on HADOOP-13221: - This has the potential to be very expensive. It could perhaps be done asynchronously, with the create() call starting a check which surfaces as a failure in a subsequent write() operation. Even there, given that s3a doesn't write files until close(), there's a race condition: a create() check may pass, but if a file is later created further up the directory tree, the file would still be created by the final close(). > s3a create() doesn't check for a parent path being a file > - > > Key: HADOOP-13221 > URL: https://issues.apache.org/jira/browse/HADOOP-13221 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Steve Loughran >Assignee: Rajesh Balamohan > > Seen in a code review. Notable that if true, this got by all the FS contract > tests —showing we missed a couple. > {{S3AFilesystem.create()}} does not examine its parent paths to verify that > there does not exist one which is a file. It looks for the destination path > if overwrite=false (see HADOOP-13188 for issues there), but it doesn't check > the parent for not being a file, or the parent of that path. > It must go up the tree, verifying that either a path does not exist, or that > the path is a directory. The scan can stop at the first entry which is a > directory, thus the operation is O(empty-directories) and not O(directories). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
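The walk-up check the issue describes — fail on the first ancestor that is a file, stop at the first ancestor that is a directory — can be sketched as follows. This is a hypothetical illustration: the map stands in for getFileStatus() lookups against the store, and none of the names are the real S3AFileSystem API:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the proposed parent check. Stopping at the first existing
 *  directory makes the scan O(empty-directories), not O(directories):
 *  everything above a real directory must already be valid. */
public class ParentCheck {
  enum Type { FILE, DIR }

  static void checkParents(String path, Map<String, Type> store) {
    int i = path.lastIndexOf('/');
    while (i > 0) {
      path = path.substring(0, i);
      Type t = store.get(path);           // stand-in for a getFileStatus call
      if (t == Type.FILE) {
        throw new IllegalStateException("Parent is a file: " + path);
      }
      if (t == Type.DIR) {
        return;                           // first real directory: stop scanning
      }
      i = path.lastIndexOf('/');          // path absent: keep walking up
    }
  }

  public static void main(String[] args) {
    Map<String, Type> store = new HashMap<>();
    store.put("/a", Type.DIR);
    store.put("/a/b", Type.FILE);
    checkParents("/a/x/y/file", store);   // ok: scan stops at /a
    try {
      checkParents("/a/b/c/file", store); // rejected: /a/b is a file
    } catch (IllegalStateException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
```

Even with this check in create(), the race noted in the comment remains: the store can change between the check and the PUT issued at close().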
[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API
[ https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392218#comment-15392218 ] Hadoop QA commented on HADOOP-12667: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 58s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819973/HADOOP-12667-branch-2-002.patch | | JIRA Issue | HADOOP-12667 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 12afc40c18a3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / b757eff | | Default Java | 1.7.0_101 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 | | findbugs | v3.0.0 | | JDK v1.7.0_101 Test Results |
[jira] [Updated] (HADOOP-12667) s3a: Support createNonRecursive API
[ https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12667: Status: Patch Available (was: Open) > s3a: Support createNonRecursive API > --- > > Key: HADOOP-12667 > URL: https://issues.apache.org/jira/browse/HADOOP-12667 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-12667-branch-2-002.patch, HADOOP-12667.001.patch > > > HBase and other clients rely on the createNonRecursive API, which was > recently un-deprecated. S3A currently does not support it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12667) s3a: Support createNonRecursive API
[ https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12667: Attachment: HADOOP-12667-branch-2-002.patch Patch 002 # made changes suggested earlier # test file is {{TestS3AMiscOperations}}; subclass of {{AbstractS3ATestBase}} # split tests cases up # added test: parent dir is file # tests use junit expected=class This method is slower than {{create()}} as there's an extra check. But if HADOOP-13221 fixes create(), then this could bypass the checks and be faster > s3a: Support createNonRecursive API > --- > > Key: HADOOP-12667 > URL: https://issues.apache.org/jira/browse/HADOOP-12667 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-12667-branch-2-002.patch, HADOOP-12667.001.patch > > > HBase and other clients rely on the createNonRecursive API, which was > recently un-deprecated. S3A currently does not support it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
[ https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HADOOP-13164: -- Attachment: HADOOP-13164.branch-2.WIP.patch Attaching the WIP patch for initial review. Changes are mainly related to using an async approach to delete the objects in deleteUnnecessaryFakeDirectories. > Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories > > > Key: HADOOP-13164 > URL: https://issues.apache.org/jira/browse/HADOOP-13164 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13164.branch-2.WIP.patch > > > https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224 > deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename > and on outputstream close() to purge any fake directories. Depending on the > nesting in the folder structure, it might take a lot longer time as it > invokes getFileStatus multiple times. Instead, it should be able to break > out of the loop once a non-empty directory is encountered. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
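One shape such an async approach could take — queueing fake-directory deletes onto a bounded single-thread executor, with a counter the tests could assert on — is sketched below. The class, its names, and the metric are guesses for illustration only, not the contents of the WIP patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/** Hypothetical async fake-directory deleter: deletes are pushed onto a
 *  bounded queue and drained by one worker thread, so rename()/close()
 *  return without waiting on per-level S3 calls. Shutdown must drain the
 *  queue, or queued deletes (and the data-loss race discussed earlier in
 *  the issue) become visible. */
public class AsyncFakeDirDeleter {
  private final ExecutorService pool = new ThreadPoolExecutor(
      1, 1, 0L, TimeUnit.MILLISECONDS,
      new LinkedBlockingQueue<>(64));   // bounded queue; its depth is a candidate metric
  int deletesQueued;                    // count of delete calls made, for tests/metrics

  synchronized Future<?> deleteAsync(String key) {
    deletesQueued++;
    // Stand-in for the actual S3 delete of the fake-directory object.
    return pool.submit(() -> System.out.println("would delete " + key));
  }

  void shutdown() throws InterruptedException {
    pool.shutdown();                    // stop accepting work, drain queued deletes
    pool.awaitTermination(30, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws Exception {
    AsyncFakeDirDeleter d = new AsyncFakeDirDeleter();
    d.deleteAsync("a/b/");
    d.deleteAsync("a/");
    d.shutdown();
    System.out.println(d.deletesQueued);  // 2
  }
}
```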
[jira] [Commented] (HADOOP-13329) Dockerfile doesn't work on Linux/ppc
[ https://issues.apache.org/jira/browse/HADOOP-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392124#comment-15392124 ] Amir Sanjar commented on HADOOP-13329: -- OK Allen, thanks > Dockerfile doesn't work on Linux/ppc > > > Key: HADOOP-13329 > URL: https://issues.apache.org/jira/browse/HADOOP-13329 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Amir Sanjar > Attachments: HADOOP-13329.2.patch, HADOOP-13329.3.patch, > HADOOP-13329.patch > > > We need to rework how the Dockerfile is built to support both Linux/x86 and > Linux/PowerPC. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism
[ https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13252: Attachment: HADOOP-13252-branch-2-004.patch Patch 004; rebased to branch 2; the STS test changes had broken the merge > Tune S3A provider plugin mechanism > -- > > Key: HADOOP-13252 > URL: https://issues.apache.org/jira/browse/HADOOP-13252 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13252-branch-2-001.patch, > HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch > > > We've now got some fairly complex auth mechanisms going on: -hadoop config, > KMS, env vars, "none". IF something isn't working, it's going to be a lot > harder to debug. > Review and tune the S3A provider point > * add logging of what's going on in s3 auth to help debug problems > * make a whole chain of logins expressible > * allow the anonymous credentials to be included in the list > * review and updated documents. > I propose *carefully* adding some debug messages to identify which auth > provider is doing the auth, so we can see if the env vars were kicking in, > sysprops, etc. > What we mustn't do is leak any secrets: this should be identifying whether > properties and env vars are set, not what their values are. I don't believe > that this will generate a security risk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism
[ https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13252: Status: Patch Available (was: Open) > Tune S3A provider plugin mechanism > -- > > Key: HADOOP-13252 > URL: https://issues.apache.org/jira/browse/HADOOP-13252 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13252-branch-2-001.patch, > HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch > > > We've now got some fairly complex auth mechanisms going on: -hadoop config, > KMS, env vars, "none". IF something isn't working, it's going to be a lot > harder to debug. > Review and tune the S3A provider point > * add logging of what's going on in s3 auth to help debug problems > * make a whole chain of logins expressible > * allow the anonymous credentials to be included in the list > * review and updated documents. > I propose *carefully* adding some debug messages to identify which auth > provider is doing the auth, so we can see if the env vars were kicking in, > sysprops, etc. > What we mustn't do is leak any secrets: this should be identifying whether > properties and env vars are set, not what their values are. I don't believe > that this will generate a security risk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism
[ https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13252: Status: Open (was: Patch Available) > Tune S3A provider plugin mechanism > -- > > Key: HADOOP-13252 > URL: https://issues.apache.org/jira/browse/HADOOP-13252 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13252-branch-2-001.patch, > HADOOP-13252-branch-2-003.patch > > > We've now got some fairly complex auth mechanisms going on: -hadoop config, > KMS, env vars, "none". IF something isn't working, it's going to be a lot > harder to debug. > Review and tune the S3A provider point > * add logging of what's going on in s3 auth to help debug problems > * make a whole chain of logins expressible > * allow the anonymous credentials to be included in the list > * review and updated documents. > I propose *carefully* adding some debug messages to identify which auth > provider is doing the auth, so we can see if the env vars were kicking in, > sysprops, etc. > What we mustn't do is leak any secrets: this should be identifying whether > properties and env vars are set, not what their values are. I don't believe > that this will generate a security risk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392095#comment-15392095 ] Hudson commented on HADOOP-13188: - SUCCESS: Integrated in Hadoop-trunk-Commit #10144 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10144/]) HADOOP-13188 S3A file-create should throw error rather than overwrite (stevel: rev 86ae218893d018638e937c2528c8e84336254da7) * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractCreate.java > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0
[ https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392063#comment-15392063 ] Masatake Iwasaki commented on HADOOP-13017: --- The patch looks good, though the part fixing {{S3InputStream}} needs an update. {{HarFsInputStream.read}} and {{WebHdfsInputStream.read}} seem to be able to do a fast exit too. Should the issue title say InputStream.read rather than IOStream.read? > Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0 > - > > Key: HADOOP-13017 > URL: https://issues.apache.org/jira/browse/HADOOP-13017 > Project: Hadoop Common > Issue Type: Improvement > Components: io >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HDFS-13017-001.patch > > > HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there was > no data left in the stream; Java IO says > bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; > otherwise, there is an attempt to read at least one byte. > Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, where > necessary and considered safe, add a fast exit if the length is 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
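The fast-exit behaviour under discussion amounts to a one-line guard at the top of read(). A minimal standalone illustration — the wrapper class is hypothetical; the actual patch touches each affected Hadoop stream implementation individually:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Sketch of the java.io contract being enforced: read(buf, off, 0)
 *  returns 0 even at end of stream, rather than -1. */
public class FastExitStream extends FilterInputStream {
  FastExitStream(InputStream in) { super(in); }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    if (len == 0) {
      return 0;            // zero-length read: return 0, never probe the stream
    }
    return in.read(buf, off, len);
  }

  public static void main(String[] args) throws IOException {
    // An exhausted stream: without the guard, read(..., 0) could report -1.
    FastExitStream s = new FastExitStream(new ByteArrayInputStream(new byte[0]));
    System.out.println(s.read(new byte[4], 0, 0));   // 0
    System.out.println(s.read(new byte[4], 0, 4));   // -1: genuinely at EOF
    s.close();
  }
}
```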
[jira] [Commented] (HADOOP-13417) Fix javadoc warnings by JDK8 in hadoop-auth package
[ https://issues.apache.org/jira/browse/HADOOP-13417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392080#comment-15392080 ] Wei-Chiu Chuang commented on HADOOP-13417: -- Thanks for filing these jiras. I am converting them to subtasks of HADOOP-13369. > Fix javadoc warnings by JDK8 in hadoop-auth package > --- > > Key: HADOOP-13417 > URL: https://issues.apache.org/jira/browse/HADOOP-13417 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki > Fix For: 3.0.0-alpha2 > > > Fix compile warnings generated after migrating JDK8. > This is a sub-task of HADOOP-13369. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13418) Fix javadoc warnings by JDK8 in hadoop-nfs package
[ https://issues.apache.org/jira/browse/HADOOP-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13418: - Issue Type: Sub-task (was: Improvement) Parent: HADOOP-13369 > Fix javadoc warnings by JDK8 in hadoop-nfs package > -- > > Key: HADOOP-13418 > URL: https://issues.apache.org/jira/browse/HADOOP-13418 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki > > Fix compile warnings generated after migrating to JDK8. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package
[ https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13419: - Issue Type: Sub-task (was: Improvement) Parent: HADOOP-13369 > Fix javadoc warnings by JDK8 in hadoop-common package > - > > Key: HADOOP-13419 > URL: https://issues.apache.org/jira/browse/HADOOP-13419 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki > > Fix compile warnings generated after migrating to JDK8. > This is a subtask of HADOOP-13369. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13417) Fix javadoc warnings by JDK8 in hadoop-auth package
[ https://issues.apache.org/jira/browse/HADOOP-13417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13417: - Issue Type: Sub-task (was: Improvement) Parent: HADOOP-13369 > Fix javadoc warnings by JDK8 in hadoop-auth package > --- > > Key: HADOOP-13417 > URL: https://issues.apache.org/jira/browse/HADOOP-13417 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki > Fix For: 3.0.0-alpha2 > > > Fix compile warnings generated after migrating JDK8. > This is a sub-task of HADOOP-13369. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0
[ https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392078#comment-15392078 ] Hadoop QA commented on HADOOP-13017: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HADOOP-13017 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12798112/HDFS-13017-001.patch | | JIRA Issue | HADOOP-13017 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10076/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0 > - > > Key: HADOOP-13017 > URL: https://issues.apache.org/jira/browse/HADOOP-13017 > Project: Hadoop Common > Issue Type: Improvement > Components: io >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HDFS-13017-001.patch > > > HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there was > no data left in the stream; Java IO says > bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; > otherwise, there is an attempt to read at least one byte. > Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, where > necessary and considered safe, add a fast exit if the length is 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13188: Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) forgot this had been +1'd. retested and committed > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13406) S3AFileSystem: Consider reusing filestatus in delete() and mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-13406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391978#comment-15391978 ] Hudson commented on HADOOP-13406: - ABORTED: Integrated in Hadoop-trunk-Commit #10143 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10143/]) HADOOP-13406 S3AFileSystem: Consider reusing filestatus in delete() and (stevel: rev be9e46b42dd1ed0b2295bd36a7d81d5ee6dffc25) * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java > S3AFileSystem: Consider reusing filestatus in delete() and mkdirs() > --- > > Key: HADOOP-13406 > URL: https://issues.apache.org/jira/browse/HADOOP-13406 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13406-branch-2-001.patch, > HADOOP-13406-branch-2-002.patch, HADOOP-13406-branch-2-003.patch > > > filestatus can be reused in rename() and in mkdirs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13041) Adding tests for coder utilities
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391988#comment-15391988 ] Hadoop QA commented on HADOOP-13041: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 53s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819920/HADOOP-13041.06.patch | | JIRA Issue | HADOOP-13041 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 47afff4939ee 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7052ca8 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10075/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10075/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10075/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Adding tests for coder utilities > > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch,
[jira] [Updated] (HADOOP-574) want FileSystem implementation for Amazon S3
[ https://issues.apache.org/jira/browse/HADOOP-574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-574: -- Assignee: Tom White > want FileSystem implementation for Amazon S3 > > > Key: HADOOP-574 > URL: https://issues.apache.org/jira/browse/HADOOP-574 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 0.9.0 >Reporter: Doug Cutting >Assignee: Tom White > Fix For: 0.10.0 > > Attachments: HADOOP-574-v2.patch, HADOOP-574-v3.patch, > HADOOP-574.patch, dependencies.zip > > > An S3-based Hadoop FileSystem would make a great addition to Hadoop. > It would facillitate use of Hadoop on Amazon's EC2 computing grid, as > discussed here: > http://www.mail-archive.com/hadoop-user@lucene.apache.org/msg00318.html > This is related to HADOOP-571, which would make Hadoop's FileSystem > considerably easier to extend. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13074) TestS3AContractRootDir#testListEmptyRootDirectory fails with java.io.IOException: Root directory operation rejected
[ https://issues.apache.org/jira/browse/HADOOP-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391964#comment-15391964 ] Steve Loughran commented on HADOOP-13074: - Of course. One way to have a test here which guarantees an empty root dir is: have a dedicated S3 bucket purely for these tests. This is something which could perhaps be shared, because with no data to publish, its costs should be nearly $0 > TestS3AContractRootDir#testListEmptyRootDirectory fails with > java.io.IOException: Root directory operation rejected > --- > > Key: HADOOP-13074 > URL: https://issues.apache.org/jira/browse/HADOOP-13074 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.0.0-alpha1 >Reporter: Swagat Behera >Assignee: Swagat Behera > > TestS3AContractRootDir#testListEmptyRootDirectory fails with > java.io.IOException: Root directory operation rejected . > Following > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md, > I had inputted a non empty S3 bucket with appropriate keys. > This test gets the status under the root and tries to delete all the folders > under that iteratively along with the root folder. > When it tries to delete the root folder, > ContractTestUtils::rejectRootOperation() throws "Root directory operation > rejected" exception. This is bound to happen since the allowRootOperation > flag is not set. > [~cnauroth] > Please let me know your comments on this. It seems that this test will always > fail until we start using > ContractTestUtils::assertDeleted(allowRootOperations=True) . > + [~fabbri], [~mackrorysd] for FYI
[jira] [Commented] (HADOOP-12977) s3a ignores delete("/", true)
[ https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391941#comment-15391941 ] Steve Loughran commented on HADOOP-12977: - I know I filed this, but I can see that the s3a root dir tests do try to delete root. > s3a ignores delete("/", true) > - > > Key: HADOOP-12977 > URL: https://issues.apache.org/jira/browse/HADOOP-12977 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Priority: Minor > > if you try to delete the root directory on s3a, you get politely but firmly > told you can't > {code} > 2016-03-30 12:01:44,924 INFO s3a.S3AFileSystem > (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory > {code} > The semantics of {{rm -rf "/"}} are defined, they are "delete everything > underneath, while preserving the root dir itself". > # s3a needs to support this. > # this skipped through the FS contract tests in > {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works > or not should be made configurable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12977) s3a ignores delete("/", true)
[ https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391959#comment-15391959 ] Steve Loughran commented on HADOOP-12977: - ... The root directory test {{testRmRootRecursive()}} allows for the operation to return false, meaning "the root dir wasn't deleted". Looking at the FS spec, it says bq. The POSIX model assumes that if the user has the correct permissions to delete everything, they are free to do so (resulting in an empty filesystem). {code} if isDir(FS, p) and isRoot(p) and recursive : FS' = ({["/"]}, {}, {}, {}) result = True {code} bq. In contrast, HDFS never permits the deletion of the root of a filesystem; the filesystem can be taken offline and reformatted if an empty filesystem is desired. {code} if isDir(FS, p) and isRoot(p) and recursive : FS' = FS result = False {code} So: the s3a logic follows that of HDFS: you can't do {{rm -rf /}}. Yet unlike HDFS, you can't take the fs offline and do a delete. # we need to decide what to do # the contract tests should require each filesystem to declare which model it follows, HDFS or POSIX > s3a ignores delete("/", true) > - > > Key: HADOOP-12977 > URL: https://issues.apache.org/jira/browse/HADOOP-12977 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Priority: Minor > > if you try to delete the root directory on s3a, you get politely but firmly > told you can't > {code} > 2016-03-30 12:01:44,924 INFO s3a.S3AFileSystem > (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory > {code} > The semantics of {{rm -rf "/"}} are defined, they are "delete everything > underneath, while preserving the root dir itself". > # s3a needs to support this. > # this skipped through the FS contract tests in > {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works > or not should be made configurable. 
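The two models quoted from the FS spec above can be modelled with a toy in-memory filesystem. This is a model of the spec only, not Hadoop code; the {{posixRootDelete}} switch is a stand-in for the configurable contract option proposed in point 2, and none of these names are real Hadoop APIs.

```java
// Toy in-memory filesystem modelling the two delete("/", true) semantics
// quoted from the FS spec above. A model of the spec, not Hadoop code; the
// posixRootDelete flag stands in for the proposed per-filesystem option.
import java.util.HashSet;
import java.util.Set;

class RootDeleteSemanticsDemo {
    final Set<String> paths = new HashSet<>();
    final boolean posixRootDelete;

    RootDeleteSemanticsDemo(boolean posixRootDelete) {
        this.posixRootDelete = posixRootDelete;
        paths.add("/");
        paths.add("/data");  // some user data under the root
    }

    boolean deleteRootRecursive() {
        if (posixRootDelete) {
            // POSIX model: FS' = ({["/"]}, {}, {}, {}); result = True
            paths.clear();
            paths.add("/");
            return true;
        }
        // HDFS model (and current s3a behaviour): FS' = FS; result = False
        return false;
    }

    public static void main(String[] args) {
        RootDeleteSemanticsDemo posix = new RootDeleteSemanticsDemo(true);
        System.out.println("posix: " + posix.deleteRootRecursive() + " " + posix.paths);
        RootDeleteSemanticsDemo hdfs = new RootDeleteSemanticsDemo(false);
        System.out.println("hdfs:  " + hdfs.deleteRootRecursive() + " " + hdfs.paths);
    }
}
```

Either outcome satisfies the contract; what the comment argues is that each filesystem must pick one and declare it so the contract tests can check the right branch.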
[jira] [Commented] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values
[ https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391919#comment-15391919 ] Steve Loughran commented on HADOOP-13405: - I'd do a subclass of {{hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java}}, try to create a filesystem instance with an ACL option set, and assert that a file can be created, read, then deleted. Have a look at {{TestS3AEncryption}} to see how it does an encryption check; something like TestS3AAcls would be ideal. (Don't feel worried about having to supply code; if you really don't want to do it, I won't mind... it's just that your patch showed up that we aren't testing it at all, which is dangerous) > doc for “fs.s3a.acl.default” indicates incorrect values > --- > > Key: HADOOP-13405 > URL: https://issues.apache.org/jira/browse/HADOOP-13405 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Shen Yinjie >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13405.patch > > > The description for "fs.s3a.acl.default" indicates its values are > "private,public-read"; > when the value is set to public-read and you execute 'hdfs dfs -ls s3a://hdfs/': > {{-ls: No enum constant > com.amazonaws.services.s3.model.CannedAccessControlList.public-read}} > while in amazon-sdk, > {code} > public enum CannedAccessControlList { > Private("private"), > PublicRead("public-read"), > PublicReadWrite("public-read-write"), > AuthenticatedRead("authenticated-read"), > LogDeliveryWrite("log-delivery-write"), > BucketOwnerRead("bucket-owner-read"), > BucketOwnerFullControl("bucket-owner-full-control"); > {code} > so the values should be the enum constant names, e.g. "Private", "PublicRead"... > Attached a simple patch.
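The enum-name mismatch behind HADOOP-13405 is easy to reproduce without the AWS SDK. The stand-in enum below copies the shape of {{CannedAccessControlList}} quoted in the issue so the example is self-contained: {{Enum.valueOf}} matches constant names ({{PublicRead}}), not the wire values ({{public-read}}), which is exactly the error the reporter saw.

```java
// Minimal stand-in for com.amazonaws.services.s3.model.CannedAccessControlList,
// copied in shape from the issue so this example runs without the AWS SDK.
enum CannedAcl {
    Private("private"),
    PublicRead("public-read"),
    PublicReadWrite("public-read-write"),
    AuthenticatedRead("authenticated-read");

    private final String headerValue;
    CannedAcl(String headerValue) { this.headerValue = headerValue; }
    String toHeader() { return headerValue; }
}

class AclLookupDemo {
    public static void main(String[] args) {
        // Enum.valueOf matches the constant name, not the wire value, so the
        // documented setting must be "PublicRead" rather than "public-read".
        CannedAcl ok = CannedAcl.valueOf("PublicRead");
        System.out.println(ok + " -> " + ok.toHeader());  // PublicRead -> public-read
        try {
            CannedAcl.valueOf("public-read");             // no such constant name
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```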
[jira] [Updated] (HADOOP-13406) S3AFileSystem: Consider reusing filestatus in delete() and mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-13406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13406: Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1. Committed. Thanks! > S3AFileSystem: Consider reusing filestatus in delete() and mkdirs() > --- > > Key: HADOOP-13406 > URL: https://issues.apache.org/jira/browse/HADOOP-13406 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13406-branch-2-001.patch, > HADOOP-13406-branch-2-002.patch, HADOOP-13406-branch-2-003.patch > > > filestatus can be reused in rename() and in mkdirs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13041) Adding tests for coder utilities
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13041: Attachment: HADOOP-13041.06.patch > Adding tests for coder utilities > > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch, > HADOOP-13041.03.patch, HADOOP-13041.04.patch, HADOOP-13041.05.patch, > HADOOP-13041.06.patch > > > Enhancement missing test for {{CoderUtil}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package
Kai Sasaki created HADOOP-13419: --- Summary: Fix javadoc warnings by JDK8 in hadoop-common package Key: HADOOP-13419 URL: https://issues.apache.org/jira/browse/HADOOP-13419 Project: Hadoop Common Issue Type: Improvement Reporter: Kai Sasaki Fix compile warnings generated after migrating to JDK8. This is a subtask of HADOOP-13369.
[jira] [Updated] (HADOOP-13418) Fix javadoc warnings by JDK8 in hadoop-nfs package
[ https://issues.apache.org/jira/browse/HADOOP-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13418: Summary: Fix javadoc warnings by JDK8 in hadoop-nfs package (was: Fix ) > Fix javadoc warnings by JDK8 in hadoop-nfs package > -- > > Key: HADOOP-13418 > URL: https://issues.apache.org/jira/browse/HADOOP-13418 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kai Sasaki > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13418) Fix javadoc warnings by JDK8 in hadoop-nfs package
[ https://issues.apache.org/jira/browse/HADOOP-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13418: Description: Fix compile warning generated after migrating JDK8. > Fix javadoc warnings by JDK8 in hadoop-nfs package > -- > > Key: HADOOP-13418 > URL: https://issues.apache.org/jira/browse/HADOOP-13418 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kai Sasaki > > Fix compile warning generated after migrating JDK8. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13418) Fix
Kai Sasaki created HADOOP-13418: --- Summary: Fix Key: HADOOP-13418 URL: https://issues.apache.org/jira/browse/HADOOP-13418 Project: Hadoop Common Issue Type: Improvement Reporter: Kai Sasaki -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13417) Fix javadoc warnings by JDK8 in hadoop-auth package
Kai Sasaki created HADOOP-13417: --- Summary: Fix javadoc warnings by JDK8 in hadoop-auth package Key: HADOOP-13417 URL: https://issues.apache.org/jira/browse/HADOOP-13417 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0-alpha2 Reporter: Kai Sasaki Fix For: 3.0.0-alpha2 Fix compile warnings generated after migrating to JDK8. This is a sub-task of HADOOP-13369.
[jira] [Commented] (HADOOP-13041) Adding tests for coder utilities
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391838#comment-15391838 ] Hadoop QA commented on HADOOP-13041: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 4s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 21s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 36s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819909/HADOOP-13041.05.patch | | JIRA Issue | HADOOP-13041 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 15080ab5a1b9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7052ca8 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10074/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10074/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10074/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10074/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Adding tests for coder utilities > > > Key: HADOOP-13041 >
[jira] [Updated] (HADOOP-13041) Adding tests for coder utilities
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13041: Attachment: HADOOP-13041.05.patch > Adding tests for coder utilities > > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch, > HADOOP-13041.03.patch, HADOOP-13041.04.patch, HADOOP-13041.05.patch > > > Enhancement missing test for {{CoderUtil}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13406) S3AFileSystem: Consider reusing filestatus in delete() and mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-13406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13406: Status: Patch Available (was: Open) > S3AFileSystem: Consider reusing filestatus in delete() and mkdirs() > --- > > Key: HADOOP-13406 > URL: https://issues.apache.org/jira/browse/HADOOP-13406 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13406-branch-2-001.patch, > HADOOP-13406-branch-2-002.patch, HADOOP-13406-branch-2-003.patch > > > filestatus can be reused in rename() and in mkdirs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13406) S3AFileSystem: Consider reusing filestatus in delete() and mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-13406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13406: Status: Open (was: Patch Available) > S3AFileSystem: Consider reusing filestatus in delete() and mkdirs() > --- > > Key: HADOOP-13406 > URL: https://issues.apache.org/jira/browse/HADOOP-13406 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13406-branch-2-001.patch, > HADOOP-13406-branch-2-002.patch, HADOOP-13406-branch-2-003.patch > > > filestatus can be reused in rename() and in mkdirs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391579#comment-15391579 ] Hadoop QA commented on HADOOP-13410: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 3s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12819873/HADOOP-13410.001.patch | | JIRA Issue | HADOOP-13410 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux eff7a9286295 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7052ca8 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10073/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10073/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Attachments: HADOOP-13410.001.patch > > > Today when you run a "hadoop jar" command,
[jira] [Updated] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated HADOOP-13410: Attachment: HADOOP-13410.001.patch > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Attachments: HADOOP-13410.001.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated HADOOP-13410: Status: Patch Available (was: Open) > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > Attachments: HADOOP-13410.001.patch > > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath
[ https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391483#comment-15391483 ] Yuanbo Liu commented on HADOOP-13410: - [~sjlee0] Agree with you that there is no need to add the jar path to classpath. I think {{TestRunJar#testClientClassLoader}} has covered the test case, so I did not write a test case for this patch. Hope to get your thoughts. > RunJar adds the content of the jar twice to the classpath > - > > Key: HADOOP-13410 > URL: https://issues.apache.org/jira/browse/HADOOP-13410 > Project: Hadoop Common > Issue Type: Bug > Components: util >Reporter: Sangjin Lee >Assignee: Yuanbo Liu > > Today when you run a "hadoop jar" command, the jar is unzipped to a temporary > location and gets added to the classloader. > However, the original jar itself is still added to the classpath. > {code} > List classPath = new ArrayList<>(); > classPath.add(new File(workDir + "/").toURI().toURL()); > classPath.add(file.toURI().toURL()); > classPath.add(new File(workDir, "classes/").toURI().toURL()); > File[] libs = new File(workDir, "lib").listFiles(); > if (libs != null) { > for (File lib : libs) { > classPath.add(lib.toURI().toURL()); > } > } > {code} > As a result, the contents of the jar are present in the classpath *twice* and > are completely redundant. Although this does not necessarily cause > correctness issues, some stricter code written to require a single presence > of files may fail. > I cannot think of a good reason why the jar should be added to the classpath > if the unjarred content was added to it. I think we should remove the jar > from the classpath. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
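The change discussed in this thread can be sketched as follows. This is an illustrative sketch only, not the attached HADOOP-13410.001.patch; the class and method names are hypothetical, but the body mirrors the RunJar snippet quoted above, minus the line that re-adds the original jar:

```java
import java.io.File;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed RunJar change (not the actual patch).
public class ClasspathSketch {
  public static List<URL> buildClassPath(File workDir) throws Exception {
    List<URL> classPath = new ArrayList<>();
    classPath.add(new File(workDir + "/").toURI().toURL());
    // classPath.add(file.toURI().toURL()); // removed: the jar's contents are
    // already unpacked under workDir, so adding the jar itself would put every
    // entry on the classpath twice.
    classPath.add(new File(workDir, "classes/").toURI().toURL());
    File[] libs = new File(workDir, "lib").listFiles();
    if (libs != null) {
      for (File lib : libs) {
        classPath.add(lib.toURI().toURL());
      }
    }
    return classPath;
  }
}
```

With the jar's own URL gone, each class is visible exactly once (via the unpacked workDir), which is what stricter single-presence checks expect.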
[jira] [Commented] (HADOOP-13041) Adding tests for coder utilities
[ https://issues.apache.org/jira/browse/HADOOP-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391474#comment-15391474 ] Kai Zheng commented on HADOOP-13041: Thanks for the update. Just note that the new test class file is given an empty header comment. Would you mind filling some text? +1 once addressed. Thanks. > Adding tests for coder utilities > > > Key: HADOOP-13041 > URL: https://issues.apache.org/jira/browse/HADOOP-13041 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13041.01.patch, HADOOP-13041.02.patch, > HADOOP-13041.03.patch, HADOOP-13041.04.patch > > > Enhancement missing test for {{CoderUtil}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13416) Hide System properties of Daemon in /jmx output
Vinayakumar B created HADOOP-13416: -- Summary: Hide System properties of Daemon in /jmx output Key: HADOOP-13416 URL: https://issues.apache.org/jira/browse/HADOOP-13416 Project: Hadoop Common Issue Type: Bug Components: security Reporter: Vinayakumar B Assignee: Vinayakumar B Showing the daemon's system properties in /jmx, which is not a secured URL, could expose unwanted information to non-admin users. So it would be better to hide them from display.
[jira] [Created] (HADOOP-13415) add authentication filters to '/conf' and '/stacks' servlet
Vinayakumar B created HADOOP-13415: -- Summary: add authentication filters to '/conf' and '/stacks' servlet Key: HADOOP-13415 URL: https://issues.apache.org/jira/browse/HADOOP-13415 Project: Hadoop Common Issue Type: Bug Components: security Reporter: Vinayakumar B /conf and /stacks could reveal security-related information (configurations, paths, etc.) on the server side to non-admin users. It's better to make them go through authentication.
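For context (an illustrative fragment, not part of the ticket itself): Hadoop's HTTP endpoints are typically put behind an authentication filter via the `hadoop.http.filter.initializers` property in core-site.xml, along these lines; the values shown are examples only.

```xml
<!-- Illustrative core-site.xml fragment; values are examples, not from the ticket. -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
```

The question raised by this issue is which servlets that filter chain actually covers; /conf and /stacks would need to be included in it.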
[jira] [Updated] (HADOOP-13413) Update ZooKeeper version to 3.4.9
[ https://issues.apache.org/jira/browse/HADOOP-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-13413: Description: Just a reminder not to update ZooKeeper's version to 3.4.9 for syncing up with netty's version. was:For syncing up ZooKeeper's > Update ZooKeeper version to 3.4.9 > - > > Key: HADOOP-13413 > URL: https://issues.apache.org/jira/browse/HADOOP-13413 > Project: Hadoop Common > Issue Type: Bug >Reporter: Tsuyoshi Ozawa > > Just a reminder not to update ZooKeeper's version to 3.4.9 for syncing up > with netty's version. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper
[ https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391391#comment-15391391 ] Tsuyoshi Ozawa commented on HADOOP-12928: - [~eddyxu] thanks for the clarification. {quote} Zookeeper recently changed netty to 3.10.5.Final as well.. {quote} The ZooKeeper version which includes that change is 3.4.9, which has not been released yet (http://zookeeper.apache.org/releases.html). I prefer to update ZooKeeper to 3.4.9 and netty to 3.10.5.Final at the same time, after ZooKeeper 3.4.9 is released. This is because netty 3.10.5.Final is binary-incompatible with 3.7.1.Final, which ZooKeeper 3.4.7 uses, and Hadoop 3.0.0-alpha will be released soon: Hadoop 3.0.0-alpha is the version for getting feedback from early adopters, so we should make it as stable as possible. > Update netty to 3.10.5.Final to sync with zookeeper > --- > > Key: HADOOP-12928 > URL: https://issues.apache.org/jira/browse/HADOOP-12928 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.7.2 >Reporter: Hendy Irawan >Assignee: Lei (Eddy) Xu > Attachments: HADOOP-12928-branch-2.00.patch, HADOOP-12928.01.patch, > HADOOP-12928.02.patch, HDFS-12928.00.patch > > > Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper > 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927 > Pull request: https://github.com/apache/hadoop/pull/85
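For reference, the upgrade being discussed would amount to a pom.xml version bump along these lines. This is an illustrative fragment only; the real change is in the patches attached to HADOOP-12928.

```xml
<!-- Illustrative dependency pin; see the attached HADOOP-12928 patches for the real change. -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.10.5.Final</version>
</dependency>
```

The binary incompatibility Ozawa mentions is why the pin cannot move ahead of the netty version that the bundled ZooKeeper release was built against.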
[jira] [Created] (HADOOP-13414) Hide Jetty Server version header in HTTP responses
Vinayakumar B created HADOOP-13414: -- Summary: Hide Jetty Server version header in HTTP responses Key: HADOOP-13414 URL: https://issues.apache.org/jira/browse/HADOOP-13414 Project: Hadoop Common Issue Type: Bug Components: security Reporter: Vinayakumar B Assignee: Vinayakumar B Hide the Jetty Server version in the HTTP response header. Some security analyzers would flag this as an issue.