[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333144#comment-16333144 ]

genericqa commented on HADOOP-12862:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 9s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 16m 25s | trunk passed |
| +1 | compile | 12m 51s | trunk passed |
| +1 | checkstyle | 0m 37s | trunk passed |
| +1 | mvnsite | 1m 5s | trunk passed |
| +1 | shadedclient | 11m 48s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 28s | trunk passed |
| +1 | javadoc | 0m 54s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 43s | the patch passed |
| +1 | compile | 11m 39s | the patch passed |
| +1 | javac | 11m 39s | the patch passed |
| +1 | checkstyle | 0m 37s | the patch passed |
| +1 | mvnsite | 1m 1s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 9m 50s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 34s | the patch passed |
| +1 | javadoc | 0m 55s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 53s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 80m 48s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906913/HADOOP-12862.008.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux c57556a31473 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2ed9d61 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14009/testReport/ |
| Max. process+thread count | 1399 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output |
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333084#comment-16333084 ]

Wei-Chiu Chuang commented on HADOOP-12862:
------------------------------------------
Thanks for the reminder, [~lars_francke]. Here's the v008 patch to address [~drankye]'s comments.

{quote}2. Writing some plain password in configuration file should be discouraged, if allowed; could we go for the password from reading the file first? {quote}

First of all, I would argue that change would be incompatible. Second, if users use credential files, the current way of getting the password already tries to load it from the credential file first, then from the configuration file, and finally from the password file. So I think that's good enough from a security perspective.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch,
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch,
> HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch
>
> In a secure environment, SSL is used to encrypt LDAP requests for group
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background: the Hadoop name node, as an LDAP client, talks to an LDAP server
> to resolve the group mapping of a user. In the case of LDAP over SSL, the
> typical scenario is one-way authentication (the client verifies that the
> server's certificate is real) by storing the server's certificate in the
> client's truststore.
> A rarer scenario is two-way authentication: in addition to the client storing
> a truststore to verify the server, the server also verifies that the
> client's certificate is real, so the client stores its own certificate in
> its keystore.
> However, the current implementation of LDAP over SSL does not seem
> correct, in that it only configures a keystore but no truststore (so the LDAP
> server can verify Hadoop's certificate, but Hadoop may not be able to verify
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the
> truststore/password for the LDAP server, and use them to configure the system
> properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words may be imprecise, but I hope this makes
> sense.
> Oracle's SSL LDAP documentation:
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide:
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
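The fix the description proposes (a truststore/password pair of configuration properties mapped onto the JSSE system properties) can be outlined in a few lines. This is a minimal, self-contained sketch; the class name and the property keys `ldap.ssl.truststore` / `ldap.ssl.truststore.password` are illustrative stand-ins, not necessarily the names the patch uses:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: map hypothetical truststore settings onto the JSSE system
// properties that the JNDI LDAP provider reads when it opens an SSL
// socket. Property names here are illustrative only.
public class LdapsTrustStoreSketch {
    static void applyTrustStore(Map<String, String> conf) {
        String store = conf.get("ldap.ssl.truststore");
        String password = conf.get("ldap.ssl.truststore.password");
        if (store != null && !store.isEmpty()) {
            System.setProperty("javax.net.ssl.trustStore", store);
        }
        if (password != null && !password.isEmpty()) {
            System.setProperty("javax.net.ssl.trustStorePassword", password);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("ldap.ssl.truststore", "/etc/hadoop/ldap-truststore.jks");
        applyTrustStore(conf);
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
        // prints /etc/hadoop/ldap-truststore.jks
    }
}
```

Because these are JVM-wide system properties, they affect every SSL connection in the process, which is one reason the real change deserves care around when and how they are set.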
[jira] [Updated] (HADOOP-15093) Deprecation of yarn.resourcemanager.zk-address is undocumented
[ https://issues.apache.org/jira/browse/HADOOP-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HADOOP-15093:
-----------------------------------
Reporter: Namit Maheshwari (was: Eric Wohlstadter)

> Deprecation of yarn.resourcemanager.zk-address is undocumented
> --
>
> Key: HADOOP-15093
> URL: https://issues.apache.org/jira/browse/HADOOP-15093
> Project: Hadoop Common
> Issue Type: Bug
> Components: documentation
> Affects Versions: 2.9.0, 3.0.0, 3.1.0
> Reporter: Namit Maheshwari
> Assignee: Ajay Kumar
> Priority: Major
> Labels: documentation
> Fix For: 3.1.0
>
> Attachments: HADOOP-15093.001.patch
>
> "yarn.resourcemanager.zk-address" was deprecated in 2.9.x and moved to
> "hadoop.zk.address". However, this doesn't appear in the Deprecated Properties
> documentation.
> Additionally, the Configuration base class doesn't auto-translate from
> "yarn.resourcemanager.zk-address" to "hadoop.zk.address". Only the sub-class
> YarnConfiguration does the translation.
> Also, the 2.9+ Resource Manager HA documentation still refers to the use of
> "yarn.resourcemanager.zk-address".
> https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html
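The translation that YarnConfiguration performs, and that the report says the Configuration base class lacks, amounts to a key-aliasing table consulted on every get and set. A self-contained illustration of that mechanism (not Hadoop's actual implementation; class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of deprecated-key translation: reads and writes of the
// old key fall through to the new key, so both spellings stay in sync.
public class DeprecationSketch {
    private final Map<String, String> deprecatedToNew = new HashMap<>();
    private final Map<String, String> values = new HashMap<>();

    void addDeprecation(String oldKey, String newKey) {
        deprecatedToNew.put(oldKey, newKey);
    }

    void set(String key, String value) {
        // Store under the canonical (new) name so both spellings agree.
        values.put(deprecatedToNew.getOrDefault(key, key), value);
    }

    String get(String key) {
        return values.get(deprecatedToNew.getOrDefault(key, key));
    }

    public static void main(String[] args) {
        DeprecationSketch conf = new DeprecationSketch();
        conf.addDeprecation("yarn.resourcemanager.zk-address", "hadoop.zk.address");
        conf.set("yarn.resourcemanager.zk-address", "zk1:2181");
        System.out.println(conf.get("hadoop.zk.address")); // zk1:2181
    }
}
```

The bug report is precisely that this aliasing lives only in the YARN subclass, so code reading through a plain Configuration never sees the old key's value.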
[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-12862:
-------------------------------------
Attachment: HADOOP-12862.008.patch

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch,
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch,
> HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch
[jira] [Commented] (HADOOP-15166) CLI MiniCluster fails with ClassNotFoundException o.a.h.yarn.server.timelineservice.collector.TimelineCollectorManager
[ https://issues.apache.org/jira/browse/HADOOP-15166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333053#comment-16333053 ]

Hudson commented on HADOOP-15166:
---------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13528 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13528/])
HADOOP-15166 CLI MiniCluster fails with ClassNotFoundException (vrushali: rev c191538ed18e12fff157e88a3203b23b20c10d83)
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm
* (edit) hadoop-mapreduce-project/bin/mapred

> CLI MiniCluster fails with ClassNotFoundException
> o.a.h.yarn.server.timelineservice.collector.TimelineCollectorManager
> --
>
> Key: HADOOP-15166
> URL: https://issues.apache.org/jira/browse/HADOOP-15166
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Gera Shegalov
> Assignee: Gera Shegalov
> Priority: Major
> Fix For: 3.0.1
>
> Attachments: HADOOP-15166.001.patch
>
> Following CLIMiniCluster.md.vm to start minicluster fails due to:
> {code}
> Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorManager
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 62 more
> {code}
[jira] [Updated] (HADOOP-15166) CLI MiniCluster fails with ClassNotFoundException o.a.h.yarn.server.timelineservice.collector.TimelineCollectorManager
[ https://issues.apache.org/jira/browse/HADOOP-15166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vrushali C updated HADOOP-15166:
--------------------------------
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
Status: Resolved (was: Patch Available)

Thanks [~jira.shegalov] for the patch. Committed to trunk as part of [https://github.com/apache/hadoop/commit/c191538ed18e12fff157e88a3203b23b20c10d83]

> CLI MiniCluster fails with ClassNotFoundException
> o.a.h.yarn.server.timelineservice.collector.TimelineCollectorManager
> --
>
> Key: HADOOP-15166
> URL: https://issues.apache.org/jira/browse/HADOOP-15166
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Gera Shegalov
> Assignee: Gera Shegalov
> Priority: Major
> Fix For: 3.0.1
>
> Attachments: HADOOP-15166.001.patch
[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333014#comment-16333014 ]

Hanisha Koneru commented on HADOOP-15121:
-----------------------------------------
We can check that the {{metricsProxy#delegate}} object is not the same as the current {{DecayRpcScheduler}} object before setting it again. We can do this in {{MetricsProxy#getInstance()}}.
{code:java}
if (mp == null) {
  // No proxy for this namespace yet; create one.
  mp = new MetricsProxy(namespace, numLevels, drs);
  INSTANCES.put(namespace, mp);
} else if (mp.delegate.get() != drs) {
  // Re-point the existing proxy at the new scheduler instance.
  mp.setDelegate(drs);
}{code}

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.8.2
> Reporter: Tao Jie
> Assignee: Tao Jie
> Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch,
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch,
> HADOOP-15121.006.patch, HADOOP-15121.007.patch
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) > at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693) > at > org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102) > at > org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76) > at org.apache.hadoop.ipc.Server.(Server.java:2612) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678) > {code} > It seems that {{metricsProxy}} in DecayRpcScheduler should initiate its >
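Based on the discussion above, the NPE arises when the proxy's delegate reference is null at the moment metrics are pulled. A self-contained sketch of that failure mode and its null guard, assuming the proxy holds the scheduler through a WeakReference as the comment thread suggests; all names here are illustrative, not Hadoop's actual code:

```java
import java.lang.ref.WeakReference;

// Sketch of the suspected failure: a metrics proxy holding its
// scheduler via WeakReference can see the referent missing or
// collected, so every dereference must be null-guarded.
public class MetricsProxySketch {
    private volatile WeakReference<Object> delegate = new WeakReference<>(null);

    void setDelegate(Object scheduler) {
        this.delegate = new WeakReference<>(scheduler);
    }

    String getMetrics() {
        Object scheduler = delegate.get();
        if (scheduler == null) {
            // Without this guard, scheduler.toString() below would throw
            // NullPointerException, the shape of the exception in the report.
            return "<no delegate>";
        }
        return scheduler.toString();
    }

    public static void main(String[] args) {
        MetricsProxySketch proxy = new MetricsProxySketch();
        System.out.println(proxy.getMetrics()); // <no delegate>
        proxy.setDelegate("scheduler-1");
        System.out.println(proxy.getMetrics()); // scheduler-1
    }
}
```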
[jira] [Assigned] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader
[ https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair reassigned HADOOP-14067:
--------------------------------------
Assignee: Thejas M Nair

> VersionInfo should load version-info.properties from its own classloader
> --
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Reporter: Thejas M Nair
> Assignee: Thejas M Nair
> Priority: Major
> Attachments: HADOOP-14067.01.patch
>
> org.apache.hadoop.util.VersionInfo loads the version-info.properties file via
> the current thread's classloader.
> However, for applications that use hadoop classes dynamically
> (e.g. JDBC-based tools such as SQuirreL SQL), the current thread's classloader
> might not be the one that loaded the hadoop classes (including VersionInfo),
> and the lookup would fail to find the properties file.
> The right place to look for the properties file is the classloader of the
> VersionInfo class, as the right version is the one associated with the rest of
> the loaded hadoop classes, and not necessarily the one on the current thread's
> classloader.
> Created a related jira - HADOOP-14066 - to make the methods to get the version
> via VersionInfo a public API.
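The proposed lookup strategy can be sketched in plain Java: ask the classloader that defined the class, falling back to the system classloader for classes loaded by the bootstrap loader (whose `getClassLoader()` is null). This illustrates the approach, not the patch itself; the class and method names are stand-ins:

```java
// Sketch: resolve a resource against the classloader that defined
// this class rather than the thread context classloader, so the
// lookup follows the jars the class itself shipped with.
public class VersionInfoSketch {
    static java.io.InputStream openVersionResource(String name) {
        ClassLoader own = VersionInfoSketch.class.getClassLoader();
        return own != null
            ? own.getResourceAsStream(name)
            : ClassLoader.getSystemResourceAsStream(name);
    }

    public static void main(String[] args) {
        // With no such file on the classpath this resolves to null; the
        // point is *which* loader is asked, not this particular lookup.
        System.out.println(openVersionResource("common-version-info.properties"));
    }
}
```

The contrast is with `Thread.currentThread().getContextClassLoader()`, which in embedding hosts (app servers, JDBC tools) may be a loader that has never seen the hadoop jars.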
[jira] [Updated] (HADOOP-14067) VersionInfo should load version-info.properties from its own classloader
[ https://issues.apache.org/jira/browse/HADOOP-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HADOOP-14067:
-----------------------------------
Attachment: HADOOP-14067.01.patch

> VersionInfo should load version-info.properties from its own classloader
> --
>
> Key: HADOOP-14067
> URL: https://issues.apache.org/jira/browse/HADOOP-14067
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Reporter: Thejas M Nair
> Priority: Major
> Attachments: HADOOP-14067.01.patch
[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError
[ https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332968#comment-16332968 ]

Aaron Fabbri commented on HADOOP-12502:
---------------------------------------
Thank you for the new patch, [~vinayrpet], and for working on this. I am having a very busy week, but I will try to review it next week.

> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.3.0
> Reporter: Philipp Schuegerl
> Assignee: Vinayakumar B
> Priority: Major
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch,
> HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch,
> HADOOP-12502-06.patch, HADOOP-12502-07.patch, HADOOP-12502-08.patch,
> HADOOP-12502-09.patch
>
> Setting the replication of an HDFS folder recursively can run out of memory.
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
> at java.util.Arrays.copyOfRange(Arrays.java:2694)
> at java.lang.String.(String.java:203)
> at java.lang.String.substring(String.java:1913)
> at java.net.URI$Parser.substring(URI.java:2850)
> at java.net.URI$Parser.parse(URI.java:3046)
> at java.net.URI.(URI.java:753)
> at org.apache.hadoop.fs.Path.initialize(Path.java:203)
> at org.apache.hadoop.fs.Path.(Path.java:116)
> at org.apache.hadoop.fs.Path.(Path.java:94)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
> at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
> at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
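The stack trace shows the shell recursing once per directory level while each level holds a fully materialized listing, which is what exhausts the heap on wide, deep trees. One memory-friendly shape such a traversal can take is an explicit work stack over streamed directory listings. The sketch below uses java.nio locally as a stand-in for the HDFS FileSystem API, so it is an illustration of the technique rather than the actual patch:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: iterative traversal with an explicit work stack and a
// per-directory streamed listing, so neither a full recursive
// listing nor a deep call stack is ever held in memory.
public class IterativeWalkSketch {
    static long countFiles(Path root) throws IOException {
        long files = 0;
        Deque<Path> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Path dir = stack.pop();
            try (DirectoryStream<Path> children = Files.newDirectoryStream(dir)) {
                for (Path child : children) {   // streamed, not materialized
                    if (Files.isDirectory(child)) {
                        stack.push(child);
                    } else {
                        files++;                // e.g. apply setReplication here
                    }
                }
            }
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("walk");
        Files.createFile(root.resolve("a"));
        Files.createDirectory(root.resolve("d"));
        Files.createFile(root.resolve("d").resolve("b"));
        System.out.println(countFiles(root)); // 2
    }
}
```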
[jira] [Commented] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures
[ https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332879#comment-16332879 ]

Arpit Agarwal commented on HADOOP-12897:
----------------------------------------
A couple of minor comments:
# We should probably wrap AuthenticationException in addition to IOException.
# You can skip this log message.
{code}
LOG.warn("Unable to wrap exception of type " + exceptionClass + ": it has no (String) constructor", e);
{code}
# Coding style: need spaces after the try and before the catch keywords.

> KerberosAuthenticator.authenticate to include URL on IO failures
> --
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
> Issue Type: Improvement
> Components: security
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Assignee: Ajay Kumar
> Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}}
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.
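The requested behavior amounts to rewrapping the IOException so the target URL lands in the message while the original exception is preserved as the cause, the same idea as {{NetUtils.wrapException}} restated locally because hadoop-auth cannot depend on that class. A minimal sketch (class, method, and URL here are illustrative, not the patch's actual code):

```java
import java.io.IOException;

// Sketch: rewrap an IOException so the failing URL appears in the
// message, keeping the original exception as the cause for debugging.
public class UrlWrapSketch {
    static IOException wrap(String url, IOException e) {
        return new IOException(
            "Error connecting to " + url + ": " + e.getMessage(), e);
    }

    public static void main(String[] args) {
        IOException wrapped =
            wrap("http://host.example.com:14000/webhdfs", new IOException("Connection refused"));
        System.out.println(wrapped.getMessage());
        // Error connecting to http://host.example.com:14000/webhdfs: Connection refused
    }
}
```

Per the review comment above, a real implementation would give AuthenticationException the same treatment.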
[jira] [Resolved] (HADOOP-14577) ITestS3AInconsistency.testGetFileStatus failing in -DS3guard test runs
[ https://issues.apache.org/jira/browse/HADOOP-14577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abraham Fine resolved HADOOP-14577.
-----------------------------------
Resolution: Cannot Reproduce

> ITestS3AInconsistency.testGetFileStatus failing in -DS3guard test runs
> --
>
> Key: HADOOP-14577
> URL: https://issues.apache.org/jira/browse/HADOOP-14577
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, test
> Affects Versions: 3.0.0-beta1
> Reporter: Sean Mackrory
> Assignee: Abraham Fine
> Priority: Minor
>
> This test is failing for me when run individually or in parallel (with
> -Ds3guard), even if I revert back to the commit that introduced it. I thought
> I had successful test runs on that before and haven't changed anything in my
> test configuration.
> {code}Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.671
> sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AInconsistency
> testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency) Time
> elapsed: 4.475 sec <<< FAILURE!
> java.lang.AssertionError: S3Guard failed to list parent of inconsistent child.
> at org.junit.Assert.fail(Assert.java:88)
> at org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:83){code}
[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization
[ https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332803#comment-16332803 ]

genericqa commented on HADOOP-9747:
-----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 15m 24s | trunk passed |
| +1 | compile | 12m 34s | trunk passed |
| +1 | checkstyle | 0m 42s | trunk passed |
| +1 | mvnsite | 1m 5s | trunk passed |
| +1 | shadedclient | 10m 15s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 20s | trunk passed |
| +1 | javadoc | 0m 46s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 38s | the patch passed |
| +1 | compile | 12m 41s | the patch passed |
| +1 | javac | 12m 41s | the patch passed |
| -0 | checkstyle | 0m 31s | hadoop-common-project/hadoop-common: The patch generated 32 new + 189 unchanged - 27 fixed = 221 total (was 216) |
| +1 | mvnsite | 0m 54s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 8m 37s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 31s | the patch passed |
| +1 | javadoc | 0m 49s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 21s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 76m 24s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.token.delegation.TestDelegationToken |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-9747 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906860/HADOOP-9747-trunk-03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux f1ad3947b167 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 130f8bc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/14008/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit |
[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava
[ https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332801#comment-16332801 ] genericqa commented on HADOOP-15170: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 54s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 95m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15170 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906855/HADOOP-15170.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 620afe4511fb 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 130f8bc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14006/testReport/ | | Max. process+thread count | 1348 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14006/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add symlink support to FileUtil#unTarUsingJava > --- > > Key: HADOOP-15170 >
[jira] [Commented] (HADOOP-15176) Enhance IAM assumed role support in S3A client
[ https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332797#comment-16332797 ] genericqa commented on HADOOP-15176: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 11 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 5s{color} | {color:red} root generated 2 new + 1239 unchanged - 2 fixed = 1241 total (was 1241) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 21s{color} | {color:orange} root: The patch generated 33 new + 16 unchanged - 0 fixed = 49 total (was 16) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 6s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 36s{color} | {color:red} hadoop-aws in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-tools/hadoop-aws | | | Format string should use %n rather than n in org.apache.hadoop.fs.s3a.S3AUtils.translateMultiObjectDeleteException(String, MultiObjectDeleteException) At S3AUtils.java:rather than n in org.apache.hadoop.fs.s3a.S3AUtils.translateMultiObjectDeleteException(String, MultiObjectDeleteException) At S3AUtils.java:[line 409] | | Failed junit tests |
[jira] [Commented] (HADOOP-14918) remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332787#comment-16332787 ] genericqa commented on HADOOP-14918: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 27s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-14918 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906858/HADOOP-14918-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux
[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332739#comment-16332739 ] Ajay Kumar commented on HADOOP-14788: - [~ste...@apache.org],[~hanishakoneru] thanks for the review and commit. > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Fix For: 3.1.0 > > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, > HADOOP-14788.006.patch, HADOOP-14788.007.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps > with the filename, so losing the exception class information. > Is this needed, or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332737#comment-16332737 ] Ajay Kumar commented on HADOOP-15114: - Thanks [~ste...@apache.org],[~arpitagarwal] for the review and commit, and [~brahmareddy] for reporting. > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 3.1.0 > > Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, > HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, > HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
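For readers unfamiliar with the pattern, a closeStreams-style helper generally looks like the sketch below (illustrative only; the actual HADOOP-15114 code is in the patches above): close each argument, skipping nulls and swallowing close-time IOExceptions so cleanup can't mask an earlier failure.

```java
import java.io.Closeable;
import java.io.IOException;

public class IOUtilsSketch {
    // Sketch of a closeStreams(...) helper (names are illustrative,
    // not the committed Hadoop code): best-effort close of every
    // argument, ignoring nulls and exceptions thrown by close().
    public static void closeStreams(Closeable... streams) {
        for (Closeable c : streams) {
            if (c == null) {
                continue;
            }
            try {
                c.close();
            } catch (IOException ignored) {
                // best-effort cleanup; a real implementation would log this
            }
        }
    }
}
```

A null entry or a stream whose close() throws does not prevent the remaining streams from being closed, which is the point of the helper.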
[jira] [Commented] (HADOOP-8807) Update README and website to reflect HADOOP-8662
[ https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332711#comment-16332711 ] genericqa commented on HADOOP-8807: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 22m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-mapreduce-project . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 4s{color} | {color:orange} root: The patch generated 1 new + 8 unchanged - 1 fixed = 9 total (was 9) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-mapreduce-project . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 3s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 41s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}236m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-8807 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906831/HADOOP-8807.01.patch | | Optional Tests | asflicense compile
[jira] [Commented] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.
[ https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332709#comment-16332709 ] Steve Loughran commented on HADOOP-15178: - Do we have a test which tries to wrap an IOE which doesn't have a public string constructor? I propose:
* process an IOE with no string constructor (expect: downgrade wrap)
* process an IOE with a private one
> Generalize NetUtils#wrapException to handle other subclasses with String > Constructor. > - > > Key: HADOOP-15178 > URL: https://issues.apache.org/jira/browse/HADOOP-15178 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HADOOP-15178.001.patch > > > NetUtils#wrapException returns an IOException if the exception passed to it is > not of type > SocketException, EOFException, NoRouteToHostException, SocketTimeoutException, UnknownHostException, ConnectException, or BindException. > By default, it should always return an instance (subclass of IOException) of > the same type unless a String constructor is not available.
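The generalization being discussed can be sketched with reflection (illustrative only, not the actual NetUtils code or the HADOOP-15178 patch): rebuild the same exception class via a public (String) constructor, and if one isn't available, perform the "downgrade wrap" to a plain IOException.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

public class WrapSketch {
    // Sketch: wrap an IOException with extra context while preserving
    // its concrete class when a public String constructor exists.
    public static IOException wrap(String context, IOException e) {
        try {
            Constructor<? extends IOException> ctor =
                e.getClass().getConstructor(String.class);
            IOException wrapped =
                ctor.newInstance(context + ": " + e.getMessage());
            wrapped.initCause(e);
            return wrapped;
        } catch (ReflectiveOperationException noUsableCtor) {
            // "Downgrade wrap": no public (String) constructor,
            // so fall back to a plain IOException.
            return new IOException(context + ": " + e, e);
        }
    }
}
```

With this shape, wrapping a ConnectException still yields a ConnectException (so callers catching the subclass keep working), while an IOE subclass without a public String constructor degrades to IOException — exactly the two cases the proposed tests would cover.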
[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization
[ https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332706#comment-16332706 ] Daryn Sharp commented on HADOOP-9747: - Will post a branch-2 patch if this version is deemed acceptable. As mentioned before, all "intelligent" studying of the subject is gone. It just explicitly tracks the principal, keytab, and ticket cache used during login, ensures those parameters are used during relogin, and removes all the synchronization. Ignores "external" subjects (those not created by the UGI) that lack a User principal. > Reduce unnecessary UGI synchronization > -- > > Key: HADOOP-9747 > URL: https://issues.apache.org/jira/browse/HADOOP-9747 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-9747-trunk-03.patch, HADOOP-9747-trunk.01.patch, > HADOOP-9747-trunk.02.patch, HADOOP-9747.2.branch-2.patch, > HADOOP-9747.2.trunk.patch, HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch > > > Jstacks of heavily loaded NNs show up to dozens of threads blocking in the > UGI.
[jira] [Commented] (HADOOP-15182) Support to change back to signature version 2 of AWS SDK
[ https://issues.apache.org/jira/browse/HADOOP-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332697#comment-16332697 ] Steve Loughran commented on HADOOP-15182: - OK. I don't want to do any form of version rollback, because it will cause too many other problems. There's a whole history of JIRAs related to versioning things there: follow references of HADOOP-9991. Is there a way to support the V2 signer somehow? That's what we need. > Support to change back to signature version 2 of AWS SDK > > > Key: HADOOP-15182 > URL: https://issues.apache.org/jira/browse/HADOOP-15182 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.9.0 > Environment: > > >Reporter: Yonger >Priority: Minor > > Currently s3a depends on aws-java-sdk-bundle-1.11.199, which uses signature v4. > So some s3-compatible systems (e.g. Ceph) which are still using v2 can't work with Hadoop. > s3cmd can use v2 by specifying an option like: > {code:java} > s3cmd --signature-v2 ls s3://xxx/{code} > > Maybe we can add a parameter to allow falling back to signature v2 in s3a.
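For context on "support the V2 signer somehow": S3A already exposes a fs.s3a.signing-algorithm option which passes a signer name through to the AWS SDK, and the commonly suggested value for the legacy V2 signer is "S3SignerType". Whether the bundled SDK version still registers that signer should be verified; treat the snippet below as a sketch, not a confirmed fix for this issue.

```xml
<!-- core-site.xml: request the legacy V2 signer from the AWS SDK.
     "S3SignerType" is the SDK's registered name for the old signer;
     confirm it exists in the SDK version actually on the classpath. -->
<property>
  <name>fs.s3a.signing-algorithm</name>
  <value>S3SignerType</value>
</property>
```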
[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential
[ https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332683#comment-16332683 ] Steve Loughran commented on HADOOP-15158: - Test-wise, there's no need for {{testGetRoleCredentialProvider()}} to catch exceptions and fail; just have the test declare that it throws exceptions. Makes for a simpler test with no loss of stack trace. > AliyunOSS: Supports role based credential > - > > Key: HADOOP-15158 > URL: https://issues.apache.org/jira/browse/HADOOP-15158 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1 > > Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, > HADOOP-15158.003.patch, HADOOP-15158.004.patch > > > Currently, AliyunCredentialsProvider supports credentials by > configuration (core-site.xml). Sometimes, an admin wants to create different > temporary credentials (key/secret/token) for different roles so that one role > cannot read data that belongs to another role. > So, our code should support passing in the URI when creating an > XXXCredentialsProvider so that we can get user info (role) from the URI
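The review point above — declare "throws" instead of catch-and-fail — can be sketched as follows (method names and the fail() stand-in are illustrative, not the real HADOOP-15158 test):

```java
import java.io.IOException;

public class TestStyleSketch {
    // Stand-ins for JUnit's fail() and the method under test.
    static void fail(String msg) { throw new AssertionError(msg); }
    static void doSomethingThatMayThrow() throws IOException { }

    // Anti-pattern: catch-and-fail discards the original stack trace;
    // only e.toString() survives in the failure message.
    public void catchAndFail() {
        try {
            doSomethingThatMayThrow();
        } catch (IOException e) {
            fail("unexpected: " + e);
        }
    }

    // Preferred: declare "throws" and let the test runner report the
    // full stack trace of any unexpected exception.
    public void declareThrows() throws IOException {
        doSomethingThatMayThrow();
    }
}
```

Both styles pass when nothing throws; the difference only shows on failure, where the second style preserves the complete stack trace.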
[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential
[ https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332678#comment-16332678 ] Steve Loughran commented on HADOOP-15158: - You should have a look at HADOOP-15141 and see if you can do similar stuff related to tests & docs. > AliyunOSS: Supports role based credential > - > > Key: HADOOP-15158 > URL: https://issues.apache.org/jira/browse/HADOOP-15158 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Fix For: 3.1.0, 2.9.1, 3.0.1 > > Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, > HADOOP-15158.003.patch, HADOOP-15158.004.patch > > > Currently, AliyunCredentialsProvider supports credentials by > configuration (core-site.xml). Sometimes, an admin wants to create different > temporary credentials (key/secret/token) for different roles so that one role > cannot read data that belongs to another role. > So, our code should support passing in the URI when creating an > XXXCredentialsProvider so that we can get user info (role) from the URI
[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization
[ https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HADOOP-9747: Attachment: HADOOP-9747-trunk-03.patch > Reduce unnecessary UGI synchronization > -- > > Key: HADOOP-9747 > URL: https://issues.apache.org/jira/browse/HADOOP-9747 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-9747-trunk-03.patch, HADOOP-9747-trunk.01.patch, > HADOOP-9747-trunk.02.patch, HADOOP-9747.2.branch-2.patch, > HADOOP-9747.2.trunk.patch, HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch > > > Jstacks of heavily loaded NNs show up to dozens of threads blocking in the > UGI.
[jira] [Commented] (HADOOP-13134) WASB's file delete still throwing Blob not found exception
[ https://issues.apache.org/jira/browse/HADOOP-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332677#comment-16332677 ] Steve Loughran commented on HADOOP-13134: -
# That's unrelated; start with a JIRA for Spark and see where it goes.
# deleteOnExit isn't called on VM exit; it's called on close of the FS instance. If you aren't using the FS cache, then you have to manage the FS lifecycle yourself and call close() when it's time. Looks like WASB is being helpful by closing the fs, which is then closing staging entries.
# There's no finalize() call in s3a. I'm not a fan of them because you can never be sure when they're called. But I'm also not a fan of the entire FS shutdown process, as it can get into a mess when you are shutting down due to network errors and the filesystems are trying to run through their lists of things to delete. Given that any form of clean termination is a best-effort outcome, I'd argue for writing code which doesn't rely on it.
Personally, I'd turn caching back on. You gain from reuse of the worker thread pools and avoid all startup delays, which matters as every worker thread in a Spark query is going to be calling fs.get() on init, at the very least. It's a major perf killer. You've just found a different problem. > WASB's file delete still throwing Blob not found exception > -- > > Key: HADOOP-13134 > URL: https://issues.apache.org/jira/browse/HADOOP-13134 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 2.7.1 >Reporter: Lin Chan >Assignee: Thomas Marquardt >Priority: Major > > WASB is still throwing a blob-not-found exception as shown in the following > stack. Need to catch that and convert it to a Boolean return code in WASB delete.
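The caching and deleteOnExit semantics described above can be modeled with a toy sketch (plain Java, not Hadoop code; all names are illustrative): with the cache on, get() hands back a shared instance, while with it off every caller owns a fresh instance and must close() it — and close() is what processes the deleteOnExit list, not JVM exit.

```java
import java.io.Closeable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of FileSystem.get() caching semantics.
public class FsCacheSketch {
    static final Map<String, Fs> CACHE = new HashMap<>();

    static class Fs implements Closeable {
        final List<String> pendingDeletes = new ArrayList<>();
        boolean closed;
        void deleteOnExit(String path) { pendingDeletes.add(path); }
        @Override public void close() {
            closed = true;
            // Registered paths are deleted here, on close of the
            // instance -- not at VM exit.
            pendingDeletes.clear();
        }
    }

    static Fs get(String uri, boolean cacheEnabled) {
        if (!cacheEnabled) {
            return new Fs();  // caller owns the lifecycle
        }
        return CACHE.computeIfAbsent(uri, u -> new Fs());  // shared
    }
}
```

The sketch shows why closing an uncached instance mid-job wipes its staging entries: the deleteOnExit list belongs to the instance, and close() drains it immediately.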
[jira] [Updated] (HADOOP-14918) remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14918: Status: Patch Available (was: Open) Patch 002, in sync with trunk again. Also changed the name of the "auth" profile from "non-auth" to "auth"; after all, that is what it is. Testing: S3 Ireland, with/without S3Guard DynamoDB. > remove the Local Dynamo DB test option > -- > > Key: HADOOP-14918 > URL: https://issues.apache.org/jira/browse/HADOOP-14918 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0, 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch > > > I'm going to propose cutting out the localdynamo test option for s3guard > * the local DDB JAR is unmaintained and lags the SDK we work with; eventually > there'll be differences in API. > * as the local Dynamo DB is unshaded, it complicates classpath setup for the > build. Remove it and there's no need to worry about versions of anything > other than the shaded AWS SDK > * it complicates test runs: now we need to test against both localdynamo *and* > real dynamo > * but we can't ignore real dynamo, because that's the one which matters > While the local option promises to reduce test costs, really, it's just > adding complexity. If you are testing with s3guard, you need to have a real > table to test against. And with the exception of those people testing s3a > against non-AWS, consistent endpoints, everyone should be testing with > S3Guard. > -Straightforward to remove.-
[jira] [Updated] (HADOOP-14918) remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14918: Attachment: HADOOP-14918-002.patch
[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava
[ https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332639#comment-16332639 ] Ajay Kumar commented on HADOOP-15170: - fixed checkstyle issues. > Add symlink support to FileUtil#unTarUsingJava > --- > > Key: HADOOP-15170 > URL: https://issues.apache.org/jira/browse/HADOOP-15170 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Reporter: Jason Lowe >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch > > > Now that JDK7 or later is required, we can leverage > java.nio.Files.createSymbolicLink in FileUtil.unTarUsingJava to support > archives that contain symbolic links.
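The improvement under review builds on java.nio.file.Files.createSymbolicLink. A minimal, self-contained sketch of handling a symlink entry during extraction might look as follows (method and variable names here are hypothetical, not the actual FileUtil patch; symlink creation can fail on Windows without the right privileges):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkEntrySketch {
  // If a tar entry is a symlink, create the link instead of copying bytes.
  // The link target is stored as-is in the archive and may be relative.
  public static Path extractSymlink(Path destDir, String entryName,
                                    String linkTarget) throws IOException {
    Path link = destDir.resolve(entryName);
    Files.createDirectories(link.getParent());
    return Files.createSymbolicLink(link, Paths.get(linkTarget));
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("untar");
    Files.createFile(dir.resolve("real.txt"));
    Path link = extractSymlink(dir, "link.txt", "real.txt");
    // On POSIX systems this reports a genuine symbolic link
    System.out.println(Files.isSymbolicLink(link));
  }
}
```

A real unTarUsingJava integration would branch on the tar entry type (regular file, directory, symlink) and call something like this for the symlink case.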
[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava
[ https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15170: Attachment: HADOOP-15170.002.patch
[jira] [Updated] (HADOOP-14918) remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14918: Status: Open (was: Patch Available)
[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename
[ https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332594#comment-16332594 ] Steve Loughran commented on HADOOP-15183: - Regarding fixes, I think * rename() needs to track the files copied and, even if an exception (all? or just AccessDenied?) is raised, update the metastore with the copied files anyway. Or: always add the new files, and then after the delete succeeds, delete the old ones. That is, a non-atomic update of the s3guard table. * delete: on a multipart delete exception, split out the successful entries and remove them. Leave the undeleted ones alone. > S3Guard store becomes inconsistent after partial failure of rename > -- > > Key: HADOOP-15183 > URL: https://issues.apache.org/jira/browse/HADOOP-15183 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Priority: Major > Attachments: org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt > > > If an S3A rename() operation fails partway through, such as when the user > doesn't have permissions to delete the source files after copying to the > destination, then the s3guard view of the world ends up inconsistent. In > particular the sequence > (assuming src/file* is a list of files file1...file10 and read only to > caller) > > # create file rename src/file1 dest/ ; expect AccessDeniedException in the > delete, dest/file1 will exist > # delete file dest/file1 > # rename src/file* dest/ ; expect failure > # list dest; you will not see dest/file1 > You will not see file1 in the listing, presumably because it will have a > tombstone marker and the update at the end of the rename() didn't take place: > the old data is still there.
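The first fix suggested in the comment, tracking the copied files so the metastore learns about them even when the follow-on delete fails, can be sketched like this (a toy model with placeholder stubs, not the actual S3A code; the metastore is modelled as a plain list):

```java
import java.util.ArrayList;
import java.util.List;

public class RenameTrackerSketch {
  // Copy every source into destDir, then delete the sources. Whatever
  // happens, record the successful copies in the metastore so it stays
  // consistent with the object store even after a partial failure.
  public static List<String> rename(List<String> sources, String destDir,
                                    List<String> metastore) {
    List<String> copied = new ArrayList<>();
    try {
      for (String src : sources) {
        String dest = destDir + "/" + src.substring(src.lastIndexOf('/') + 1);
        copyObject(src, dest);     // may throw, e.g. AccessDenied
        copied.add(dest);
      }
      deleteObjects(sources);      // may also fail partway through
    } finally {
      // the copies that did succeed are now real objects: tell the metastore
      metastore.addAll(copied);
    }
    return copied;
  }

  static void copyObject(String src, String dest) { /* stub: always succeeds */ }

  static void deleteObjects(List<String> keys) {
    throw new RuntimeException("simulated AccessDenied on delete");
  }

  public static void main(String[] args) {
    List<String> metastore = new ArrayList<>();
    try {
      rename(List.of("src/file1", "src/file2"), "dest", metastore);
    } catch (RuntimeException expected) {
      // the delete failed, but the metastore still saw the copied files
    }
    System.out.println(metastore); // [dest/file1, dest/file2]
  }
}
```

The key point is the finally block: the metastore update is decoupled from the success of the delete phase, which is exactly what the inconsistency described in this issue is missing.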
[jira] [Updated] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename
[ https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15183: Description: If an S3A rename() operation fails partway through, such as when the user doesn't have permissions to delete the source files after copying to the destination, then the s3guard view of the world ends up inconsistent. In particular the sequence (assuming src/file* is a list of files file1...file10 and read only to caller) # create file rename src/file1 dest/ ; expect AccessDeniedException in the delete, dest/file1 will exist # delete file dest/file1 # rename src/file* dest/ ; expect failure # list dest; you will not see dest/file1 You will not see file1 in the listing, presumably because it will have a tombstone marker and the update at the end of the rename() didn't take place: the old data is still there. was: If an S3A rename() operation fails partway through, such as when the user doesn't have permissions to delete the source files after copying to the destination, then the s3guard view of the world ends up inconsistent. In particular the sequence (assuming src/file* is a list of files file1...file10 and read only to caller) # create file dest/file1 # delete file dest/file1 # rename src/file* dest/ You will not see file1 in the listing, because it will have a tombstone marker and the update at the end of the rename() didn't take place: the old data is still there. 
[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename
[ https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332584#comment-16332584 ] Steve Loughran commented on HADOOP-15183: - Attached: debug-level log of operations leading to failure. Search for "Renaming readonly files" to get the directory rename sequence which is failing. One of the files being renamed, "readonlyChild", was unsuccessfully renamed earlier, and destDir/readonlyChild was then deleted.
[jira] [Updated] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename
[ https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15183: Attachment: org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
[jira] [Commented] (HADOOP-15176) Enhance IAM assumed role support in S3A client
[ https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332577#comment-16332577 ] Steve Loughran commented on HADOOP-15176: - Regarding the current patch: after Yetus has bashed through its style issues, it's ready for review, even though S3Guard is failing. I'd propose committing even though the renames fail, so that we know S3Guard is broken there. > Enhance IAM assumed role support in S3A client > -- > > Key: HADOOP-15176 > URL: https://issues.apache.org/jira/browse/HADOOP-15176 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.0 > Environment: >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15176-001.patch, HADOOP-15176-002.patch > > > Followup HADOOP-15141 with > * Code to generate basic AWS json policies somewhat declaratively (no hand > coded strings) > * Tests to simulate users with different permissions down the path of a > single bucket > * test-driven changes to S3A client to handle user without full write up the > FS tree > * move the new authenticator into the s3a sub-package "auth", where we can > put more auth stuff (that base s3a package is getting way too big)
[jira] [Updated] (HADOOP-15176) Enhance IAM assumed role support in S3A client
[ https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15176: Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-15176) Enhance IAM assumed role support in S3A client
[ https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332572#comment-16332572 ] Steve Loughran commented on HADOOP-15176: - Patch 002 Fails with S3Guard + DDB enabled, because of HADOOP-15183: when the delete operation after/during a rename() raises an exception, DDB isn't updated with the current state of the store, and, if there were tombstone markers in the dest directory whose filenames match the newly created ones, well, you don't get the new files in the listing. * {{S3AUtils.translateMultiObjectDeleteException()}} can look inside a multi-object delete response (200 + list of failed deletes) and extract details. If any of the failures was AccessDenied, the ex becomes an AccessDeniedException. Otherwise it's an AWSS3IOException with a full list of the failed paths and error codes. * With {{translateMultiObjectDeleteException}} working, permission failures in delete calls in delete() and rename() correctly raise an {{AccessDeniedException}}. * {{S3AFileSystem.delete()}} downgrades failure to mkdir the parent marker to a warning. * {{S3AFileSystem.deleteObjects}} logs the details of a multi-object delete at debug only. * Tests for various operations being correctly denied with both single and multi deletes enabled: renames, deletes, commit calls. * Found and fixed a bug with error reporting in {{CommitOperations.abortAllSinglePendingCommits}} (i.e. it wasn't). * LambdaTestUtils has a new method, {{eval(Callable)}}, which wraps any raised checked exceptions with an AssertionError. This makes it straightforward to use FS API calls in Java 8 streams, especially the parallel streams, which significantly speeds up things like the creation of 10 test files. Plus tests, obviously. * ITestAssumedRoleCommitOperations subclasses ITestaCommitOperations and runs under an assumed role with a policy of RW only permitted under the test directory.
Ensures that we are choosing the right permissions and nothing is being written to other paths. * remove duplicate properties in core-default.xml, review text. * assumed_role.md has a section on policies: what's required for read and write * Special section there on "why mixing permissions on different paths will complicate your life" Testing, S3 Ireland. Without S3Guard, All good. With S3Guard, renames of read only file tests fail, for single delete and multiple delete HTTP calls {code} java.lang.AssertionError: files copied to the destination: expected 11 files in s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest but got 10 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-1 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-10 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-2 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-3 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-4 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-5 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-6 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-7 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-8 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-9 at org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.assertFileCount(ITestAssumeRole.java:766) at org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.executeRenameReadOnlyData(ITestAssumeRole.java:559) at org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testRestrictedRenameReadOnlySingleDelete(ITestAssumeRole.java:484) [ERROR] 
testRestrictedRenameReadOnlyData(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole) Time elapsed: 5.036 s <<< FAILURE! java.lang.AssertionError: files copied to the destination: expected 11 files in s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest but got 10 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-1 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-10 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-2 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-3 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-4 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-5 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-6 s3a://hwdev-steve-ireland-new/test/testRestrictedRenameReadOnlyData/renameDest/file-7
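The {{eval(Callable)}} helper described in the patch notes can be sketched as follows (a simplified standalone version; the actual LambdaTestUtils signature in the patch may differ). Wrapping checked exceptions in AssertionError is what lets filesystem calls, which throw IOException, appear inside stream lambdas:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.stream.Collectors;

public class EvalSketch {
  // Run the callable; rethrow any checked exception as an AssertionError
  // so the call can be used inside a (parallel) stream pipeline.
  public static <T> T eval(Callable<T> call) {
    try {
      return call.call();
    } catch (Exception e) {
      throw new AssertionError(e.toString(), e);
    }
  }

  public static void main(String[] args) {
    // In a real test each lambda would call e.g. fs.create(path); here a
    // placeholder string stands in for the checked-exception-throwing call.
    List<String> created = List.of("f1", "f2", "f3").parallelStream()
        .map(name -> eval(() -> "created " + name))
        .collect(Collectors.toList());
    System.out.println(created.size()); // 3
  }
}
```

The parallelStream() form is the speedup mentioned above: creating many test files concurrently instead of in a sequential loop of try/catch blocks.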
[jira] [Updated] (HADOOP-15176) Enhance IAM assumed role support in S3A client
[ https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15176: Status: Open (was: Patch Available)
[jira] [Updated] (HADOOP-15176) Enhance IAM assumed role support in S3A client
[ https://issues.apache.org/jira/browse/HADOOP-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15176: Attachment: HADOOP-15176-002.patch
[jira] [Created] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename
Steve Loughran created HADOOP-15183: --- Summary: S3Guard store becomes inconsistent after partial failure of rename Key: HADOOP-15183 URL: https://issues.apache.org/jira/browse/HADOOP-15183 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.0.0 Reporter: Steve Loughran If an S3A rename() operation fails partway through, such as when the user doesn't have permissions to delete the source files after copying to the destination, then the s3guard view of the world ends up inconsistent. In particular the sequence (assuming src/file* is a list of files file1...file10 and read only to caller) # create file dest/file1 # delete file dest/file1 # rename src/file* dest/ You will not see file1 in the listing, because it will have a tombstone marker and the update at the end of the rename() didn't take place: the old data is still there.
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332549#comment-16332549 ] genericqa commented on HADOOP-12862: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 22 unchanged - 2 fixed = 22 total (was 24) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 12s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 56s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 86m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-12862 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12794837/HADOOP-12862.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 0a6153c3f9bd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d689b2d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14004/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results |
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332410#comment-16332410 ] Lars Francke commented on HADOOP-12862: --- Bumping this. [~jojochuang], if you don't have time to work on this, would you mind me giving it a stab? > LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch > > > In a secure environment, SSL is used to encrypt LDAP requests for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For information: the Hadoop name node, as an LDAP client, talks to an LDAP server > to resolve the group mapping of a user. In the case of LDAP over SSL, a > typical scenario is to establish one-way authentication (the client verifies > the server's certificate is real) by storing the server's certificate in the > client's truststore. > A rarer scenario is to establish two-way authentication: in addition to storing > a truststore for the client to verify the server, the server also verifies the > client's certificate is real, and the client stores its own certificate in > its keystore.
> However, the current implementation for LDAP over SSL does not seem to be > correct in that it only configures a keystore but no truststore (so the LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify the LDAP > server's certificate). > I think there should be an extra pair of properties to specify the > truststore/password for the LDAP server, and use those to configure the system > properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}. > I am a security layman so my words can be imprecise. But I hope this makes > sense. > Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
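The one-way-TLS setup the ticket describes can be sketched as follows. This is a minimal illustration, not the actual HADOOP-12862 patch: the method and property names other than the standard `javax.net.ssl.*` and JNDI keys are hypothetical, and the real Hadoop keys would live under the `hadoop.security.group.mapping.ldap.*` configuration namespace.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapsTruststoreSketch {
    // Hypothetical helper for illustration only: builds a JNDI environment
    // for an LDAPS connection after pointing the JVM at a truststore that
    // holds the LDAP server's certificate (one-way authentication).
    static Hashtable<String, String> ldapsEnv(String url, String truststore,
                                              String truststorePassword) {
        // The ticket's proposal: drive these JVM-wide system properties
        // from a new pair of Hadoop configuration keys.
        System.setProperty("javax.net.ssl.trustStore", truststore);
        System.setProperty("javax.net.ssl.trustStorePassword", truststorePassword);

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url); // e.g. "ldaps://ldap.example.com:636"
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        return env;
    }
}
```

With two-way authentication, the client would additionally set `javax.net.ssl.keyStore`/`javax.net.ssl.keyStorePassword` so the server can verify the client's certificate, which is the only part the implementation at the time actually configured.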
[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332400#comment-16332400 ] Hudson commented on HADOOP-15114: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13523 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13523/]) HADOOP-15114. Add closeStreams(...) to IOUtils (addendum). Contributed (stevel: rev d689b2d99c7b4d7e587225638dd8f5af0a690dcc) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 3.1.0 > > Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, > HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, > HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320]. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
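The shape of the helper added by HADOOP-15114 can be sketched as below. This is a hedged, self-contained approximation, not the committed hadoop-common code; the real implementation lives in `org.apache.hadoop.io.IOUtils` and also supports logging the swallowed exceptions.

```java
import java.io.Closeable;

public class CloseStreamsSketch {
    // Best-effort close of multiple streams: null-safe, and a failure to
    // close one stream must not prevent the remaining streams from being
    // closed. Typically used in finally blocks.
    public static void closeStreams(Closeable... streams) {
        if (streams == null) {
            return;
        }
        for (Closeable c : streams) {
            if (c == null) {
                continue;
            }
            try {
                c.close();
            } catch (Exception ignored) {
                // Swallow so the remaining streams still get closed;
                // the real IOUtils variant can log this instead.
            }
        }
    }
}
```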
[jira] [Commented] (HADOOP-9157) Better option for curl in hadoop-auth-examples
[ https://issues.apache.org/jira/browse/HADOOP-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332377#comment-16332377 ] genericqa commented on HADOOP-9157: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 28m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-9157 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906826/HADOOP-9157.01.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 51f74c39ed06 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c5bbd64 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 340 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-auth U: hadoop-common-project/hadoop-auth | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14002/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Better option for curl in hadoop-auth-examples > -- > > Key: HADOOP-9157 > URL: https://issues.apache.org/jira/browse/HADOOP-9157 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation > Environment: Ubuntu 12.04 >Reporter: Jingguo Yao >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-9157.01.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > In http://hadoop.apache.org/docs/current/hadoop-auth/Examples.html, there is > "curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt > http://localhost:8080/hadoop-auth-examples/kerberos/who;. A better way is to > use "-u :" instead of "-u foo". With the use of the former option, curl will > not prompt for a password. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332370#comment-16332370 ] Hudson commented on HADOOP-14788: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13522 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13522/]) HADOOP-14788. Credentials readTokenStorageFile to stop wrapping IOEs in (stevel: rev e5a1ad6e24807b166a40d1332c889c2c4cb4c733) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Fix For: 3.1.0 > > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, > HADOOP-14788.006.patch, HADOOP-14788.007.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps > with the filename, losing the exception class information. > Is this needed, or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
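The "wrap without losing the exception class" pattern the ticket alludes to can be sketched like this. This is an illustrative approximation loosely modeled on the idea behind `NetUtils.wrapException`, not the code committed for HADOOP-14788; the method name `wrapWithPath` is made up for this example.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

public class WrapIOESketch {
    // Re-wrap an IOException with path context while preserving its concrete
    // class where possible, so a caller can still catch the specific subclass
    // (e.g. FileNotFoundException) after the wrap.
    static IOException wrapWithPath(String path, IOException e) {
        try {
            // Try to build a new instance of the same class with an
            // augmented message, keeping the original as the cause.
            Constructor<? extends IOException> ctor =
                e.getClass().getConstructor(String.class);
            IOException wrapped = ctor.newInstance(path + ": " + e.getMessage());
            wrapped.initCause(e);
            return wrapped;
        } catch (ReflectiveOperationException noStringCtor) {
            // Fallback: the subclass has no (String) constructor, so the
            // class information is lost, but the cause chain survives.
            return new IOException(path + ": " + e, e);
        }
    }
}
```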
[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils
[ https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15114: Resolution: Fixed Status: Resolved (was: Patch Available) +1, committed > Add closeStreams(...) to IOUtils > > > Key: HADOOP-15114 > URL: https://issues.apache.org/jira/browse/HADOOP-15114 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Fix For: 3.1.0 > > Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, > HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, > HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch > > > Add closeStreams(...) in IOUtils. Originally suggested by [Jason > Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320]. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14788: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) +1, committed to trunk * not cherrypicked to branch-3 as there was some resolution conflict with the test changes from HADOOP-15114 * and not in branch-2 as it would also need switching to Java 7 > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Fix For: 3.1.0 > > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, > HADOOP-14788.006.patch, HADOOP-14788.007.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps > with the filename, losing the exception class information. > Is this needed, or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14577) ITestS3AInconsistency.testGetFileStatus failing in -DS3guard test runs
[ https://issues.apache.org/jira/browse/HADOOP-14577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332341#comment-16332341 ] Steve Loughran commented on HADOOP-14577: - I haven't seen it for a while either. Let's close as cannot reproduce and see if it comes back > ITestS3AInconsistency.testGetFileStatus failing in -DS3guard test runs > -- > > Key: HADOOP-14577 > URL: https://issues.apache.org/jira/browse/HADOOP-14577 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.0.0-beta1 >Reporter: Sean Mackrory >Assignee: Abraham Fine >Priority: Minor > > This test is failing for me when run individually or in parallel (with > -Ds3guard). Even if I revert back to the commit that introduced it. I thought > I had successful test runs on that before and haven't changed anything in my > test configuration. > {code}Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.671 > sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AInconsistency > testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency) Time > elapsed: 4.475 sec <<< FAILURE! > java.lang.AssertionError: S3Guard failed to list parent of inconsistent child. > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:83){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-8807) Update README and website to reflect HADOOP-8662
[ https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332325#comment-16332325 ] Andras Bokor commented on HADOOP-8807: -- I found it in only a few places. > Update README and website to reflect HADOOP-8662 > > > Key: HADOOP-8807 > URL: https://issues.apache.org/jira/browse/HADOOP-8807 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Eli Collins >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-8807.01.patch > > > HADOOP-8662 removed the various tabs from the website. Our top-level > README.txt and the generated docs refer to them (eg hadoop.apache.org/core, > /hdfs etc). Let's fix that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-8807) Update README and website to reflect HADOOP-8662
[ https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor reassigned HADOOP-8807: Assignee: Andras Bokor > Update README and website to reflect HADOOP-8662 > > > Key: HADOOP-8807 > URL: https://issues.apache.org/jira/browse/HADOOP-8807 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Eli Collins >Assignee: Andras Bokor >Priority: Major > > HADOOP-8662 removed the various tabs from the website. Our top-level > README.txt and the generated docs refer to them (eg hadoop.apache.org/core, > /hdfs etc). Let's fix that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-8807) Update README and website to reflect HADOOP-8662
[ https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-8807: - Status: Patch Available (was: Open) > Update README and website to reflect HADOOP-8662 > > > Key: HADOOP-8807 > URL: https://issues.apache.org/jira/browse/HADOOP-8807 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Eli Collins >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-8807.01.patch > > > HADOOP-8662 removed the various tabs from the website. Our top-level > README.txt and the generated docs refer to them (eg hadoop.apache.org/core, > /hdfs etc). Let's fix that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-8807) Update README and website to reflect HADOOP-8662
[ https://issues.apache.org/jira/browse/HADOOP-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-8807: - Attachment: HADOOP-8807.01.patch > Update README and website to reflect HADOOP-8662 > > > Key: HADOOP-8807 > URL: https://issues.apache.org/jira/browse/HADOOP-8807 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Eli Collins >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-8807.01.patch > > > HADOOP-8662 removed the various tabs from the website. Our top-level > README.txt and the generated docs refer to them (eg hadoop.apache.org/core, > /hdfs etc). Let's fix that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-6567) The error messages for authentication and authorization failures can be improved
[ https://issues.apache.org/jira/browse/HADOOP-6567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HADOOP-6567. -- Resolution: Won't Fix It's an 8-year-old ticket without concrete ideas/instructions; most probably it will never be fixed. > The error messages for authentication and authorization failures can be > improved > > > Key: HADOOP-6567 > URL: https://issues.apache.org/jira/browse/HADOOP-6567 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 0.22.0 >Reporter: Devaraj Das >Priority: Major > Labels: newbie > > The error messages in case of authentication and authorization failures can > be improved and made more structured. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-8112) Add new test class to org.apache.hadoop.ipc.Client
[ https://issues.apache.org/jira/browse/HADOOP-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HADOOP-8112. -- Resolution: Won't Fix {{org.apache.hadoop.ipc.Client}} is tested through other classes. > Add new test class to org.apache.hadoop.ipc.Client > -- > > Key: HADOOP-8112 > URL: https://issues.apache.org/jira/browse/HADOOP-8112 > Project: Hadoop Common > Issue Type: Test > Components: ipc >Affects Versions: 0.23.0 >Reporter: Shingo Furuyama >Priority: Major > > A test class for org.apache.hadoop.ipc.Client does not exist. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-7565) TestSecureIOUtils and TestTFileSeqFileComparison are flaky
[ https://issues.apache.org/jira/browse/HADOOP-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HADOOP-7565. -- Resolution: Cannot Reproduce > TestSecureIOUtils and TestTFileSeqFileComparison are flaky > -- > > Key: HADOOP-7565 > URL: https://issues.apache.org/jira/browse/HADOOP-7565 > Project: Hadoop Common > Issue Type: Bug > Components: io >Reporter: Arun C Murthy >Assignee: Andras Bokor >Priority: Critical > > https://builds.apache.org/job/Hadoop-Common-trunk-Commit/764/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-7565) TestSecureIOUtils and TestTFileSeqFileComparison are flaky
[ https://issues.apache.org/jira/browse/HADOOP-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor reassigned HADOOP-7565: Assignee: Andras Bokor > TestSecureIOUtils and TestTFileSeqFileComparison are flaky > -- > > Key: HADOOP-7565 > URL: https://issues.apache.org/jira/browse/HADOOP-7565 > Project: Hadoop Common > Issue Type: Bug > Components: io >Reporter: Arun C Murthy >Assignee: Andras Bokor >Priority: Critical > > https://builds.apache.org/job/Hadoop-Common-trunk-Commit/764/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9157) Better option for curl in hadoop-auth-examples
[ https://issues.apache.org/jira/browse/HADOOP-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-9157: - Status: Patch Available (was: Open) I agree. Also, with localhost the examples did not work for me; I had to use the FQDN. Patch 01 fixes these issues. > Better option for curl in hadoop-auth-examples > -- > > Key: HADOOP-9157 > URL: https://issues.apache.org/jira/browse/HADOOP-9157 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation > Environment: Ubuntu 12.04 >Reporter: Jingguo Yao >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-9157.01.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > In http://hadoop.apache.org/docs/current/hadoop-auth/Examples.html, there is > "curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt > http://localhost:8080/hadoop-auth-examples/kerberos/who". A better way is to > use "-u :" instead of "-u foo". With the use of the former option, curl will > not prompt for a password. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9157) Better option for curl in hadoop-auth-examples
[ https://issues.apache.org/jira/browse/HADOOP-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-9157: - Attachment: HADOOP-9157.01.patch > Better option for curl in hadoop-auth-examples > -- > > Key: HADOOP-9157 > URL: https://issues.apache.org/jira/browse/HADOOP-9157 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation > Environment: Ubuntu 12.04 >Reporter: Jingguo Yao >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-9157.01.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > In http://hadoop.apache.org/docs/current/hadoop-auth/Examples.html, there is > "curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt > http://localhost:8080/hadoop-auth-examples/kerberos/who;. A better way is to > use "-u :" instead of "-u foo". With the use of the former option, curl will > not prompt for a password. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-9157) Better option for curl in hadoop-auth-examples
[ https://issues.apache.org/jira/browse/HADOOP-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor reassigned HADOOP-9157: Assignee: Andras Bokor > Better option for curl in hadoop-auth-examples > -- > > Key: HADOOP-9157 > URL: https://issues.apache.org/jira/browse/HADOOP-9157 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation > Environment: Ubuntu 12.04 >Reporter: Jingguo Yao >Assignee: Andras Bokor >Priority: Minor > Original Estimate: 1h > Remaining Estimate: 1h > > In http://hadoop.apache.org/docs/current/hadoop-auth/Examples.html, there is > "curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt > http://localhost:8080/hadoop-auth-examples/kerberos/who;. A better way is to > use "-u :" instead of "-u foo". With the use of the former option, curl will > not prompt for a password. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable
[ https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332284#comment-16332284 ] genericqa commented on HADOOP-14951: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 32s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 5s{color} | {color:orange} root: The patch generated 6 new + 99 unchanged - 17 fixed = 105 total (was 116) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common-project_hadoop-kms generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 55s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}195m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-14951 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896141/HADOOP-14951-7.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 664ec18ab183 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk
[jira] [Commented] (HADOOP-15014) KMS should log the IP address of the clients
[ https://issues.apache.org/jira/browse/HADOOP-15014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332178#comment-16332178 ] genericqa commented on HADOOP-15014: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 59s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15014 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896138/HADOOP-15015-3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6c8821aa1d3f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c5bbd64 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14000/testReport/ | | Max. process+thread count | 301 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14000/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > KMS should log the IP address of the clients > > > Key: HADOOP-15014 > URL:
[jira] [Commented] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.
[ https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332122#comment-16332122 ] genericqa commented on HADOOP-15151: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 36s{color} | {color:green} root generated 0 new + 1240 unchanged - 1 fixed = 1240 total (was 1241) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 125 unchanged - 1 fixed = 125 total (was 126) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 44s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 84m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15151 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906791/HADOOP-15151.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3261f7e8683a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9e4f52d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13999/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13999/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/13999/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 1360 (vs.
[jira] [Commented] (HADOOP-15014) KMS should log the IP address of the clients
[ https://issues.apache.org/jira/browse/HADOOP-15014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332092#comment-16332092 ] Zsombor Gegesy commented on HADOOP-15014: - What do you think? Any comments or objections? > KMS should log the IP address of the clients > > > Key: HADOOP-15014 > URL: https://issues.apache.org/jira/browse/HADOOP-15014 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Affects Versions: 2.8.1 >Reporter: Zsombor Gegesy >Priority: Major > Labels: kms, log > Attachments: HADOOP-15015-3.patch > > > Currently KMSMDCFilter only captures the http request url and method, but not the > remote address of the client. > Storing this information in a thread-local variable would help external > authorizer plugins to do more thorough checks. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
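For readers unfamiliar with the thread-local pattern the issue proposes, here is a minimal dependency-free sketch. The names (`RequestContext`, `set`/`get`/`clear`) are hypothetical and not the actual KMSMDCFilter API; they only illustrate the mechanism.

```java
// Illustrative sketch of the thread-local request-context pattern proposed
// above; names are hypothetical, not the real KMSMDCFilter implementation.
final class RequestContext {
    private static final ThreadLocal<String> REMOTE_ADDR = new ThreadLocal<>();

    private RequestContext() {}

    // A servlet filter would call this at the start of each request,
    // e.g. with the value of ServletRequest#getRemoteAddr().
    static void set(String remoteAddr) {
        REMOTE_ADDR.set(remoteAddr);
    }

    // Code running later on the same request thread (e.g. an authorizer
    // plugin) can read the address without plumbing it through call sites.
    static String get() {
        return REMOTE_ADDR.get();
    }

    // Must run in a finally block so pooled worker threads do not leak
    // state from one request into the next.
    static void clear() {
        REMOTE_ADDR.remove();
    }
}
```

A filter's `doFilter` would typically call `set(...)`, invoke the rest of the chain inside a `try`, and call `clear()` in the `finally`.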
[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable
[ https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332091#comment-16332091 ] Zsombor Gegesy commented on HADOOP-14951: - Any comments or objections? > KMSACL implementation is not configurable > - > > Key: HADOOP-14951 > URL: https://issues.apache.org/jira/browse/HADOOP-14951 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Zsombor Gegesy >Priority: Major > Labels: key-management, kms > Attachments: HADOOP-14951-7.patch > > > Currently, it is not possible to customize KMS's key management if the KMSACLs > behaviour is not enough. If an external key management solution is used, it > would need a higher-level API where it can decide whether the given operation is > allowed or not. > To achieve this, one solution would be to introduce a new interface > which could be implemented by KMSACLs - and also by other KMS implementations - and a new > configuration point could be added where the actual interface implementation > could be specified. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
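To make the proposal concrete, a pluggable authorization hook of the kind described could look roughly like this. This is a hypothetical sketch; the actual interface and method names in the HADOOP-14951 patch may differ.

```java
// Hypothetical shape of the pluggable KMS authorization interface the issue
// proposes; not the actual HADOOP-14951 API.
interface KeyAccessAuthorizer {
    // Illustrative subset of KMS operations an authorizer would rule on.
    enum KeyOpType { READ, GENERATE_EEK, DECRYPT_EEK, MANAGEMENT }

    // Decide whether the given user may perform the operation on the key.
    boolean isAllowed(String user, String keyName, KeyOpType op);
}

// The ACL-based default and any external key-management integration would
// both implement the interface; a configuration key would name the class to
// load. A trivial allow-all implementation for illustration:
class AllowAllAuthorizer implements KeyAccessAuthorizer {
    @Override
    public boolean isAllowed(String user, String keyName, KeyOpType op) {
        return true;
    }
}
```

The configuration point would then load the configured class reflectively, the way Hadoop resolves other pluggable implementations.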
[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16332035#comment-16332035 ] Elek, Marton commented on HADOOP-14163: --- # Yes. It works exactly the same. I just attached the source because the generated html files should be committed to a different branch according to this proposal. But I uploaded them here: [https://github.com/elek/hadoop-site-proposal/tree/gh-pages] # I found a very detailed comparison here: [https://opensource.com/article/17/5/hugo-vs-jekyll] I used jekyll a few years ago, so I am not so familiar with the latest version, but my biggest problem was exactly what [~aw] noted: it was difficult to maintain the right version of ruby with the right versions of the gems when my blog was updated only a few times per year (even though I used rvm). I think the two biggest advantages of Hugo are: ## It is just a single binary for every platform, with no dependencies. It is very easy to run even if the release manager is not familiar with ruby/python/npm or any other language/environment. Download and execute, nothing more. ## I also like how Hugo structures the content. Everything under the content/release directory (eg. this file: [https://raw.githubusercontent.com/elek/hadoop-site-proposal/master/content/release/2.8.1.md]) could be handled in a different way, not just as a news entry. This is a very easy way to handle releases: the release script/release manager only has to generate a simple md file with a header, nothing more. More scriptable, easier to maintain. # Good question, I have never thought about it. It doesn't seem to be supported (eg. [https://github.com/gohugoio/hugo/issues/1430]), but hopefully it could be handled by an external tool. (Or during a pre-commit yetus check?) 
> Refactor existing hadoop site to use more usable static website generator > - > > Key: HADOOP-14163 > URL: https://issues.apache.org/jira/browse/HADOOP-14163 > Project: Hadoop Common > Issue Type: Improvement > Components: site >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, > HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, > HADOOP-14163.006.patch, HADOOP-14163.007.patch, HADOOP-14163.008.tar.gz, > hadoop-site.tar.gz, hadop-site-rendered.tar.gz > > > From the dev mailing list: > "Publishing can be attacked via a mix of scripting and revamping the darned > website. Forrest is pretty bad compared to the newer static site generators > out there (e.g. need to write XML instead of markdown, it's hard to review a > staging site because of all the absolute links, hard to customize, did I > mention XML?), and the look and feel of the site is from the 00s. We don't > actually have that much site content, so it should be possible to migrate to > a new system." > This issue is find a solution to migrate the old site to a new modern static > site generator using a more contemprary theme. > Goals: > * existing links should work (or at least redirected) > * It should be easy to add more content required by a release automatically > (most probably with creating separated markdown files) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
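The per-release markdown files described in the comment above would carry a small front-matter header that Hugo uses to render both the news entry and the release listing. An illustrative entry (the field names and values here are hypothetical; see the linked content/release/2.8.1.md in the proposal repository for the real layout):

```markdown
---
title: "Apache Hadoop X.Y.Z released"
date: "2017-01-01"
---

Short release announcement text. The release manager only has to
generate this one file; the static site generator renders the rest.
```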
[jira] [Updated] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.
[ https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Grigori Rybkine updated HADOOP-15151: - Status: Patch Available (was: Open) Re-submit the latest patch to have the tests re-run. > MapFile.fix creates a wrong index file in case of block-compressed data file. > - > > Key: HADOOP-15151 > URL: https://issues.apache.org/jira/browse/HADOOP-15151 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Grigori Rybkine >Priority: Major > Labels: patch > Attachments: HADOOP-15151.001.patch, HADOOP-15151.002.patch, > HADOOP-15151.003.patch, HADOOP-15151.004.patch, HADOOP-15151.004.patch > > > Index file created with MapFile.fix for an ordered block-compressed data file > does not allow to find values for keys existing in the data file via the > MapFile.get method. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
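For context on why a wrong index breaks lookups: MapFile.get binary-searches the sampled index for the closest key at or before the target, seeks to the recorded file position, and scans forward. If fix records a position past the key's actual location (as this issue reports for block-compressed data files, where several keys share one block position), the forward scan starts beyond the key and the lookup misses. A dependency-free model of that lookup contract (plain Java, deliberately not the Hadoop MapFile API):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Minimal model of the MapFile lookup contract: the index maps a sampled
// subset of keys to the offset where scanning may begin; get() seeks to the
// closest preceding indexed key and scans forward. An index entry that
// points past its key (as a faulty fix() could produce) makes the key
// unreachable, which is the failure mode this issue describes.
class IndexedLookup {
    private final List<String> records;             // sorted keys in the "data file"
    private final TreeMap<String, Integer> index;   // sampled key -> scan start offset

    IndexedLookup(List<String> records, TreeMap<String, Integer> index) {
        this.records = records;
        this.index = index;
    }

    String get(String key) {
        Map.Entry<String, Integer> e = index.floorEntry(key);
        int start = (e == null) ? 0 : e.getValue();
        for (int i = start; i < records.size(); i++) {
            int cmp = records.get(i).compareTo(key);
            if (cmp == 0) {
                return records.get(i);              // found the key
            }
            if (cmp > 0) {
                return null;                        // scanned past it: a miss
            }
        }
        return null;
    }
}
```

With a correct index entry the scan reaches the key; with an entry whose offset points one record too far, the same lookup returns null even though the key exists.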
[jira] [Updated] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.
[ https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Grigori Rybkine updated HADOOP-15151: - Attachment: HADOOP-15151.004.patch > MapFile.fix creates a wrong index file in case of block-compressed data file. > - > > Key: HADOOP-15151 > URL: https://issues.apache.org/jira/browse/HADOOP-15151 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Grigori Rybkine >Priority: Major > Labels: patch > Attachments: HADOOP-15151.001.patch, HADOOP-15151.002.patch, > HADOOP-15151.003.patch, HADOOP-15151.004.patch, HADOOP-15151.004.patch > > > Index file created with MapFile.fix for an ordered block-compressed data file > does not allow to find values for keys existing in the data file via the > MapFile.get method. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.
[ https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Grigori Rybkine updated HADOOP-15151: - Status: Open (was: Patch Available) > MapFile.fix creates a wrong index file in case of block-compressed data file. > - > > Key: HADOOP-15151 > URL: https://issues.apache.org/jira/browse/HADOOP-15151 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Grigori Rybkine >Priority: Major > Labels: patch > Attachments: HADOOP-15151.001.patch, HADOOP-15151.002.patch, > HADOOP-15151.003.patch, HADOOP-15151.004.patch, HADOOP-15151.004.patch > > > Index file created with MapFile.fix for an ordered block-compressed data file > does not allow to find values for keys existing in the data file via the > MapFile.get method. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15039) Move SemaphoredDelegatingExecutor to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331974#comment-16331974 ] SammiChen commented on HADOOP-15039: Hi [~uncleGen], HADOOP-15027 depends on this JIRA. I would like to commit this JIRA's content into "branch-3", "branch-3.0", "branch-2" and "branch-2.9". I thought about directly cherry-picking the commit to the other branches, but then I saw there are code changes in {{S3AFileSystem}}. So would you please rebase the patch against these 4 branches, rerun the involved S3 test cases, then upload 4 new patches, following the patch name pattern "-..patch"? > Move SemaphoredDelegatingExecutor to hadoop-common > -- > > Key: HADOOP-15039 > URL: https://issues.apache.org/jira/browse/HADOOP-15039 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/oss, fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu >Priority: Minor > Fix For: 3.1.0 > > Attachments: HADOOP-15039.001.patch, HADOOP-15039.002.patch, > HADOOP-15039.003.patch, HADOOP-15039.004.patch, HADOOP-15039.005.patch > > > Detailed discussions in HADOOP-14999 and HADOOP-15027. > Share {{SemaphoredDelegatingExecutor}} and move it to {{hadoop-common}}. > cc [~ste...@apache.org] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
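For readers who have not seen the class being moved: a semaphored delegating executor wraps another executor and uses a semaphore to bound how many tasks may be queued or running at once, blocking submitters beyond that limit. A simplified sketch of the idea (this mirrors the concept, not the exact API of Hadoop's `SemaphoredDelegatingExecutor`):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;

// Simplified sketch of a semaphore-bounded delegating executor: submit()
// blocks once `bound` tasks are queued or running, giving back-pressure to
// producers instead of letting the work queue grow without limit.
class BoundedExecutor {
    private final ExecutorService delegate;
    private final Semaphore permits;

    BoundedExecutor(ExecutorService delegate, int bound) {
        this.delegate = delegate;
        this.permits = new Semaphore(bound);
    }

    void submit(Runnable task) throws InterruptedException {
        permits.acquire();                 // blocks when the bound is reached
        try {
            delegate.execute(() -> {
                try {
                    task.run();
                } finally {
                    permits.release();     // free a slot when the task finishes
                }
            });
        } catch (RejectedExecutionException e) {
            permits.release();             // don't leak the permit on rejection
            throw e;
        }
    }
}
```

This pattern is useful wherever unbounded task submission (e.g. many concurrent block uploads) could exhaust memory, which is why sharing one implementation across the fs/s3 and fs/oss modules makes sense.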
[jira] [Commented] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance
[ https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331960#comment-16331960 ] SammiChen commented on HADOOP-15027: Thanks [~jlowe] for the notification. I will help to commit HADOOP-15039 to other branches first. > AliyunOSS: Support multi-thread pre-read to improve sequential read from > Hadoop to Aliyun OSS performance > - > > Key: HADOOP-15027 > URL: https://issues.apache.org/jira/browse/HADOOP-15027 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Fix For: 3.1.0 > > Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, > HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, > HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, > HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, > HADOOP-15027.012.patch, HADOOP-15027.013.patch, HADOOP-15027.014.patch > > > Currently, AliyunOSSInputStream uses single thread to read data from > AliyunOSS, so we can do some refactoring by using multi-thread pre-read to > improve read performance. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
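The multi-thread pre-read the issue describes amounts to splitting the bytes ahead of the reader into fixed-size ranges, fetching the ranges concurrently, and handing them back in order. A dependency-free sketch of that pattern (names are hypothetical; the real AliyunOSSInputStream logic also handles seeks, buffer reuse, and partial reads):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of range-based prefetching: split [0, len) into fixed-size ranges,
// fetch each range on a pool thread, then reassemble the results in order.
// RangeReader stands in for a ranged GET against an object store.
class RangePrefetcher {
    interface RangeReader {
        byte[] read(long offset, int length);
    }

    static byte[] readAll(RangeReader reader, long len, int rangeSize, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<byte[]>> parts = new ArrayList<>();
            for (long off = 0; off < len; off += rangeSize) {
                final long o = off;
                final int n = (int) Math.min(rangeSize, len - off);
                // Each range is fetched concurrently on the pool.
                parts.add(pool.submit(() -> reader.read(o, n)));
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (Future<byte[]> f : parts) {
                out.write(f.get());          // consume ranges in stream order
            }
            return out.toByteArray();
        } finally {
            pool.shutdown();
        }
    }
}
```

Sequential-read throughput improves because the next ranges are already in flight while the caller consumes the current one; the cost is extra memory for the in-flight buffers, which is where a bound like the one in HADOOP-15039 becomes relevant.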