[jira] [Created] (HADOOP-7434) there is no error message displayed while using the command "daemonlog -setlevel" with illegal level

2011-06-29 Thread JIRA
there is no error message displayed while using the command "daemonlog 
-setlevel" with illegal level


 Key: HADOOP-7434
 URL: https://issues.apache.org/jira/browse/HADOOP-7434
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: 严金双


When the command is used with a nonexistent "level" such as "nomsg", no error 
message is displayed, and the level "DEBUG" is set by default.
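
This matches the behavior of Log4j's Level.toLevel(String), which falls back to 
DEBUG when it cannot parse the name. A minimal validation sketch (illustrative 
only, not the committed fix; levelName stands for the user-supplied argument):
{code:java}
import org.apache.log4j.Level;

// Level.toLevel() returns DEBUG for unrecognized names, so round-trip the
// parsed level back to a string and compare it with the input to detect an
// illegal level instead of silently applying DEBUG.
Level level = Level.toLevel(levelName);
if (!level.toString().equalsIgnoreCase(levelName)) {
  System.err.println("Bad level: " + levelName);
}
{code}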





[jira] [Resolved] (HADOOP-16168) mvn clean site is not compiling in trunk

2019-04-03 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-16168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-16168.
--
Resolution: Not A Problem

> mvn clean site is not compiling in trunk
> 
>
> Key: HADOOP-16168
> URL: https://issues.apache.org/jira/browse/HADOOP-16168
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: Adam Antal
>Assignee: Fengnan Li
>Priority: Blocker
>
> This is a follow-up Jira for HDFS-14118.
> {{mvn clean site}} is not compiling on trunk with the following error message:
> {noformat}
> [INFO] -
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[23,29]
>  cannot find symbol
>   symbol:   class MockDomainNameResolver
>   location: package org.apache.hadoop.net
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[149,11]
>  cannot find symbol
>   symbol:   variable MockDomainNameResolver
>   location: class 
> org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[150,11]
>  cannot find symbol
>   symbol:   variable MockDomainNameResolver
>   location: class 
> org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[162,9]
>  cannot find symbol
>   symbol:   class MockDomainNameResolver
>   location: class 
> org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[261,9]
>  cannot find symbol
>   symbol:   variable MockDomainNameResolver
>   location: class 
> org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[263,9]
>  cannot find symbol
>   symbol:   variable MockDomainNameResolver
>   location: class 
> org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[288,19]
>  cannot find symbol
>   symbol:   variable MockDomainNameResolver
>   location: class 
> org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
> [ERROR] 
> /Users/adamantal/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[292,19]
>  cannot find symbol
>   symbol:   variable MockDomainNameResolver
>   location: class 
> org.apache.hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider
> {noformat}
> {{MockDomainNameResolver}} is in 
> {{hadoop-common-project/hadoop-common/src/test}} while 
> {{TestConfiguredFailoverProxyProvider}} is in 
> {{hadoop-hdfs-project/hadoop-hdfs-client/src/test}}.
>  Though we have the following dependency:
> {noformat}
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-common</artifactId>
>   <scope>test</scope>
>   <type>test-jar</type>
> </dependency>
> {noformat}
> probably that's not enough.






[jira] [Created] (HADOOP-16342) BUILDING.txt is unclear on where to run Eclipse script

2019-06-01 Thread JIRA
Erkin Alp Güney created HADOOP-16342:


 Summary: BUILDING.txt is unclear on where to run Eclipse script
 Key: HADOOP-16342
 URL: https://issues.apache.org/jira/browse/HADOOP-16342
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.2.0
 Environment: Ubuntu 19.04 (real hardware)
Reporter: Erkin Alp Güney


{quote}
 Then, generate eclipse project files.
 $ mvn eclipse:eclipse -DskipTests
At last, import to eclipse by specifying the root directory of the project via
[File] > [Import] > [Existing Projects into Workspace].

{quote}

The documentation is unclear about which directory to run mvn eclipse:eclipse in. I 
tried running it in the Hadoop root directory, but that resulted in Eclipse import 
failures, using either the Java or the M2E project import. However, the project 
builds successfully in the supplied Docker environment.






[jira] [Created] (HADOOP-16577) Build fails as can't retrieve websocket-servlet

2019-09-15 Thread Jira
Erkin Alp Güney created HADOOP-16577:


 Summary: Build fails as can't retrieve websocket-servlet
 Key: HADOOP-16577
 URL: https://issues.apache.org/jira/browse/HADOOP-16577
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Erkin Alp Güney


I encountered this error when building Hadoop:
Downloading: 
https://repository.apache.org/content/repositories/snapshots/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar
Sep 15, 2019 7:54:39 AM 
org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
execute
INFO: I/O exception 
(org.apache.maven.wagon.providers.http.httpclient.NoHttpResponseException) 
caught when processing request to {s}->https://repository.apache.org:443: The 
target server failed to respond
Sep 15, 2019 7:54:39 AM 
org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
execute







[jira] [Resolved] (HADOOP-16577) Build fails as can't retrieve websocket-servlet

2019-09-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erkin Alp Güney resolved HADOOP-16577.
--
Resolution: Done

> Build fails as can't retrieve websocket-servlet
> ---
>
> Key: HADOOP-16577
> URL: https://issues.apache.org/jira/browse/HADOOP-16577
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Erkin Alp Güney
>Priority: Blocker
>  Labels: dependencies
>
> I encountered this error when building Hadoop:
> Downloading: 
> https://repository.apache.org/content/repositories/snapshots/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute
> INFO: I/O exception 
> (org.apache.maven.wagon.providers.http.httpclient.NoHttpResponseException) 
> caught when processing request to {s}->https://repository.apache.org:443: The 
> target server failed to respond
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute






[jira] [Created] (HADOOP-16592) Build fails as can't retrieve websocket-server-impl

2019-09-21 Thread Jira
Erkin Alp Güney created HADOOP-16592:


 Summary: Build fails as can't retrieve websocket-server-impl
 Key: HADOOP-16592
 URL: https://issues.apache.org/jira/browse/HADOOP-16592
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Erkin Alp Güney


[ERROR] Failed to execute goal on project hadoop-yarn-server-nodemanager: Could 
not resolve dependencies for project 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT: The 
following artifacts could not be resolved: 
org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.27.v20190418, 
org.eclipse.jetty:jetty-annotations:jar:9.3.27.v20190418, 
org.eclipse.jetty:jetty-plus:jar:9.3.27.v20190418, 
org.eclipse.jetty:jetty-jndi:jar:9.3.27.v20190418, 
org.eclipse.jetty.websocket:javax-websocket-client-impl:jar:9.3.27.v20190418, 
org.eclipse.jetty.websocket:websocket-client:jar:9.3.27.v20190418, 
org.eclipse.jetty.websocket:websocket-server:jar:9.3.27.v20190418, 
org.eclipse.jetty.websocket:websocket-common:jar:9.3.27.v20190418, 
org.eclipse.jetty.websocket:websocket-api:jar:9.3.27.v20190418, 
org.eclipse.jetty.websocket:websocket-servlet:jar:9.3.27.v20190418: Could not 
transfer artifact 
org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.3.27.v20190418 
from/to apache.snapshots.https 
(https://repository.apache.org/content/repositories/snapshots): 
repository.apache.org: Unknown host repository.apache.org -> [Help 1]

Again, the same as HADOOP-16577, but this time with websocket-server-impl.







[jira] [Reopened] (HADOOP-16577) Build fails as can't retrieve websocket-servlet

2019-09-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erkin Alp Güney reopened HADOOP-16577:
--

It appeared again.

> Build fails as can't retrieve websocket-servlet
> ---
>
> Key: HADOOP-16577
> URL: https://issues.apache.org/jira/browse/HADOOP-16577
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Erkin Alp Güney
>Priority: Blocker
>  Labels: build, dependencies
>
> I encountered this error when building Hadoop:
> Downloading: 
> https://repository.apache.org/content/repositories/snapshots/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute
> INFO: I/O exception 
> (org.apache.maven.wagon.providers.http.httpclient.NoHttpResponseException) 
> caught when processing request to {s}->https://repository.apache.org:443: The 
> target server failed to respond
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute






[jira] [Created] (HADOOP-16763) Make Curator 4 run in soft-compatibility mode with ZooKeeper 3.4

2019-12-13 Thread Jira
Íñigo Goiri created HADOOP-16763:


 Summary: Make Curator 4 run in soft-compatibility mode with 
ZooKeeper 3.4
 Key: HADOOP-16763
 URL: https://issues.apache.org/jira/browse/HADOOP-16763
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Íñigo Goiri


HADOOP-16579 updated Curator to 4.2 and ZooKeeper to 3.5.
This change affects the client libraries used by the components.
However, the ensemble in most deployments is still 3.4 (the default in Ubuntu, 
for example).
To support such deployments, Curator provides a soft-compatibility mode, 
described in http://curator.apache.org/zk-compatibility.html
We should enable this soft-compatibility mode.
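
A minimal sketch of what that page describes (the version numbers below are 
assumptions, not a committed patch): exclude the ZooKeeper 3.5 client that 
Curator pulls in transitively and declare the 3.4 client directly, so Curator 
falls back to its 3.4-compatible behavior.
{noformat}
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.14</version>
</dependency>
{noformat}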






[jira] [Resolved] (HADOOP-16765) Fix curator dependencies for gradle projects using hadoop-minicluster

2019-12-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-16765.
--
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix curator dependencies for gradle projects using hadoop-minicluster
> -
>
> Key: HADOOP-16765
> URL: https://issues.apache.org/jira/browse/HADOOP-16765
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mate Szalay-Beko
>Assignee: Mate Szalay-Beko
>Priority: Major
> Fix For: 3.3.0
>
>
> *The Problem:*
> The Kudu unit tests that use the `MiniDFSCluster` are broken due to a guava 
> dependency issue in the `hadoop-minicluster` module.
> {code:java}
> java.lang.NoSuchMethodError: 
> com.google.common.util.concurrent.Futures.addCallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureCallback;)V
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker.addResultCachingCallback(ThrottledAsyncChecker.java:167)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker.schedule(ThrottledAsyncChecker.java:156)
> at 
> org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:166)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2794)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2709)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1669)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:911)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
> at 
> org.apache.kudu.backup.HDFSTestKuduBackupLister.setUp(TestKuduBackupLister.scala:216)
> {code}
> The issue is that even though Guava was excluded from the 
> {{curator-client}} module, the {{curator-framework}} module defined just 
> below it does not exclude Guava:
> [https://github.com/apache/hadoop/blob/fc97034b29243a0509633849de55aa734859/hadoop-project/pom.xml#L1391-L1414]
> This causes Guava 27.0.1-jre to be pulled in instead of Guava 11.0.2 defined 
> by Hadoop:
> {noformat}
> +--- org.apache.hadoop:hadoop-minicluster:3.1.1.7.1.0.0-SNAPSHOT
> |+--- org.apache.hadoop:hadoop-common:3.1.1.7.1.0.0-SNAPSHOT
> ||+--- org.apache.hadoop:hadoop-annotations:3.1.1.7.1.0.0-SNAPSHOT
> ||+--- com.google.guava:guava:11.0.2 -> 27.0.1-jre
> {noformat}
> {noformat}
> +--- org.apache.curator:curator-framework:4.2.0
> |\--- org.apache.curator:curator-client:4.2.0
> | +--- org.apache.zookeeper:zookeeper:3.5.4-beta -> 
> 3.5.5.7.1.0.0-SNAPSHOT (*)
> | +--- com.google.guava:guava:27.0.1-jre (*)
> | \--- org.slf4j:slf4j-api:1.7.25{noformat}
>  
> *The root cause:*
> I was able to reproduce this issue with some dummy projects, see 
> [https://github.com/symat/transitive-dependency-test]
> It seems that Gradle behaves differently from Maven in this case. A Maven 
> user will not see this problem, because the exclude rules defined for 
> {{curator-client}} are enforced even when {{curator-client}} comes in 
> transitively through {{curator-framework}}. Using hadoop-minicluster from a 
> Gradle project, however, leads to this problem (unless extra excludes / 
> dependencies get defined in the Gradle project).
> *The proposed solution* is to add the exclude rules for all Curator 
> dependencies, preventing other Gradle projects that use Hadoop from breaking 
> because of the Curator upgrade.
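
A sketch of the proposed shape of the fix in hadoop-project/pom.xml 
(illustrative; the version property name is assumed, and the real patch would 
cover every Curator artifact):
{noformat}
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>${curator.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{noformat}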






[jira] [Created] (HADOOP-16819) Possible inconsistent state of AbstractDelegationTokenSecretManager

2020-01-21 Thread Jira
Hankó Gergely created HADOOP-16819:
--

 Summary: Possible inconsistent state of 
AbstractDelegationTokenSecretManager
 Key: HADOOP-16819
 URL: https://issues.apache.org/jira/browse/HADOOP-16819
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Hankó Gergely


 
[AbstractDelegationTokenSecretManager.updateCurrentKey|https://github.com/apache/hadoop/blob/581072a8f04f7568d3560f105fd1988d3acc9e54/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java#L360] 
increments the current key id and creates the new delegation key in two 
distinct synchronized blocks.

This means that other threads can see the class in an *inconsistent state, 
where the key for the current key id doesn't exist (yet)*.

For example the following method sometimes returns null when the token remover 
thread is between the two synchronized blocks:
{noformat}
@Override
public DelegationKey getCurrentKey() {
  return getDelegationKey(getCurrentKeyId());
}{noformat}
 

Also, it is possible that updateCurrentKey is called from multiple threads at 
the same time, so *distinct keys can be generated with the same key id*.
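
A minimal sketch of the direction a fix could take (simplified, hypothetical 
names; not the actual patch): perform the id increment and the key registration 
atomically under one lock, which also serializes concurrent callers.
{code:java}
// Simplified model of the secret manager's state.
private final Map<Integer, DelegationKey> allKeys = new HashMap<>();
private int currentKeyId;

private synchronized void updateCurrentKey() {
  currentKeyId++;                                    // bump the id...
  DelegationKey newKey = createNewKey(currentKeyId); // ...create its key (hypothetical helper)
  allKeys.put(currentKeyId, newKey);                 // ...and publish both together
}
{code}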

 

This issue is suspected to be the cause of the intermittent failure of 
[TestLlapSignerImpl.testSigning|https://github.com/apache/hive/blob/3c0705eaf5121c7b61f2dbe9db9545c3926f26f1/llap-server/src/test/org/apache/hadoop/hive/llap/security/TestLlapSignerImpl.java#L195] 
- HIVE-22621.






[jira] [Created] (HADOOP-16873) Upgrade to Apache ZooKeeper 3.5.7

2020-02-19 Thread Jira
Norbert Kalmár created HADOOP-16873:
---

 Summary: Upgrade to Apache ZooKeeper 3.5.7
 Key: HADOOP-16873
 URL: https://issues.apache.org/jira/browse/HADOOP-16873
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Norbert Kalmár









[jira] [Created] (HADOOP-16929) ARM Compile Scripts only work for AArch64, not AArch32

2020-03-20 Thread Jira
Maximilian Böther created HADOOP-16929:
--

 Summary: ARM Compile Scripts only work for AArch64, not AArch32
 Key: HADOOP-16929
 URL: https://issues.apache.org/jira/browse/HADOOP-16929
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Maximilian Böther


The Dockerfile added in HADOOP-16797 only works for AArch64, not AArch32. The 
architecture detection likewise only recognizes 64-bit ARM.
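
An illustrative sketch of the gap (assumed detection logic, not the actual 
script): uname -m reports aarch64 on 64-bit ARM but values like armv7l on 
32-bit ARM, so matching only the former misses AArch32.
{noformat}
case "$(uname -m)" in
  aarch64)       is_arm=true ;;   # 64-bit ARM: detected today
  armv7l|armv6l) is_arm=true ;;   # 32-bit ARM: currently missed
  *)             is_arm=false ;;
esac
{noformat}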






[jira] [Resolved] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-16951.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting that the expansion of the 
> arrays be 1.5x instead of 2x per expansion. I pulled this idea from OpenJDK.
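
For reference, the OpenJDK policy being borrowed (ArrayList grows by half of 
the old capacity):
{code:java}
// 1.5x growth instead of doubling.
int newCapacity = oldCapacity + (oldCapacity >> 1);
{code}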






[jira] [Resolved] (HADOOP-17009) Embrace Immutability of Java Collections

2020-06-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17009.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Embrace Immutability of Java Collections
> 
>
> Key: HADOOP-17009
> URL: https://issues.apache.org/jira/browse/HADOOP-17009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>







[jira] [Created] (HADOOP-17378) java.lang.NoClassDefFoundError: org/apache/hadoop/tracing/SpanReceiverHost

2020-11-12 Thread Jira
정진영 created HADOOP-17378:


 Summary: java.lang.NoClassDefFoundError: 
org/apache/hadoop/tracing/SpanReceiverHost
 Key: HADOOP-17378
 URL: https://issues.apache.org/jira/browse/HADOOP-17378
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 3.0.0
 Environment: these are the libraries I am using:

compile 'org.apache.maven.plugins:maven-shade-plugin:2.4.3'
compile 'org.apache.hadoop:hadoop-common:3.0.0'
compile 'org.apache.flume.flume-ng-sinks:flume-hdfs-sink:1.9.0'
compile 'org.apache.flume.flume-ng-sources:flume-kafka-source:1.9.0'
compile 'org.apache.hbase:hbase-client:2.1.0'
compile 'org.apache.flume.flume-ng-sinks:flume-ng-hbase-sink:1.9.0'
compile 'redis.clients:jedis:2.9.0'
compile 'org.apache.kafka:kafka-clients:0.10.2.1'
compile 'org.apache.hadoop:hadoop-client:3.0.0'
compile 'org.apache.hive:hive-exec:2.1.1'
compile 'org.mariadb.jdbc:mariadb-java-client:1.6.1'
compileOnly 'org.apache.flume:flume-ng-core:1.9.0'

compile group: 'org.apache.kafka', name: 'kafka_2.10', version:'0.10.2.1'
compile group: 'org.apache.kudu', name: 'kudu-client', version:'1.10.0'
compile group: 'org.apache.flume', name: 'flume-ng-configuration', 
version:'1.9.0'
compile group: 'org.apache.yetus', name: 'audience-annotations', version:'0.4.0'
compile group: 'org.apache.avro', name: 'avro', version:'1.8.2'
compile group: 'org.slf4j', name: 'slf4j-api', version:'1.7.25'
compile group: 'org.postgresql', name: 'postgresql', version:'42.1.4.jre7'
compile group: 'org.apache.maven.plugins', name: 'maven-resources-plugin', 
version:'2.6'
testCompile group: 'junit', name: 'junit', version: '4.12'
compile group: 'org.apache.parquet', name: 'parquet-hadoop-bundle', version: 
'1.9.0'
compile group: 'org.apache.hive', name: 'hive-jdbc', version: '2.1.1'
Reporter: 정진영


Hi. I need help.

I am now trying to migrate from Cloudera Flume to Apache Flume.

(which means no use of XXX_cdh5.16 any more)

During the test, when I stored data in HDFS, I faced the problem below.

{noformat}
java.lang.NoClassDefFoundError: org/apache/hadoop/tracing/SpanReceiverHost
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2816)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:238)
    at com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:230)
    at com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:675)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:46)
    at com.poscoict.posframe.bdp.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:672)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.tracing.SpanReceiverHost
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
{noformat}

 

I don't know how to fix it.

Please help me.

Thank you!






[jira] [Resolved] (HADOOP-17465) Update Dockerfile to use Focal

2021-01-25 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17465.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Update Dockerfile to use Focal
> --
>
> Key: HADOOP-17465
> URL: https://issues.apache.org/jira/browse/HADOOP-17465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.4.0
> Environment: Ubuntu
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> Referring to the current Dockerfile, it seems that the toolchain provided by 
> Ubuntu Bionic doesn't keep up with the versions of the libraries that are 
> needed. Thus, we are separately installing Boost and other libraries (gcc-9 
> and CMake 3.19, needed by HDFS-15740). None of this would be necessary if we 
> upgraded to Focal, as these library versions are part of the Focal toolchain 
> itself.






[jira] [Created] (HADOOP-17500) S3A doesn't calculate Content-MD5 on uploads

2021-01-27 Thread Jira
Pedro Tôrres created HADOOP-17500:
-

 Summary: S3A doesn't calculate Content-MD5 on uploads
 Key: HADOOP-17500
 URL: https://issues.apache.org/jira/browse/HADOOP-17500
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Pedro Tôrres


Hadoop doesn't specify the Content-MD5 of an object when uploading it to an S3 
bucket. This prevents uploads to buckets with Object Lock, which require the 
Content-MD5 to be specified.

 
{code:java}
com.amazonaws.services.s3.model.AmazonS3Exception: Content-MD5 HTTP header is 
required for Put Part requests with Object Lock parameters (Service: Amazon S3; 
Status Code: 400; Error Code: InvalidRequest; Request ID: ; S3 
Extended Request ID: 
; 
Proxy: null), S3 Extended Request ID: 

at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1372)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5248)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5195)
at 
com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3768)
at 
com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3753)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:2230)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$uploadPart$8(WriteOperationHelper.java:558)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
... 15 more{code}
 

Similar to https://issues.apache.org/jira/browse/JCLOUDS-1549

Related to https://issues.apache.org/jira/browse/HADOOP-13076






[jira] [Created] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-14 Thread JIRA
Íñigo Goiri created HADOOP-14773:


 Summary: Extend ZKCuratorManager API
 Key: HADOOP-14773
 URL: https://issues.apache.org/jira/browse/HADOOP-14773
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri









[jira] [Created] (HADOOP-14921) Conflicts when starting daemons with the same name

2017-10-02 Thread JIRA
Íñigo Goiri created HADOOP-14921:


 Summary: Conflicts when starting daemons with the same name
 Key: HADOOP-14921
 URL: https://issues.apache.org/jira/browse/HADOOP-14921
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Íñigo Goiri









[jira] [Created] (HADOOP-14939) Update project release notes with HDFS-10467 for 3.0.0

2017-10-09 Thread JIRA
Íñigo Goiri created HADOOP-14939:


 Summary: Update project release notes with HDFS-10467 for 3.0.0
 Key: HADOOP-14939
 URL: https://issues.apache.org/jira/browse/HADOOP-14939
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri









[jira] [Created] (HADOOP-15308) TestConfiguration fails on Windows because of paths

2018-03-12 Thread JIRA
Íñigo Goiri created HADOOP-15308:


 Summary: TestConfiguration fails on Windows because of paths
 Key: HADOOP-15308
 URL: https://issues.apache.org/jira/browse/HADOOP-15308
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Íñigo Goiri


We are seeing multiple failures with:
{code}
Illegal character in authority at index 7: 
file://C:\_work\10\s\hadoop-common-project\hadoop-common\.\test-config-uri-TestConfiguration.xml
{code}
We do not seem to handle the colon in the drive path properly.
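
A minimal sketch of one way to build a legal URI from a Windows path 
(illustrative, not the committed fix; the path is hypothetical): go through 
java.io.File, which escapes the drive colon instead of splicing the raw path 
onto a file:// prefix.
{code:java}
// On Windows, new File(path).toURI() yields file:/C:/... with no bogus authority.
java.net.URI uri = new java.io.File(
    "C:\\work\\test-config-uri-TestConfiguration.xml").toURI();
System.out.println(uri); // file:/C:/work/test-config-uri-TestConfiguration.xml
{code}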






[jira] [Created] (HADOOP-15394) Backport PowerShell NodeFencer HADOOP-14309 to branch-2

2018-04-17 Thread JIRA
Íñigo Goiri created HADOOP-15394:


 Summary: Backport PowerShell NodeFencer HADOOP-14309 to branch-2
 Key: HADOOP-15394
 URL: https://issues.apache.org/jira/browse/HADOOP-15394
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


HADOOP-14309 added PowerShell NodeFencer.
We should backport it to branch-2 and branch-2.9.






[jira] [Created] (HADOOP-15412) Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

2018-04-25 Thread JIRA
Pablo San José created HADOOP-15412:
---

 Summary: Hadoop KMS with HDFS keystore: No FileSystem for scheme 
"hdfs"
 Key: HADOOP-15412
 URL: https://issues.apache.org/jira/browse/HADOOP-15412
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.9.0, 2.7.2
 Environment: RHEL 7.3

Hadoop 2.7.2 and 2.9.0

 
Reporter: Pablo San José


I have been trying to configure the Hadoop KMS to use HDFS as the key provider, 
but this functionality seems to be failing.

I followed the Hadoop docs and added the following property to my kms-site.xml:
{code:java}
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://h...@nn1.example.com/kms/test.jceks</value>
  <description>URI of the backing KeyProvider for the KMS.</description>
</property>
{code}
That path exists in HDFS, and I expect the KMS to create the file test.jceks 
for its keystore. However, the KMS failed to start due to this error:
{code:java}
ERROR: Hadoop KMS could not be started

REASON: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"

Stacktrace:
---
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
    at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
    at org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
    at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
    at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
    at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
    at org.apache.catalina.core.StandardService.start(StandardService.java:525)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
{code}
 
From what I could understand, it seems that this error occurs because there is 
no FileSystem implementation registered for HDFS. I have looked up this error, 
but it always refers to missing hdfs-client jars after an upgrade, which does 
not apply here (it is a fresh installation). I have tested with Hadoop 2.7.2 
and 2.9.0.
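
A commonly suggested check for this symptom (an assumption here, not a verified 
KMS fix) is to make the HDFS implementation explicit in core-site.xml, in 
addition to ensuring the hadoop-hdfs jar is on the KMS classpath:
{code:java}
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
{code}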

Thank you in advance.






[jira] [Created] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread JIRA
Íñigo Goiri created HADOOP-15465:


 Summary: Use native java code for symlinks
 Key: HADOOP-15465
 URL: https://issues.apache.org/jira/browse/HADOOP-15465
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Íñigo Goiri


Hadoop uses the shell to create symbolic links. Now that Hadoop relies on Java 
7+, we can deprecate all the shell code and rely on the Java APIs.
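
A minimal sketch of the Java API in question (the paths are hypothetical):
{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;

// Creates /tmp/link -> /tmp/target without forking a shell.
Files.createSymbolicLink(Paths.get("/tmp/link"), Paths.get("/tmp/target"));
{code}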






[jira] [Created] (HADOOP-15532) TestBasicDiskValidator fails with NoSuchFileException

2018-06-12 Thread JIRA
Íñigo Goiri created HADOOP-15532:


 Summary: TestBasicDiskValidator fails with NoSuchFileException
 Key: HADOOP-15532
 URL: https://issues.apache.org/jira/browse/HADOOP-15532
 Project: Hadoop Common
  Issue Type: Test
Reporter: Íñigo Goiri
Assignee: Giovanni Matteo Fumarola


TestBasicDiskValidator is failing with NoSuchFileException once in a while.
The daily Linux build shows the error 
[here|https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/809/testReport/org.apache.hadoop.util/TestBasicDiskValidator/].






[jira] [Created] (HADOOP-15578) GridmixTestUtils uses the wrong staging directory in windows

2018-07-02 Thread JIRA
Íñigo Goiri created HADOOP-15578:


 Summary: GridmixTestUtils uses the wrong staging directory in 
windows
 Key: HADOOP-15578
 URL: https://issues.apache.org/jira/browse/HADOOP-15578
 Project: Hadoop Common
  Issue Type: Test
Reporter: Íñigo Goiri


{{GridmixTestUtils#createHomeAndStagingDirectory}} gets the staging area from 
the configuration key {{mapreduce.jobtracker.staging.root.dir}}. This variable 
depends on {{hadoop.tmp.dir}} which in Windows is set to a local Windows 
folder. When the test tries to create the path in HDFS it gets an error because 
the path is not compliant.
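
A sketch of a possible test-side workaround (hypothetical setup code, not a 
committed fix): pin the staging root to an absolute, HDFS-friendly path so it 
no longer inherits the local Windows {{hadoop.tmp.dir}}.
{code:java}
Configuration conf = new Configuration();
// Use a plain absolute path with no drive letter or backslashes.
conf.set("mapreduce.jobtracker.staging.root.dir", "/user");
{code}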






[jira] [Created] (HADOOP-15629) Missing trimming in readlink in case of protocol

2018-07-24 Thread JIRA
Íñigo Goiri created HADOOP-15629:


 Summary: Missing trimming in readlink in case of protocol
 Key: HADOOP-15629
 URL: https://issues.apache.org/jira/browse/HADOOP-15629
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Íñigo Goiri


When extending the unit tests for the links, we surfaced errors in readLink.






[jira] [Created] (HADOOP-15774) Discovery of HA servers

2018-09-19 Thread JIRA
Íñigo Goiri created HADOOP-15774:


 Summary: Discovery of HA servers
 Key: HADOOP-15774
 URL: https://issues.apache.org/jira/browse/HADOOP-15774
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Íñigo Goiri


Currently, Hadoop relies on configuration files to specify the servers.
This requires maintaining these configuration files and propagating the changes.
Hadoop should have a framework to provide discovery.
For example, in HDFS, we could define the Namenodes in a shared location and 
the DNs would use the framework to find the Namenodes.






[jira] [Created] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-04 Thread JIRA
Íñigo Goiri created HADOOP-15821:


 Summary: Move Hadoop YARN Registry to Hadoop Registry
 Key: HADOOP-15821
 URL: https://issues.apache.org/jira/browse/HADOOP-15821
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Íñigo Goiri


Currently, Hadoop YARN Registry lives in YARN. However, it can be used by other 
parts of the project (e.g., HDFS). In addition, it does not have any real 
dependency on YARN.

We should move it into commons and make it Hadoop Registry.






[jira] [Reopened] (HADOOP-15836) Review of AccessControlList

2018-10-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reopened HADOOP-15836:
--

> Review of AccessControlList
> ---
>
> Key: HADOOP-15836
> URL: https://issues.apache.org/jira/browse/HADOOP-15836
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15836.1.patch
>
>
> * Improve unit tests (expected / actual were backwards)
> * Unit test expected elements to be in order but the class's returned 
> Collections were unordered
> * Formatting cleanup
> * Removed superfluous white space
> * Remove use of LinkedList
> * Removed superfluous code
> * Use {{unmodifiable}} Collections where JavaDoc states that caller must not 
> manipulate the data structure






[jira] [Created] (HADOOP-15885) Add base64 (urlString) support to DTUtil

2018-10-29 Thread JIRA
Íñigo Goiri created HADOOP-15885:


 Summary: Add base64 (urlString) support to DTUtil
 Key: HADOOP-15885
 URL: https://issues.apache.org/jira/browse/HADOOP-15885
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Íñigo Goiri


HADOOP-12563 added a utility to manage Delegation Token files. Currently, it 
supports the Java and Protobuf formats. However, when interacting with WebHDFS, 
we use base64. In addition, when printing a token, we also print the base64 
value. We should be able to import base64 tokens into the utility.
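
A minimal sketch of the decode half (illustrative wiring; base64String stands 
for the urlString form that is printed today): Token already round-trips this 
encoding.
{code:java}
import org.apache.hadoop.security.token.Token;

// decodeFromUrlString() parses the base64/urlString form used by WebHDFS.
Token<?> token = new Token<>();
token.decodeFromUrlString(base64String);
{code}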






[jira] [Created] (HADOOP-15903) Allow HttpServer2 to discover resources in /static when symlinks are used

2018-11-05 Thread JIRA
Íñigo Goiri created HADOOP-15903:


 Summary: Allow HttpServer2 to discover resources in /static when 
symlinks are used
 Key: HADOOP-15903
 URL: https://issues.apache.org/jira/browse/HADOOP-15903
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


Currently, we instantiate /static with the default settings.
However, if this folder is behind a symbolic link, it won't load.
This is exactly the same issue and solution as described in GEODE-5445.
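
A sketch of the Jetty-side fix that GEODE-5445 describes (assuming here that 
the /static context is a Jetty ContextHandler named staticContext):
{code:java}
import org.eclipse.jetty.server.handler.AllowSymLinkAliasChecker;

// Permit resources reached through symbolic links to resolve.
staticContext.addAliasCheck(new AllowSymLinkAliasChecker());
{code}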






[jira] [Reopened] (HADOOP-15852) Refactor QuotaUsage

2018-12-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reopened HADOOP-15852:
--

> Refactor QuotaUsage
> ---
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15852.1.patch, HADOOP-15852.2.patch
>
>
> My new mission is to remove instances of {{StringBuffer}} in favor of 
> {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashcode/equals
> * User StringBuilder instead of StringBuffer






[jira] [Created] (HADOOP-8466) hadoop-client POM incorrectly excludes avro

2012-06-01 Thread JIRA
Bruno Mahé created HADOOP-8466:
--

 Summary: hadoop-client POM incorrectly excludes avro
 Key: HADOOP-8466
 URL: https://issues.apache.org/jira/browse/HADOOP-8466
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Bruno Mahé
Assignee: Bruno Mahé


Avro is used during serializer initialization, so it must be on the classpath.





[jira] [Created] (HADOOP-8715) Pipes cannot use Hbase as input

2012-08-21 Thread JIRA
Håvard Wahl Kongsgård created HADOOP-8715:
-

 Summary: Pipes cannot use Hbase as input
 Key: HADOOP-8715
 URL: https://issues.apache.org/jira/browse/HADOOP-8715
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
 Environment: Ubuntu 10.04, Sun Java 1.6.0_31, Cloudera Hbase 
0.90.6-cdh3u5
Reporter: Håvard Wahl Kongsgård


Using Pipes with HBase as input does not seem to work. I don't get any errors 
and the job is never added to the JobTracker.

hadoop pipes -conf myconf_job.conf -input name_of_table -output /tmp/out


<property>
  <name>mapred.input.format.class</name>
  <value>org.apache.hadoop.hbase.mapred.TableInputFormat</value>
</property>

<property>
  <name>hadoop.pipes.java.recordreader</name>
  <value>true</value>
</property>

<property>
  <name>hbase.mapred.tablecolumns</name>
  <value>col_fam:name</value>
</property>






[jira] [Resolved] (HADOOP-8715) Pipes cannot use Hbase as input

2012-08-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Håvard Wahl Kongsgård resolved HADOOP-8715.
---

Resolution: Fixed

This was an issue with ZooKeeper.

> Pipes cannot use Hbase as input
> ---
>
> Key: HADOOP-8715
> URL: https://issues.apache.org/jira/browse/HADOOP-8715
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04, Sun Java 1.6.0_31, Cloudera Hbase 
> 0.90.6-cdh3u5
>Reporter: Håvard Wahl Kongsgård
>
> Using Pipes with HBase as input does not seem to work. I don't get any errors 
> and the job is never added to the JobTracker.
> hadoop pipes -conf myconf_job.conf -input name_of_table -output /tmp/out
> <property>
>   <name>mapred.input.format.class</name>
>   <value>org.apache.hadoop.hbase.mapred.TableInputFormat</value>
> </property>
> <property>
>   <name>hadoop.pipes.java.recordreader</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.mapred.tablecolumns</name>
>   <value>col_fam:name</value>
> </property>





[jira] [Resolved] (HADOOP-3420) Recover the deprecated mapred.tasktracker.tasks.maximum

2012-12-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-3420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iván de Prado resolved HADOOP-3420.
---

Resolution: Won't Fix

Seems too old and not very relevant now.

> Recover the deprecated mapred.tasktracker.tasks.maximum
> ---
>
> Key: HADOOP-3420
> URL: https://issues.apache.org/jira/browse/HADOOP-3420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.16.0, 0.16.1, 0.16.2, 0.16.3, 0.16.4
>Reporter: Iván de Prado
>
> https://issues.apache.org/jira/browse/HADOOP-1274 replaced the configuration 
> attribute mapred.tasktracker.tasks.maximum with 
> mapred.tasktracker.map.tasks.maximum and 
> mapred.tasktracker.reduce.tasks.maximum because it sometimes makes sense to 
> have more mappers than reducers assigned to each node.
> But deprecating mapred.tasktracker.tasks.maximum could be an issue in some 
> situations. For example, when more than one job is running, reduce tasks + 
> map tasks eat too many resources. To avoid such cases, an upper limit on 
> tasks is needed. So I propose to keep the configuration parameter 
> mapred.tasktracker.tasks.maximum as a total limit on tasks. It is compatible 
> with mapred.tasktracker.map.tasks.maximum and 
> mapred.tasktracker.reduce.tasks.maximum.
> As an example:
> I have an 8-core, 4 GB, 4-node cluster. I want to limit the number of tasks 
> per node to 8. 8 tasks per node would use almost 100% CPU and 4 GB of the 
> memory. I have set:
>   mapred.tasktracker.map.tasks.maximum -> 8
>   mapred.tasktracker.reduce.tasks.maximum -> 8 
> 1) When running only one job at a time, it works smoothly: 8 tasks average 
> per node, no swapping on the nodes, almost 4 GB of memory usage and 100% of 
> CPU usage. 
> 2) When running more than one job at the same time, it works really badly: 16 
> tasks average per node, 8 GB of memory usage (4 GB swapped), and a lot of 
> system CPU usage.
> So, I think it makes sense to restore the old attribute 
> mapred.tasktracker.tasks.maximum, making it compatible with the new ones.
> Task trackers would then not:
>  - run more than mapred.tasktracker.tasks.maximum tasks per node,
>  - run more than mapred.tasktracker.map.tasks.maximum mappers per node, 
>  - run more than mapred.tasktracker.reduce.tasks.maximum reducers per node.



[jira] [Created] (HADOOP-9462) ulimit output is displayed on stdout each time I start a daemon.

2013-04-06 Thread JIRA
Bruno Mahé created HADOOP-9462:
--

 Summary: ulimit output is displayed on stdout each time I start a 
daemon.
 Key: HADOOP-9462
 URL: https://issues.apache.org/jira/browse/HADOOP-9462
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bruno Mahé


{noformat}
[root@master ~]# /etc/init.d/hadoop-hdfs-namenode start
Starting Hadoop namenode:  [  OK  ]
starting namenode, logging to 
/var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.out
ulimit -a for user hdfs
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 30731
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 32768
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 65536
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
{noformat}

This output should not be displayed on daemon startup.



[jira] [Resolved] (HADOOP-9462) ulimit output is displayed on stdout each time I start a daemon.

2013-04-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Mahé resolved HADOOP-9462.


Resolution: Duplicate

Duplicate of HADOOP-9379.

> ulimit output is displayed on stdout each time I start a daemon.
> 
>
> Key: HADOOP-9462
> URL: https://issues.apache.org/jira/browse/HADOOP-9462
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bruno Mahé
>
> {noformat}
> [root@master ~]# /etc/init.d/hadoop-hdfs-namenode start
> Starting Hadoop namenode:  [  OK  ]
> starting namenode, logging to 
> /var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.out
> ulimit -a for user hdfs
> core file size  (blocks, -c) 0
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 30731
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 32768
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 10240
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 65536
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
> {noformat}
> This output should not be displayed on daemon startup.




[jira] [Created] (HADOOP-9506) Backport HADOOP-8329 to branch-1.0 (Build fails with Java 7)

2013-04-25 Thread JIRA
Роман Донченко created HADOOP-9506:
--

 Summary: Backport HADOOP-8329 to branch-1.0 (Build fails with Java 
7)
 Key: HADOOP-9506
 URL: https://issues.apache.org/jira/browse/HADOOP-9506
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.4
Reporter: Роман Донченко
Priority: Minor


Please backport HADOOP-8329 to branch-1.0. The change doesn't affect behavior, 
and it would be nice to be able to build the stable version with Java 7.

hadoop-8329-b1.txt from the original issue applies with no changes.




[jira] [Created] (HADOOP-9706) Provide Hadoop Karaf support

2013-07-07 Thread JIRA
Jean-Baptiste Onofré created HADOOP-9706:


 Summary: Provide Hadoop Karaf support
 Key: HADOOP-9706
 URL: https://issues.apache.org/jira/browse/HADOOP-9706
 Project: Hadoop Common
  Issue Type: Task
  Components: tools
Reporter: Jean-Baptiste Onofré
 Fix For: 3.0.0
 Attachments: HADOOP-9706.patch

To follow up on the discussion about OSGi, and in order to move forward, I 
propose the following hadoop-karaf bundle.



[jira] [Created] (HADOOP-9743) TestStaticMapping test fails

2013-07-18 Thread JIRA
Jean-Baptiste Onofré created HADOOP-9743:


 Summary: TestStaticMapping test fails
 Key: HADOOP-9743
 URL: https://issues.apache.org/jira/browse/HADOOP-9743
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Jean-Baptiste Onofré


  - testCachingRelaysResolveQueries(org.apache.hadoop.net.TestStaticMapping): 
Expected two entries in the map Mapping: cached switch mapping relaying to 
static mapping with single switch = false(..)
  - testCachingCachesNegativeEntries(org.apache.hadoop.net.TestStaticMapping): 
Expected two entries in the map Mapping: cached switch mapping relaying to 
static mapping with single switch = false(..)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9744) TestNetUtils test fails

2013-07-18 Thread JIRA
Jean-Baptiste Onofré created HADOOP-9744:


 Summary: TestNetUtils test fails
 Key: HADOOP-9744
 URL: https://issues.apache.org/jira/browse/HADOOP-9744
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Jean-Baptiste Onofré


- testNormalizeHostName(org.apache.hadoop.net.TestNetUtils): 
expected:<[67.215.65.132]> but was:<[UnknownHost123]>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9742) TestTableMapping test fails

2013-07-18 Thread JIRA
Jean-Baptiste Onofré created HADOOP-9742:


 Summary: TestTableMapping test fails
 Key: HADOOP-9742
 URL: https://issues.apache.org/jira/browse/HADOOP-9742
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Jean-Baptiste Onofré


  - testResolve(org.apache.hadoop.net.TestTableMapping): expected: 
but was:
  - testTableCaching(org.apache.hadoop.net.TestTableMapping): 
expected: but was:
  - testClearingCachedMappings(org.apache.hadoop.net.TestTableMapping): 
expected: but was:

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9745) TestZKFailoverController test fails

2013-07-18 Thread JIRA
Jean-Baptiste Onofré created HADOOP-9745:


 Summary: TestZKFailoverController test fails
 Key: HADOOP-9745
 URL: https://issues.apache.org/jira/browse/HADOOP-9745
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Jean-Baptiste Onofré


  - 
testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
 Did not fail to graceful failover when target failed to become active!
  - 
testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
 expected:<1> but was:<0>
  - 
testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
 Failover should have failed when old node wont fence

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9745) TestZKFailoverController test fails

2013-07-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré resolved HADOOP-9745.
--

Resolution: Fixed

> TestZKFailoverController test fails
> ---
>
> Key: HADOOP-9745
> URL: https://issues.apache.org/jira/browse/HADOOP-9745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - 
> testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
>  Did not fail to graceful failover when target failed to become active!
>   - 
> testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
>  expected:<1> but was:<0>
>   - 
> testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
>  Failover should have failed when old node wont fence

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9744) TestNetUtils test fails

2013-07-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré resolved HADOOP-9744.
--

Resolution: Fixed

> TestNetUtils test fails
> ---
>
> Key: HADOOP-9744
> URL: https://issues.apache.org/jira/browse/HADOOP-9744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
> - testNormalizeHostName(org.apache.hadoop.net.TestNetUtils): 
> expected:<[67.215.65.132]> but was:<[UnknownHost123]>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9743) TestStaticMapping test fails

2013-07-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré resolved HADOOP-9743.
--

Resolution: Fixed

> TestStaticMapping test fails
> 
>
> Key: HADOOP-9743
> URL: https://issues.apache.org/jira/browse/HADOOP-9743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - testCachingRelaysResolveQueries(org.apache.hadoop.net.TestStaticMapping): 
> Expected two entries in the map Mapping: cached switch mapping relaying to 
> static mapping with single switch = false(..)
>   - 
> testCachingCachesNegativeEntries(org.apache.hadoop.net.TestStaticMapping): 
> Expected two entries in the map Mapping: cached switch mapping relaying to 
> static mapping with single switch = false(..)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9742) TestTableMapping test fails

2013-07-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré resolved HADOOP-9742.
--

Resolution: Fixed

> TestTableMapping test fails
> ---
>
> Key: HADOOP-9742
> URL: https://issues.apache.org/jira/browse/HADOOP-9742
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - testResolve(org.apache.hadoop.net.TestTableMapping): expected: 
> but was:
>   - testTableCaching(org.apache.hadoop.net.TestTableMapping): 
> expected: but was:
>   - testClearingCachedMappings(org.apache.hadoop.net.TestTableMapping): 
> expected: but was:

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9882) Trunk doesn't compile

2013-08-17 Thread JIRA
Jean-Baptiste Onofré created HADOOP-9882:


 Summary: Trunk doesn't compile
 Key: HADOOP-9882
 URL: https://issues.apache.org/jira/browse/HADOOP-9882
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Jean-Baptiste Onofré


Currently, trunk does not compile (in hadoop-common-project/hadoop-common 
module):

[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) 
on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
protoc version is 'libprotoc 2.4.1', expected version is '2.5.0' -> [Help 1]

I'm going to fix that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9903) hadoop version throws ClassNotFoundException

2013-08-26 Thread JIRA
André Kelpe created HADOOP-9903:
---

 Summary: hadoop version throws ClassNotFoundException
 Key: HADOOP-9903
 URL: https://issues.apache.org/jira/browse/HADOOP-9903
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.1.0-beta
Reporter: André Kelpe


I downloaded the new hadoop 2.1.0 beta, tried to run hadoop version, and I got 
this:

$ $HADOOP_HOME/bin/hadoop version
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/util/VersionInfo
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.VersionInfo
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.hadoop.util.VersionInfo.  Program 
will exit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9904) discoverability of release tarballs

2013-08-26 Thread JIRA
André Kelpe created HADOOP-9904:
---

 Summary: discoverability of release tarballs
 Key: HADOOP-9904
 URL: https://issues.apache.org/jira/browse/HADOOP-9904
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: André Kelpe


As part of the cascading (http://cascading.org) project, we maintain a 
vagrant-based hadoop setup 
(https://github.com/Cascading/vagrant-cascading-hadoop-cluster). This setup 
downloads the hadoop tarball from a nearby mirror when the user starts it up. 
The problem we are having is that there is no easy way for a script to 
determine the file name of the current stable release tarball to download.

When 1.2.1 became the new stable release, the former stable release 1.1.2 was 
removed from the mirrors. This broke our setup, which would not have happened 
if it were discoverable what the latest tarball is.

There is a /stable directory which contains the latest release, but the file 
names change with every release. If there were a link in the directory called 
hadoop-stable.tar.gz, or a simple text file stating what the latest stable 
release is, our setup would continue working even when a new version of 
hadoop is released.
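
For illustration, if such a marker existed (say a stable/LATEST.txt file naming 
the current tarball; the file name and mirror URL below are purely 
hypothetical), a setup script could resolve the release in a few lines:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ResolveStable {
  public static void main(String[] args) throws Exception {
    // Hypothetical one-line marker naming the current stable tarball.
    String base = "https://mirror.example.org/hadoop/core/stable/";
    try (BufferedReader in = new BufferedReader(new InputStreamReader(
        new URL(base + "LATEST.txt").openStream(), StandardCharsets.UTF_8))) {
      String tarball = in.readLine().trim(); // e.g. "hadoop-1.2.1.tar.gz"
      System.out.println("Download: " + base + tarball);
    }
  }
}
{code}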

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9903) hadoop version throws ClassNotFoundException

2013-08-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Kelpe resolved HADOOP-9903.
-

Resolution: Invalid

> hadoop version throws ClassNotFoundException
> 
>
> Key: HADOOP-9903
> URL: https://issues.apache.org/jira/browse/HADOOP-9903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>
> I downloaded the new hadoop 2.1.0 beta, tried to run hadoop version, and I 
> got this:
> $ $HADOOP_HOME/bin/hadoop version
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/util/VersionInfo
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.util.VersionInfo
> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> Could not find the main class: org.apache.hadoop.util.VersionInfo.  Program 
> will exit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9910) proxy server start and stop documentation wrong

2013-08-28 Thread JIRA
André Kelpe created HADOOP-9910:
---

 Summary: proxy server start and stop documentation wrong
 Key: HADOOP-9910
 URL: https://issues.apache.org/jira/browse/HADOOP-9910
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe


I was trying to run a distributed cluster and found two small problems in the 
documentation on how to start and stop the proxy server. The attached patch 
fixes them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9911) hadoop 2.1.0-beta tarball only contains 32bit native libraries

2013-08-28 Thread JIRA
André Kelpe created HADOOP-9911:
---

 Summary: hadoop 2.1.0-beta tarball only contains 32bit native 
libraries
 Key: HADOOP-9911
 URL: https://issues.apache.org/jira/browse/HADOOP-9911
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe


I am setting up a cluster on 64-bit Linux and noticed that the tarball only 
ships with 32-bit native libraries:

$ pwd
/opt/hadoop-2.1.0-beta/lib/native


$ ls -al
total 2376
drwxr-xr-x 2 67974 users   4096 Aug 15 20:59 .
drwxr-xr-x 3 67974 users   4096 Aug 15 20:59 ..
-rw-r--r-- 1 67974 users 598578 Aug 15 20:59 libhadoop.a
-rw-r--r-- 1 67974 users 764772 Aug 15 20:59 libhadooppipes.a
lrwxrwxrwx 1 67974 users 18 Aug 15 20:59 libhadoop.so -> libhadoop.so.1.0.0
-rwxr-xr-x 1 67974 users 407568 Aug 15 20:59 libhadoop.so.1.0.0
-rw-r--r-- 1 67974 users 304632 Aug 15 20:59 libhadooputils.a
-rw-r--r-- 1 67974 users 184414 Aug 15 20:59 libhdfs.a
lrwxrwxrwx 1 67974 users 16 Aug 15 20:59 libhdfs.so -> libhdfs.so.0.0.0
-rwxr-xr-x 1 67974 users 149556 Aug 15 20:59 libhdfs.so.0.0.0


$ file *
libhadoop.a:current ar archive
libhadooppipes.a:   current ar archive
libhadoop.so:   symbolic link to `libhadoop.so.1.0.0'
libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 
(SYSV), dynamically linked, 
BuildID[sha1]=0x527e88ec3e92a95389839bd3fc9d7dbdebf654d6, not stripped
libhadooputils.a:   current ar archive
libhdfs.a:  current ar archive
libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
libhdfs.so.0.0.0:   ELF 32-bit LSB shared object, Intel 80386, version 1 
(SYSV), dynamically linked, 
BuildID[sha1]=0xddb2abae9272f584edbe22c76b43fcda9436f685, not stripped

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9914) nodes overview should use FQDNs

2013-08-28 Thread JIRA
André Kelpe created HADOOP-9914:
---

 Summary: nodes overview should use FQDNs
 Key: HADOOP-9914
 URL: https://issues.apache.org/jira/browse/HADOOP-9914
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe


I am running a hadoop cluster in a bunch of VMs on my local machine and I am 
using avahi/zeroconf for local name resolution (this is to avoid having to 
fiddle with my /etc/hosts file).

The resourcemanager has an overview page with links to all the nodemanager 
web-interfaces. The links do not work with zeroconf because they do not 
include the domain part: zeroconf names look like "hadoop1.local", but the 
web-interface uses "hadoop1", which will not resolve.

In hadoop 1.x all web-interfaces used FQDNs, so avahi/zeroconf name 
resolution was no problem. The same should be possible in hadoop 2.x.

I am still getting started with hadoop 2.x, so there might be other parts 
with the same problem, but I am not yet aware of any. If I find more of 
these, I will update this bug.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9914) nodes overview should use FQDNs

2013-08-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Kelpe resolved HADOOP-9914.
-

Resolution: Invalid

> nodes overview should use FQDNs
> ---
>
> Key: HADOOP-9914
> URL: https://issues.apache.org/jira/browse/HADOOP-9914
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>
> I am running a hadoop cluster in a bunch of VMs on my local machine and I am 
> using avahi/zeroconf for local name resolution (this is to avoid having to 
> fiddle with my /etc/hosts file). 
> The resourcemanager has an overview page with links to all the nodemanager 
> web-interfaces. The links do not work with zeroconf because they do not 
> include the domain part: zeroconf names look like "hadoop1.local", but the 
> web-interface uses "hadoop1", which will not resolve.
> In hadoop 1.x all web-interfaces used FQDNs, so avahi/zeroconf name 
> resolution was no problem. The same should be 
> possible in hadoop 2.x.
> I am still getting started with hadoop 2.x, so there might be other parts 
> with the same problem, but I am not yet aware of any. If I find more of 
> these, I will update this bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread JIRA
André Kelpe created HADOOP-9917:
---

 Summary: cryptic warning when killing a job running with yarn
 Key: HADOOP-9917
 URL: https://issues.apache.org/jira/browse/HADOOP-9917
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
Priority: Minor


When I kill a job like this:

hadoop job -kill 

I get a cryptic warning which I don't really understand:

DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

I fail to parse this, and I believe many others will too. Please make this 
warning clearer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9923) yarn staging area on hdfs has wrong permission and is created by the wrong user

2013-08-30 Thread JIRA
André Kelpe created HADOOP-9923:
---

 Summary: yarn staging area on hdfs has wrong permission and is 
created by the wrong user
 Key: HADOOP-9923
 URL: https://issues.apache.org/jira/browse/HADOOP-9923
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe


I am setting up a cluster with hadoop 2.1-beta that consists of the following 
components:

master: runs the namenode, the resourcemanager and the job history server.
hadoop1, hadoop2, hadoop3: run datanodes and node managers

I created 3 system users for the different components, like explained in the 
docs:

hdfs: runs all things hdfs
yarn: runs all things yarn
mapred: runs the job history server

If I now boot up the cluster, I cannot submit jobs, since the yarn staging 
area permissions do not allow it.

What I found out is that the job-history-server creates the staging directory 
while starting. This causes it to be owned by the wrong user (mapred) and to 
have the wrong permission (770). The docs are not really clear on whether I am 
supposed to start hdfs first, create the staging area by hand, and then start 
the job-history-server, or whether this is supposed to happen automatically.

In any case, either the docs should be updated or the job-history-server 
should not create the directory.
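
For reference, pre-creating the staging area by hand would look roughly like 
this (a sketch against the FileSystem API, run as the hdfs superuser; the 
/tmp/hadoop-yarn/staging path and the 1777-style permissions are the 
conventional values, not taken from this report):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: create the staging dir world-writable with the sticky bit set,
// so every user can submit jobs but only owners can delete their files.
FileSystem fs = FileSystem.get(new Configuration());
Path staging = new Path("/tmp/hadoop-yarn/staging");
fs.mkdirs(staging, new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL, true));
{code}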



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9928) provide md5, sha1 and .asc files, that are usable

2013-09-03 Thread JIRA
André Kelpe created HADOOP-9928:
---

 Summary: provide md5, sha1 and .asc files, that are usable
 Key: HADOOP-9928
 URL: https://issues.apache.org/jira/browse/HADOOP-9928
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.1, 2.1.0-beta
Reporter: André Kelpe
Priority: Critical


I am trying to verify the checksums of the tarballs I downloaded, and the way 
they are produced is all but useful.

Almost all other open source projects I know create .md5, .sha1 and .asc 
files that can easily be used with tools like md5sum, sha1sum or gpg.

The hadoop downloads provide an .mds file, for which there seems to be no 
documentation on how to use it.

Here are some challenges with that format:

0. All sorts of checksums are in the same file.
1. The MD5 sum is all upper case (all of them are, to be precise).
2. The MD5 sum contains whitespace.

For the three above I came up with this interesting construct:

md5sum --check <(grep "MD5 = " some-file.mds | sed -e "s/MD5 = //g;s/ //g" | 
awk -F: '{print tolower($2), "", $1}')

That would work, if it weren't for the next problem:

3. The file format wraps lines after 80 chars (see here for instance: 
http://apache.openmirror.de/hadoop/core/beta/hadoop-2.1.0-beta-src.tar.gz.mds).

I really do not see how this format is useful to anyone.

4. On top of all that, there are no gpg signatures. How can I verify that the 
mirror I downloaded the tarball from was not compromised?

It would be very helpful if you could provide checksums and signatures the 
same way other projects do, or at least explain how to use the .mds files 
with standard unix tools.
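
For comparison, a conventional .md5 file takes only a few lines to verify; a 
sketch (assuming the usual single-line `<lowercase-hex>  <filename>` format 
that md5sum emits):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class VerifyMd5 {
  public static void main(String[] args) throws Exception {
    // Digest the tarball and render it as lowercase hex.
    byte[] data = Files.readAllBytes(Paths.get("hadoop-2.1.0-beta.tar.gz"));
    StringBuilder actual = new StringBuilder();
    for (byte b : MessageDigest.getInstance("MD5").digest(data)) {
      actual.append(String.format("%02x", b));
    }
    // The first whitespace-separated token of the .md5 file is the digest.
    String expected = Files.readAllLines(
        Paths.get("hadoop-2.1.0-beta.tar.gz.md5")).get(0).split("\\s+")[0];
    System.out.println(actual.toString().equalsIgnoreCase(expected)
        ? "OK" : "MISMATCH");
  }
}
{code}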

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9935) set junit dependency to test scope

2013-09-05 Thread JIRA
André Kelpe created HADOOP-9935:
---

 Summary: set junit dependency to test scope
 Key: HADOOP-9935
 URL: https://issues.apache.org/jira/browse/HADOOP-9935
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: André Kelpe
 Attachments: HADOOP-9935.patch

junit should be set to scope test in hadoop-mapreduce-project and 
hadoop-yarn-project. This patch fixes the problem that hadoop always pulls in 
its own version of junit and that junit is even included in the tarballs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-10122) change yarn minicluster base directory via system property

2013-11-21 Thread JIRA
André Kelpe created HADOOP-10122:


 Summary: change yarn minicluster base directory via system property
 Key: HADOOP-10122
 URL: https://issues.apache.org/jira/browse/HADOOP-10122
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 2.2.0
Reporter: André Kelpe
Priority: Minor
 Attachments: HADOOP-10122.patch

The yarn minicluster used for testing uses the "target" directory by default. 
We use gradle to build our projects and would like the minicluster to use a 
different directory. This patch makes that possible via the 
yarn.minicluster.directory system property.
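
A minimal sketch of how a test harness could consume the new property (the 
property name comes from this patch; the fallback value and error handling 
are assumptions):

{code:java}
import java.io.File;
import java.io.IOException;

// Sketch: resolve the minicluster base directory, defaulting to the
// maven-style "target" directory when the property is unset.
File baseDir = new File(
    System.getProperty("yarn.minicluster.directory", "target"));
if (!baseDir.exists() && !baseDir.mkdirs()) {
  throw new IOException("Cannot create minicluster dir: " + baseDir);
}
{code}

A gradle build would then simply pass 
-Dyarn.minicluster.directory=build/minicluster to its test JVMs.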



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-17549) Allow to set s3 object metadata

2021-02-25 Thread Jira
Timothée Peignier created HADOOP-17549:
--

 Summary: Allow to set s3 object metadata
 Key: HADOOP-17549
 URL: https://issues.apache.org/jira/browse/HADOOP-17549
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, fs/s3
Reporter: Timothée Peignier


It's currently impossible to set custom S3 Object Metadata such as 
`ContentType`, `ContentEncoding`, `ContentDisposition`, and a few others.

Being able to do so would greatly increase the usefulness of S3 storage.
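
For context, this is the metadata the AWS SDK itself exposes; a hedged sketch 
of setting it with the v1 SDK directly (bucket, key, and file paths are 
placeholders), which is exactly the hook S3A currently lacks:

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;

// Sketch: custom object metadata via the raw SDK, bypassing S3A.
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/csv");
metadata.setContentEncoding("gzip");
metadata.setContentDisposition("attachment; filename=\"report.csv\"");

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
s3.putObject(new PutObjectRequest("my-bucket", "reports/report.csv",
    new File("/tmp/report.csv")).withMetadata(metadata));
{code}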



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17599) Remove NULL checks before instanceof

2021-03-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17599.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Remove NULL checks before instanceof
> 
>
> Key: HADOOP-17599
> URL: https://issues.apache.org/jira/browse/HADOOP-17599
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Jiajun Jiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17599.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The NULL checks before instanceof checks should be removed, since instanceof 
> already evaluates to false for null references.
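
As a reminder of why the null guard is redundant, a generic illustration (not 
code from the patch):

{code:java}
Object value = null;

// instanceof is already null-safe: this prints "false" and never throws.
System.out.println(value instanceof String);

// Hence the guarded form
if (value != null && value instanceof String) { /* ... */ }
// simplifies to
if (value instanceof String) { /* ... */ }
{code}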



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17675) LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException

2021-05-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

István Fajth resolved HADOOP-17675.
---
Resolution: Fixed

> LdapGroupsMapping$LdapSslSocketFactory ClassNotFoundException
> -
>
> Key: HADOOP-17675
> URL: https://issues.apache.org/jira/browse/HADOOP-17675
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.2
>Reporter: Tamas Mate
>Assignee: István Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: stacktrace.txt
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Using LdapGroupsMapping with SSL enabled causes a ClassNotFoundException when 
> it is called from native threads, as Apache Impala does.
> When a thread is attached to the VM, the current thread's context classloader 
> is null, so when jndi internally tries to use the current thread's context 
> classloader to load the socket factory implementation, the 
> Class.forName(String, boolean, ClassLoader) method gets null as the loader 
> and falls back to the bootstrap classloader.
> Meanwhile the LdapGroupsMapping class, and the SslSocketFactory defined in 
> it, is loaded by the application classloader from its classpath.
> As the bootstrap classloader does not have hadoop-common on its classpath, 
> a native thread that tries to use/load the LdapGroupsMapping class can't, 
> because the bootstrap loader can't load anything from hadoop-common. The 
> correct solution seems to be to set the current thread's context classloader 
> to the classloader of the LdapGroupsMapping class before initializing the 
> jndi internals, and then reset it to the original value afterwards; with 
> that, the behaviour of everything else stays the same, but this failure is 
> avoided.
> Attached the complete stacktrace to this Jira.
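
A minimal sketch of the save-and-restore pattern described above (a generic 
illustration of the approach, not the actual patch):

{code:java}
import org.apache.hadoop.security.LdapGroupsMapping;

// Sketch: pin the context classloader around JNDI initialization so the
// socket factory resolves even when the thread was attached natively.
ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
  Thread.currentThread().setContextClassLoader(
      LdapGroupsMapping.class.getClassLoader());
  // ... create the InitialDirContext / LDAP connection here ...
} finally {
  Thread.currentThread().setContextClassLoader(original);
}
{code}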



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17678) Dockerfile for building on Centos 7

2021-05-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17678.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Dockerfile for building on Centos 7
> ---
>
> Key: HADOOP-17678
> URL: https://issues.apache.org/jira/browse/HADOOP-17678
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Need to add a Dockerfile for building on Centos 7 since some folks in the 
> community are using it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17693) Dockerfile for building on Centos 8

2021-05-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17693.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Dockerfile for building on Centos 8
> ---
>
> Key: HADOOP-17693
> URL: https://issues.apache.org/jira/browse/HADOOP-17693
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
> Environment: Centos 8
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Need to add a Dockerfile for building on Centos 8 since some folks in the 
> community are using it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17724) Add Dockerfile for Debian 10

2021-06-17 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17724.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Add Dockerfile for Debian 10
> 
>
> Key: HADOOP-17724
> URL: https://issues.apache.org/jira/browse/HADOOP-17724
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: build-log-debian-10.zip
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Adding a Dockerfile for building on Debian 10 since there are a lot of users 
> in the community using this distro.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17766) CI for Debian 10

2021-06-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17766.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> CI for Debian 10
> 
>
> Key: HADOOP-17766
> URL: https://issues.apache.org/jira/browse/HADOOP-17766
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Need to setup CI for Debian 10. We need to also ensure it runs only if there 
> are any changes to C++ files. Running it for all the PRs would be redundant.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17778) CI for Centos 8

2021-06-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17778.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> CI for Centos 8
> ---
>
> Key: HADOOP-17778
> URL: https://issues.apache.org/jira/browse/HADOOP-17778
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
> Environment: Centos 8
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Need to run CI for Centos 8 platform to ensure that further changes are 
> stable on this platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17807) Use separate source dir for platform builds

2021-07-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17807.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Use separate source dir for platform builds
> ---
>
> Key: HADOOP-17807
> URL: https://issues.apache.org/jira/browse/HADOOP-17807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: image-2021-07-16-11-36-26-698.png, 
> image-2021-07-16-11-36-55-495.png, image-2021-07-16-11-37-56-923.png
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> The multi-platform build stages run on the same checkout of the source 
> directory, one after the other. For the platforms that are marked as optional 
> (Centos 8 and Debian 10 currently), the decision to run CI on the platform is 
> made by inspecting the git commit history and checking whether there are any 
> C++ file/C++ build/platform related changes.
> It seems that after YETUS runs on one platform, it clears the git 
> branch information. This causes the build to not get triggered on the 
> optional platforms. Please note that the platform not marked optional 
> (Ubuntu focal) isn't affected by this, since CI runs for that platform 
> irrespective of any C++ changes.
> We can see this in the Jenkins UI page -
> CI runs for Centos 8 -
>  !image-2021-07-16-11-36-26-698.png! 
> Subsequently, the CI for Debian 10 gets skipped -
>  !image-2021-07-16-11-36-55-495.png! 
> However, CI for Ubuntu focal runs since it's not marked as optional -
>  !image-2021-07-16-11-37-56-923.png! 
> Thus, we need to ensure that each platform builds on its own copy of the 
> source code checkout, so that whatever changes one platform makes don't 
> affect the others.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17815) Run CI for Centos 7

2021-07-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17815.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Run CI for Centos 7
> ---
>
> Key: HADOOP-17815
> URL: https://issues.apache.org/jira/browse/HADOOP-17815
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: build-log-centos-7.zip
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Need to run the CI for Centos 7 platform since it's a supported platform. The 
> CI will run on this platform only when there's C++ file/C++ build/platform 
> related changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-08-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17787.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: image-2021-07-03-10-47-02-330.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17854) Run junit in Jenkins only if surefire reports exist

2021-08-24 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17854.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Run junit in Jenkins only if surefire reports exist
> ---
>
> Key: HADOOP-17854
> URL: https://issues.apache.org/jira/browse/HADOOP-17854
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: image-2021-08-18-08-59-14-022.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to check if some xml files exist under surefire-reports before 
> running junit in Jenkins. 
>  !image-2021-08-18-08-59-14-022.png|thumbnail! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17913) Filter deps with release labels

2021-09-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17913.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Filter deps with release labels
> ---
>
> Key: HADOOP-17913
> URL: https://issues.apache.org/jira/browse/HADOOP-17913
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to add the ability to filter the dependencies listed in the 
> packages.json file based on the specified release label. This is helpful for 
> maintaining dependencies across different releases for a platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17902) Fix Hadoop build on Debian 10

2021-09-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17902.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix Hadoop build on Debian 10
> -
>
> Key: HADOOP-17902
> URL: https://issues.apache.org/jira/browse/HADOOP-17902
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> We're using *Debian testing* as one of the package sources to get the latest 
> packages. It seems to be broken at the moment. The CI fails to create the 
> build environment for the Debian 10 platform -
> {code}
> [2021-09-08T00:21:11.596Z] #13 [ 8/14] RUN apt-get -q update && apt-get 
> -q install -y --no-install-recommends python3 && apt-get -q install -y 
> --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get 
> clean && rm -rf /var/lib/apt/lists/*
> ...
> [2021-09-08T00:21:22.744Z] #13 11.28 Preparing to unpack 
> .../libc6_2.31-17_amd64.deb ...
> [2021-09-08T00:21:23.260Z] #13 11.46 Checking for services that may need to 
> be restarted...
> [2021-09-08T00:21:23.260Z] #13 11.48 Checking init scripts...
> [2021-09-08T00:21:23.260Z] #13 11.50 Unpacking libc6:amd64 (2.31-17) over 
> (2.28-10) ...
> [2021-09-08T00:21:26.290Z] #13 14.38 Setting up libc6:amd64 (2.31-17) ...
> [2021-09-08T00:21:26.290Z] #13 14.42 /usr/bin/perl: error while loading 
> shared libraries: libcrypt.so.1: cannot open shared object file: No such file 
> or directory
> [2021-09-08T00:21:26.290Z] #13 14.42 dpkg: error processing package 
> libc6:amd64 (--configure):
> [2021-09-08T00:21:26.290Z] #13 14.42  installed libc6:amd64 package 
> post-installation script subprocess returned error exit status 127
> [2021-09-08T00:21:26.291Z] #13 14.43 Errors were encountered while processing:
> [2021-09-08T00:21:26.291Z] #13 14.43  libc6:amd64
> [2021-09-08T00:21:26.291Z] #13 14.46 E: Sub-process /usr/bin/dpkg returned an 
> error code (1)
> [2021-09-08T00:21:27.867Z] #13 ERROR: executor failed running [/bin/bash -o 
> pipefail -c apt-get -q update && apt-get -q install -y 
> --no-install-recommends python3 && apt-get -q install -y 
> --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get 
> clean && rm -rf /var/lib/apt/lists/*]: exit code: 100
> [2021-09-08T00:21:27.867Z] --
> [2021-09-08T00:21:27.867Z]  > [ 8/14] RUN apt-get -q update && apt-get -q 
> install -y --no-install-recommends python3 && apt-get -q install -y 
> --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get 
> clean && rm -rf /var/lib/apt/lists/*:
> [2021-09-08T00:21:27.867Z] --
> [2021-09-08T00:21:27.867Z] executor failed running [/bin/bash -o pipefail -c 
> apt-get -q update && apt-get -q install -y --no-install-recommends 
> python3 && apt-get -q install -y --no-install-recommends 
> $(pkg-resolver/resolve.py debian:10) && apt-get clean && rm -rf 
> /var/lib/apt/lists/*]: exit code: 100
> [2021-09-08T00:21:27.867Z] ERROR: Docker failed to build 
> yetus/hadoop:ef5dbc7283a.
> [2021-09-08T00:21:27.867Z] 
> {code}
> The above log lines are copied from - 
> https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3388/3/pipeline



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17931) Fix typos in usage message in winutils.exe

2021-09-22 Thread Jira
Íñigo Goiri created HADOOP-17931:


 Summary: Fix typos in usage message in winutils.exe
 Key: HADOOP-17931
 URL: https://issues.apache.org/jira/browse/HADOOP-17931
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Íñigo Goiri


The usage message for task creation in winutils.exe has a few typos:
* OPTOINS
* cup rate



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17931) Fix typos in usage message in winutils.exe

2021-09-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17931.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix typos in usage message in winutils.exe
> --
>
> Key: HADOOP-17931
> URL: https://issues.apache.org/jira/browse/HADOOP-17931
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Gautham Banasandra
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The usage message for task creation in winutils.exe has a few typos:
> * OPTOINS
> * cup rate



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17975) Fallback to simple auth does not work for a secondary DistributedFileSystem instance

2021-10-22 Thread Jira
István Fajth created HADOOP-17975:
-

 Summary: Fallback to simple auth does not work for a secondary 
DistributedFileSystem instance
 Key: HADOOP-17975
 URL: https://issues.apache.org/jira/browse/HADOOP-17975
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: István Fajth
Assignee: István Fajth


The following code snippet demonstrates what is necessary to cause a 
connection failure from a secure cluster to a non-secure cluster with fallback 
to SIMPLE auth allowed.
{code:java}
    Configuration conf = new Configuration();

    conf.setBoolean("ipc.client.fallback-to-simple-auth-allowed", true);
    URI fsUri = new URI("hdfs://");

    conf.setBoolean("fs.hdfs.impl.disable.cache", true);
    FileSystem fs = FileSystem.get(fsUri, conf);
    FSDataInputStream src = fs.open(new Path("/path/to/a/file"));
    FileOutputStream dst = new FileOutputStream(File.createTempFile("foo", "bar"));
    IOUtils.copyBytes(src, dst, 1024);

    // The issue happens even if we re-enable the cache at this point
    //conf.setBoolean("fs.hdfs.impl.disable.cache", false);
    // The issue does not happen when we close the first FileSystem object
    // before creating the second.
    //fs.close();
    FileSystem fs2 = FileSystem.get(fsUri, conf);
    FSDataInputStream src2 = fs2.open(new Path("/path/to/a/file"));
    FileOutputStream dst2 = new FileOutputStream(File.createTempFile("foo", "bar"));
    IOUtils.copyBytes(src2, dst2, 1024);
{code}


The problem is that when the DfsClient is created it creates an instance of 
AtomicBoolean, which is propagated down into the IPC layer, where the 
Client.Connection instance in setupIOStreams sets its value. This connection 
object is cached and re-used to multiplex requests against the same DataNode.

When a second DfsClient is created, the AtomicBoolean reference in the 
client is a new AtomicBoolean, but the Client.Connection instance is the same, 
and as it already has a socket open to the DataNode, it returns immediately 
from setupIOStreams, leaving the fallbackToSimpleAuth AtomicBoolean false, as 
it was created in the DfsClient.
This AtomicBoolean, on the other hand, controls how the SaslDataTransferClient 
handles the connection at the level above, and with the value left at the 
default false, the SaslDataTransferClient of the second DfsClient will not 
fall back to SIMPLE authentication but will try to send a SASL handshake when 
connecting to the DataNode.
 
The access to the FileSystem via the second DfsClient fails with the following 
exception:
{code}
WARN hdfs.DFSClient: Failed to connect to /: for file  
for block BP-531773307--1634685133591:blk_1073741826_1002, add to 
deadNodes and continue. 
java.io.EOFException: Unexpected EOF while trying to read response from server
at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:552)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:215)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:455)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:393)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:267)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:215)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
at 
org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:648)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2980)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:822)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:747)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:380)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:658)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:589)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:771)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:840)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
at DfsClientTest3.main(DfsClientTest3.java:30)
{code}
 
The DataNode in the meantime logs the following:

[jira] [Resolved] (HADOOP-17978) Exclude ASF license check for pkg-resolver JSON

2021-10-25 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17978.
--
Fix Version/s: 2.10.2
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Exclude ASF license check for pkg-resolver JSON
> ---
>
> Key: HADOOP-17978
> URL: https://issues.apache.org/jira/browse/HADOOP-17978
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There's no way to add comments to a JSON file. Need to exclude the following 
> files from ASF license checks since they're JSON files -
> 1. dev-support/docker/pkg-resolver/packages.json
> 2. dev-support/docker/pkg-resolver/platforms.json



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18029) Update CompressionCodecFactory to handle uppercase file extensions

2021-12-01 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-18029.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Update CompressionCodecFactory to handle uppercase file extensions
> --
>
> Key: HADOOP-18029
> URL: https://issues.apache.org/jira/browse/HADOOP-18029
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, io, test
> Environment: Tested locally on macOS 11.6.1, IntelliJ IDEA 2021.2.3, 
> running maven commands through terminal. Forked from trunk branch on November 
> 29th, 2021.
>Reporter: Desmond Sisson
>Assignee: Desmond Sisson
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> I've updated the CompressionCodecFactory to handle filenames with 
> capitalized compression extensions. Two of the three maps internal to the 
> class that store codecs already apply lowercase conversions, but the 
> conversion is absent from the call inside getCodec() used for comparing path 
> names.
> I updated the corresponding unit test in TestCodecFactory to cover the 
> intended use cases and confirmed the test passes with the change. I also 
> replaced the NPE raised on a null result with a rich error message, and 
> resolved all checkstyle violations within the changed files.
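
The gist of the fix, as a generic sketch (codecsBySuffix is an assumed 
stand-in for the factory's internal suffix map; this is not the literal 
patch):

{code:java}
import java.util.Locale;
import java.util.Map;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;

// Sketch: normalize case before suffix matching so that "DATA.GZ"
// resolves to the same codec as "data.gz".
CompressionCodec getCodec(Path file, Map<String, CompressionCodec> codecsBySuffix) {
  String name = file.getName().toLowerCase(Locale.ROOT);
  for (Map.Entry<String, CompressionCodec> e : codecsBySuffix.entrySet()) {
    if (name.endsWith(e.getKey())) { // keys are lowercase, e.g. ".gz"
      return e.getValue();
    }
  }
  return null; // caller now reports a descriptive error instead of an NPE
}
{code}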



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18054) Unable to load AWS credentials from any provider in the chain

2021-12-21 Thread Jira
Esteban Avendaño created HADOOP-18054:
-

 Summary: Unable to load AWS credentials from any provider in the 
chain
 Key: HADOOP-18054
 URL: https://issues.apache.org/jira/browse/HADOOP-18054
 Project: Hadoop Common
  Issue Type: Bug
  Components: auth, fs, fs/s3, security
Affects Versions: 3.3.1
 Environment: From top to bottom.

Kubernetes version 1.18.20

Spark Version: 2.4.4

Kubernetes Setup: Pod with serviceAccountName that binds with IAM Role using 
IRSA (EKS Feature).
{code:yaml}
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: 
arn:aws:iam:::role/EKSDefaultPolicyFor-Spark
  name: spark
  namespace: spark {code}
AWS Setup:

IAM Role with permissions over the S3 Bucket

Bucket with permissions granted over the IAM Role.

Code:
{code:python}
def run_etl():
    sc = SparkSession.builder.appName("TXD-PYSPARK-ORACLE-SIEBEL-CASOS").getOrCreate()
    sqlContext = SQLContext(sc)
    args = sys.argv
    load_date = args[1]  # E.g.: "2019-05-21"
    output_path = args[2]  # E.g.: s3://mybucket/myfolder

    print(args, "load_date", load_date, "output_path", output_path)
    sc._jsc.hadoopConfiguration().set(
        "fs.s3a.aws.credentials.provider",
        "com.amazonaws.auth.DefaultAWSCredentialsProviderChain"
    )
    sc._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
    sc._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    # sc._jsc.hadoopConfiguration().set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
    sc._jsc.hadoopConfiguration().set("fs.AbstractFileSystem.s3a.impl", "org.apache.hadoop.fs.s3a.S3A")

    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager', region_name="us-east-1")
    get_secret_value_response = client.get_secret_value(
        SecretId="Siebel_Connection_Info"
    )
    secret = get_secret_value_response["SecretString"]
    secret = json.loads(secret)

    db_username = secret.get("db_username")
    db_password = secret.get("db_password")
    db_host = secret.get("db_host")
    db_port = secret.get("db_port")
    db_name = secret.get("db_name")
    db_url = "jdbc:oracle:thin:@{}:{}/{}".format(db_host, db_port, db_name)
    jdbc_driver_name = "oracle.jdbc.OracleDriver"

    dbtable = """(SELECT * FROM SIEBEL.REPORTE_DE_CASOS WHERE JOB_ID IN (SELECT JOB_ID FROM SIEBEL.SERVICE_CONSUMED_STATUS WHERE PUBLISH_INFORMATION_DT BETWEEN TO_DATE('{} 00:00:00', 'YYYY-MM-DD HH24:MI:SS') AND TO_DATE('{} 23:59:59', 'YYYY-MM-DD HH24:MI:SS')))""".format(load_date, load_date)

    df = sqlContext.read\
        .format("jdbc")\
        .option("charset", "utf8")\
        .option("driver", jdbc_driver_name)\
        .option("url", db_url)\
        .option("dbtable", dbtable)\
        .option("user", db_username)\
        .option("password", db_password)\
        .option("oracle.jdbc.timezoneAsRegion", "false")\
        .load()

    # Partitioning
    a_load_date = load_date.split('-')
    df = df.withColumn("year", lit(a_load_date[0]))
    df = df.withColumn("month", lit(a_load_date[1]))
    df = df.withColumn("day", lit(a_load_date[2]))
    df.write.mode("append").partitionBy(["year", "month", "day"]).csv(output_path, header=True)

    # It is important to close the connection to avoid problems like the one
    # reported at
    # https://stackoverflow.com/questions/40830638/cannot-load-main-class-from-jar-file
    sc.stop()


if __name__ == '__main__':
    run_etl() {code}
Logs:
{code}
+ '[' -z s3://mybucket.spark.jobs/siebel-casos-actividades ']'
+ aws s3 cp s3://mybucket.spark.jobs/siebel-casos-actividades /opt/ --recursive --include '*'
download: s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-casos.py to ../../txd-pyspark-siebel-casos.py
download: s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-actividades.py to ../../txd-pyspark-siebel-actividades.py
download: s3://mybucket.jobs/siebel-casos-actividades/hadoop-aws-3.3.1.jar to ../../hadoop-aws-3.3.1.jar
download: s3://mybucket.spark.jobs/siebel-casos-actividades/ojdbc8.jar to ../../ojdbc8.jar
download: s3://mybucket.spark.jobs/siebel-casos-actividades/aws-java-sdk-bundle-1.11.901.jar to ../../aws-java-sdk-bundle-1.11.901.jar
++ id -u
{code}

[jira] [Created] (HADOOP-18066) AbstractJavaKeyStoreProvider: need a way to read credential store password from Configuration

2022-01-05 Thread Jira
László Bodor created HADOOP-18066:
-

 Summary: AbstractJavaKeyStoreProvider: need a way to read 
credential store password from Configuration
 Key: HADOOP-18066
 URL: https://issues.apache.org/jira/browse/HADOOP-18066
 Project: Hadoop Common
  Issue Type: Wish
Reporter: László Bodor






--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18066) AbstractJavaKeyStoreProvider: need a way to read credential store password from Configuration

2022-01-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor resolved HADOOP-18066.
---
Resolution: Invalid

> AbstractJavaKeyStoreProvider: need a way to read credential store password 
> from Configuration
> -
>
> Key: HADOOP-18066
> URL: https://issues.apache.org/jira/browse/HADOOP-18066
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: security
>Reporter: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Codepath in focus is 
> [this|https://github.com/apache/hadoop/blob/c3006be516ce7d4f970e24e7407b401318ceec3c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java#L316]
> {code}
>   password = ProviderUtils.locatePassword(CREDENTIAL_PASSWORD_ENV_VAR,
>   conf.get(CREDENTIAL_PASSWORD_FILE_KEY));
> {code}
> Since HIVE-14822, we can use a custom keystore that HiveServer2 propagates to 
> jobs/tasks of different execution engines (mr, tez, spark).
> We're able to pass any "jceks:" URL, but not a password, e.g. on this 
> codepath:
> {code}
> Caused by: java.security.UnrecoverableKeyException: Password verification 
> failed
>   at com.sun.crypto.provider.JceKeyStore.engineLoad(JceKeyStore.java:879) 
> ~[sunjce_provider.jar:1.8.0_232]
>   at java.security.KeyStore.load(KeyStore.java:1445) ~[?:1.8.0_232]
>   at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.locateKeystore(AbstractJavaKeyStoreProvider.java:326)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:86)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2409)
>  ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2347) 
> ~[hadoop-common-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getPasswordString(AbfsConfiguration.java:295)
>  ~[hadoop-azure-3.1.1.7.1.7.0-551.jar:?]
>   at 
> org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:525)
>  ~[hadoop-azure-3.1.1.7.1.7.0-551.jar:?]
> {code}
> Even though there is a way to read the password from a text file, it's not 
> secure; we need to try reading a Configuration property first and, only if 
> it's null, fall back to the environment variable.
> Hacking System.getenv() is only possible with reflection, which doesn't look 
> good.
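
A minimal sketch of the requested lookup order; the configuration key below is hypothetical, while the other names come from the quoted code:
{code:java}
// Minimal sketch: 1) Configuration property (hypothetical key),
// 2) environment variable, 3) password file (the existing behaviour).
char[] password;
String fromConf = conf.get("hadoop.security.credstore.password"); // hypothetical
if (fromConf != null) {
  password = fromConf.toCharArray();
} else {
  password = ProviderUtils.locatePassword(CREDENTIAL_PASSWORD_ENV_VAR,
      conf.get(CREDENTIAL_PASSWORD_FILE_KEY));
}
{code}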



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18078) TemporaryAWSCredentialsProvider has no credentials

2022-01-11 Thread Jira
Björn Boschman created HADOOP-18078:
---

 Summary: TemporaryAWSCredentialsProvider has no credentials
 Key: HADOOP-18078
 URL: https://issues.apache.org/jira/browse/HADOOP-18078
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.3.1
 Environment: python:3.9.5
openjdk:11.0.13
spark:3.2.0
hadoop:3.3.1
Reporter: Björn Boschman
 Attachments: spark_test.py

Not quite sure how to phrase this bug report but I'll give it a try.
We are using a SparkSession to access parquet files on AWS/S3.

It is OK if there is only one s3a URL supplied.
It used to be OK if there was a bunch of s3a URLs - that's been broken since 
hadoop:3.3.1.

 

 

I've attached a sample script - yet it relies on spark+hadoop being installed. 
The failing pattern is sketched below.
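
A minimal sketch of the pattern described above (not the attached script; the paths are made up):
{code:java}
// A single s3a URL reads fine; passing several URLs to one read is what
// breaks on hadoop 3.3.1 per this report.
SparkSession spark = SparkSession.builder().appName("repro").getOrCreate();
Dataset<Row> ok = spark.read().parquet("s3a://bucket/part-0.parquet");
Dataset<Row> broken = spark.read().parquet(
    "s3a://bucket/part-0.parquet", "s3a://bucket/part-1.parquet");
{code}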



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18078) TemporaryAWSCredentialsProvider has no credentials

2022-01-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Björn Boschman resolved HADOOP-18078.
-
Resolution: Cannot Reproduce

> TemporaryAWSCredentialsProvider has no credentials
> --
>
> Key: HADOOP-18078
> URL: https://issues.apache.org/jira/browse/HADOOP-18078
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: python:3.9.5
> openjdk:11.0.13
> spark:3.2.0
> hadoop:3.3.1
>Reporter: Björn Boschman
>Priority: Major
> Attachments: HADOOP-18078.scala, spark_test.py
>
>
> Not quite sure how to phrase this bug report but I'll give it a try.
> We are using a SparkSession to access parquet files on AWS/S3.
> It is OK if there is only one s3a URL supplied.
> It used to be OK if there was a bunch of s3a URLs - that's been broken since 
> hadoop:3.3.1.
>  
>  
> I've attached a sample script - yet it relies on spark+hadoop being installed.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)

2022-01-20 Thread Jira
László Bodor created HADOOP-18086:
-

 Summary: Remove org.checkerframework.dataflow from 
hadoop-shaded-guava artifact (GNU GPLv2 license)
 Key: HADOOP-18086
 URL: https://issues.apache.org/jira/browse/HADOOP-18086
 Project: Hadoop Common
  Issue Type: Wish
Reporter: László Bodor






--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18143) toString method of RpcCall is intermittently throwing IllegalArgumentException

2022-02-24 Thread Jira
András Győri created HADOOP-18143:
-

 Summary: toString method of RpcCall is intermittently throwing 
IllegalArgumentException
 Key: HADOOP-18143
 URL: https://issues.apache.org/jira/browse/HADOOP-18143
 Project: Hadoop Common
  Issue Type: Bug
Reporter: András Győri
Assignee: András Győri






--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18152) fail to upload logfile to s3 by flume using hadoop-tools

2022-03-04 Thread Jira
王独醉 created HADOOP-18152:


 Summary: fail to upload logfile to s3 by flume using hadoop-tools
 Key: HADOOP-18152
 URL: https://issues.apache.org/jira/browse/HADOOP-18152
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0
 Environment: cdh6.3.2 with hadoop3.0.0 and flume 1.9.0
Reporter: 王独醉
 Attachments: flumelog.txt

Uploading a log file to S3 via Flume using hadoop-tools fails; see the attached 
flumelog.txt for details.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18159) Certificate doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]

2022-03-16 Thread Jira
André F. created HADOOP-18159:
-

 Summary: Certificate doesn't match any of the subject alternative 
names: [*.s3.amazonaws.com, s3.amazonaws.com]
 Key: HADOOP-18159
 URL: https://issues.apache.org/jira/browse/HADOOP-18159
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.3.1
 Environment: hadoop 3.3.1

JDK8
Reporter: André F.


Trying to run any job after bumping our Spark version (which now uses Hadoop 
3.3.1) leads us to the following exception while reading files on s3:
{code:java}
org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on s3a:///.parquet: com.amazonaws.SdkClientException: Unable to execute HTTP request: Certificate for  doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]: Unable to execute HTTP request: Certificate for  doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:208)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170)
at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3351)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
at org.apache.hadoop.fs.s3a.S3AFileSystem.isDirectory(S3AFileSystem.java:4277) {code}
 
{code:java}
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for  doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:507)
at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:437)
at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384)
at com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
at com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
at com.amazonaws.http.conn.$Proxy16.connect(Unknown Source)
at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1333)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145) {code}
We found similar problems in the following tickets, but neither applies here:
 - https://issues.apache.org/jira/browse/HADOOP-17017 (we don't use `.` in our 
bucket names)
 - [https://github.com/aws/aws-sdk-java-v2/issues/1786] (we tried to override 
it by using `httpclient:4.5.10` or `httpclient:4.5.8`, with no effect).

We couldn't test it using the native `openssl` configuration due to our setup, 
so we would like to stick with the java ssl implementation, if possible.
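
For reference, a minimal sketch of pinning S3A to the pure-Java TLS stack; the property and value are the ones documented for S3A, and whether this is relevant to the failure is an assumption:
{code:java}
// Minimal sketch: keep S3A on the JSSE (pure-Java) TLS implementation
// explicitly, so no wildfly/openssl code path is involved.
Configuration conf = new Configuration();
conf.set("fs.s3a.ssl.channel.mode", "default_jsse");
{code}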

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18160) `wildfly.openssl` should not be shaded by Hadoop build

2022-03-16 Thread Jira
André F. created HADOOP-18160:
-

 Summary: `wildfly.openssl` should not be shaded by Hadoop build
 Key: HADOOP-18160
 URL: https://issues.apache.org/jira/browse/HADOOP-18160
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.3.1
 Environment: hadoop 3.3.1

spark 3.2.1

JDK8
Reporter: André F.


`org.wildfly.openssl` is a runtime library, yet its references are being shaded 
in Hadoop, breaking the integration with other frameworks like Spark whenever 
"fs.s3a.ssl.channel.mode" is set to "openssl". The error produced in this 
situation is:
{code:java}
Suppressed: java.lang.NoClassDefFoundError: 
org/apache/hadoop/shaded/org/wildfly/openssl/OpenSSLProvider{code}
This happens whenever the provider is instantiated from the 
`DelegatingSSLSocketFactory`; Spark adds the library to its classpath without 
the shade, thus creating this issue.

Dependencies which are not in "compile" scope should probably not be shaded, to 
avoid this kind of integration issue.

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18174) IBM Java detected while running on OpenJDK class library

2022-03-25 Thread Jira
Mateusz Łyczek created HADOOP-18174:
---

 Summary: IBM Java detected while running on OpenJDK class library
 Key: HADOOP-18174
 URL: https://issues.apache.org/jira/browse/HADOOP-18174
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.3.2
Reporter: Mateusz Łyczek


In our project we are using the hadoop-client library, and everything works 
fine while running inside containers with the official OpenJDK base image.

But for optimisation purposes we also use ibm-semeru-runtimes base images 
([https://www.ibm.com/support/pages/semeru-runtimes-release-notes] ). To be 
specific, we use the *open-17.0.1_12-jre* version of this image, and we 
encountered the following problem.

Our application is throwing an exception while using hadoop-client to upload 
files:

 
{code:java}
failure to login: javax.security.auth.login.LoginException: No LoginModule 
found for com.ibm.security.auth.module.JAASLoginModule{code}
 

 

After a little investigation I found that this login module is selected by 
Hadoop only when it detects that it is being run on IBM Java (see 
[https://github.com/apache/hadoop/blob/672e380c4f6ffcb0a6fee6d8263166e16b4323c2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L434]
 )

 
{code:java}
  private static String getOSLoginModuleName() {
if (IBM_JAVA) {
  return "com.ibm.security.auth.module.JAASLoginModule";
} else {
  return windows ? "com.sun.security.auth.module.NTLoginModule"
: "com.sun.security.auth.module.UnixLoginModule";
}
  } {code}
 

 

and IBM Java is detected based on the *java.vendor* system property value (see 
[https://github.com/apache/hadoop/blob/672e380c4f6ffcb0a6fee6d8263166e16b4323c2/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/util/PlatformName.java#L50]
 )

 
{code:java}
  /**
   * The java vendor name used in this platform.
   */
  public static final String JAVA_VENDOR_NAME = 
System.getProperty("java.vendor");

  /**
   * A public static variable to indicate the current java vendor is
   * IBM java or not.
   */
  public static final boolean IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM");  
{code}
 

 

I checked inside the ibm-semeru-runtimes-based version of our Docker container, 
and the *java.vendor* system property is set to the following value:

 
{code:java}
java.vendor: IBM Corporation {code}
but, as the documentation for the IBM Semeru runtimes images says, it contains 
the OpenJDK class libraries with the Eclipse OpenJ9 JVM. I confirmed this by 
running *java -version*:

 

 
{code:java}
openjdk version "17.0.1" 2021-10-19
IBM Semeru Runtime Open Edition 17.0.1.0 (build 17.0.1+12)
Eclipse OpenJ9 VM 17.0.1.0 (build openj9-0.29.1, JRE 17 Linux amd64-64-Bit 
Compressed References 20211207_75 (JIT enabled, AOT enabled)
OpenJ9   - 7d055dfcb
OMR  - e30892e2b
JCL  - fc67fbe50a0 based on jdk-17.0.1+12) {code}
therefore there is no {{com.ibm.security.auth.module.JAASLoginModule}} class 
present.

 

Therefore I would like to ask whether there is a more accurate way of detecting 
IBM Java than checking the *java.vendor* system property.

I tried to think of something to suggest: the first thing that came to my mind 
was to check whether one of the classes from the IBM packages actually exists 
by trying to load it (see the sketch below), but I don't know the other usages 
of the *IBM_JAVA* variable in hadoop-client well enough to be sure that it's a 
good idea for you.
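
A minimal sketch of that idea; the helper name is hypothetical and the class name is taken from the exception above:
{code:java}
// Minimal sketch (hypothetical helper): probe for an IBM-specific class
// instead of trusting java.vendor. Initialization is skipped on purpose.
private static boolean hasIbmLoginModule() {
  try {
    Class.forName("com.ibm.security.auth.module.JAASLoginModule",
        false, PlatformName.class.getClassLoader());
    return true;
  } catch (ClassNotFoundException e) {
    return false;
  }
}
{code}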

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18274) Use CMake 3.19.0 in Debian 10

2022-06-02 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-18274.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Use CMake 3.19.0 in Debian 10
> -
>
> Key: HADOOP-18274
> URL: https://issues.apache.org/jira/browse/HADOOP-18274
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HDFS Native Client fails to build on Debian 10 due to the following error -
> {code}
> [WARNING] CMake Error at main/native/libhdfspp/CMakeLists.txt:68 
> (FetchContent_MakeAvailable):
> [WARNING]   Unknown CMake command "FetchContent_MakeAvailable".
> [WARNING] 
> [WARNING] 
> [WARNING] -- Configuring incomplete, errors occurred!
> {code}
> Jenkins run - 
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
> The cause of this issue is that the version of CMake on Debian 10 (which is 
> installed through apt) is 3.13, and *FetchContent_MakeAvailable* was 
> [introduced in CMake 
> 3.14|https://cmake.org/cmake/help/v3.14/module/FetchContent.html].
> Thus, we upgrade CMake by installing it through the 
> [install-cmake.sh|https://github.com/apache/hadoop/blob/34a973a90ef89b633c9b5c13a79aa1ac11c92eb5/dev-support/docker/pkg-resolver/install-cmake.sh]
>  script from pkg-resolver, which installs CMake 3.19.0, instead of installing 
> CMake through apt on Debian 10.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18285) S3a should retry when being throttled by STS (assumed roles)

2022-06-10 Thread Jira
André Kelpe created HADOOP-18285:


 Summary: S3a should retry when being throttled by STS (assumed 
roles)
 Key: HADOOP-18285
 URL: https://issues.apache.org/jira/browse/HADOOP-18285
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.3
Reporter: André Kelpe


We ran into an issue where we were being throttled by AWS when reading from a 
bucket using the sts assume-role mechanism.

 

The stacktrace looks like this:

 
{code:java}
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: Rate exceeded (Service: AWSSecurityTokenService; Status Code: 400; Error Code: Throttling; Request ID: 02f32511-418c-4b2a-96ef-2d7ba8dafab1; Proxy: null)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
        at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.doInvoke(AWSSecurityTokenServiceClient.java:1682)
        at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1649)
        at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1638)
        at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.executeAssumeRole(AWSSecurityTokenServiceClient.java:498)
        at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.assumeRole(AWSSecurityTokenServiceClient.java:467)
        at com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider.newSession(STSAssumeRoleSessionCredentialsProvider.java:348)
        at com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider.access$000(STSAssumeRoleSessionCredentialsProvider.java:44)
        at com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider$1.call(STSAssumeRoleSessionCredentialsProvider.java:93)
        at com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider$1.call(STSAssumeRoleSessionCredentialsProvider.java:90)
        at com.amazonaws.auth.RefreshableTask.refreshValue(RefreshableTask.java:295)
        at com.amazonaws.auth.RefreshableTask.blockingRefresh(RefreshableTask.java:251)
        at com.amazonaws.auth.RefreshableTask.getValue(RefreshableTask.java:192)
        at com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider.getCredentials(STSAssumeRoleSessionCredentialsProvider.java:320){code}

I read the code and, from what I can see, the exception is handled by S3AUtils 
here: 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java#L240]

It does not inspect the message further and assumes that the 400 is indeed a 
bad request. Because of this it gets handled as an AWSBadRequestException, 
which causes the request to fail instead of being retried by the 
S3ARetryPolicy:

[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java#L215-L217]

A better approach seems to be to look at the sub-type and message of the 
original exception and treat it as a back-off-and-retry case by throwing a 
different exception than AWSBadRequestException, as sketched below.
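
A minimal sketch of that idea inside the exception translation path; the throttled exception class exists in S3A, but treating this exact branch as the fix is an assumption:
{code:java}
// Minimal sketch: inspect the service error code before mapping every
// 400 to AWSBadRequestException, so throttled STS calls back off and retry.
if (ase.getStatusCode() == 400 && "Throttling".equals(ase.getErrorCode())) {
  // S3ARetryPolicy retries AWSServiceThrottledException with backoff
  return new AWSServiceThrottledException(message, ase);
}
return new AWSBadRequestException(message, ase);
{code}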

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr

[jira] [Created] (HADOOP-18286) S3a: allow custom retry policies

2022-06-10 Thread Jira
André Kelpe created HADOOP-18286:


 Summary: S3a: allow custom retry policies
 Key: HADOOP-18286
 URL: https://issues.apache.org/jira/browse/HADOOP-18286
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.3
Reporter: André Kelpe


This is related to HADOOP-18285.

It would be great if the retry policies were pluggable, so that one could 
inject their own implementation to cover exceptional cases not correctly 
handled by the current policy. Currently S3ARetryPolicy is hard-wired and 
cannot be replaced without heavy sub-classing gymnastics; possible wiring is 
sketched below.
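
A minimal sketch of what pluggability could look like; the configuration key is hypothetical and none of this wiring exists in S3A today:
{code:java}
// Minimal sketch (hypothetical key "fs.s3a.retry.policy.impl"): resolve a
// policy class from configuration and build it with the Configuration
// constructor that S3ARetryPolicy already has. Reflection exceptions are
// left unhandled for brevity.
Class<? extends S3ARetryPolicy> cls = conf.getClass(
    "fs.s3a.retry.policy.impl", S3ARetryPolicy.class, S3ARetryPolicy.class);
S3ARetryPolicy retryPolicy =
    cls.getConstructor(Configuration.class).newInstance(conf);
{code}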



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18409) Support @Metric annotation on String fields similarly as with methods returning String

2022-08-19 Thread Jira
István Fajth created HADOOP-18409:
-

 Summary: Support @Metric annotation on String fields similarly as 
with methods returning String
 Key: HADOOP-18409
 URL: https://issues.apache.org/jira/browse/HADOOP-18409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: István Fajth
Assignee: István Fajth


In the Metrics2 framework, if a method is annotated with the @Metric annotation 
and it returns a String, then the String is understood as a TAG.

A field annotated with the @Metric annotation, on the other hand, is not 
understood as a tag, even if the type of the annotation is set to 
Metric.Type.TAG; it simply gets ignored if the field type is String.

It would be great if @Metric-annotated String fields had the same default 
behaviour as @Metric-annotated methods that return a String value, as 
illustrated below.

This has come up as part of HDDS-7120 (discussion is in the PR for that ticket).
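
A minimal sketch of the asymmetry; the source and member names are made up:
{code:java}
// Minimal sketch: today only the method form is registered as a TAG;
// the annotated String field is ignored, even with type = TAG.
@Metrics(context = "example")
class ExampleSource {

  @Metric(type = Metric.Type.TAG)
  String clusterId = "cluster-1";   // ignored today

  @Metric
  String version() {                // understood as a TAG
    return "1.0";
  }
}
{code}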



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18143) toString method of RpcCall throws IllegalArgumentException

2022-08-24 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

András Győri resolved HADOOP-18143.
---
Resolution: Won't Fix

> toString method of RpcCall throws IllegalArgumentException
> --
>
> Key: HADOOP-18143
> URL: https://issues.apache.org/jira/browse/HADOOP-18143
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: András Győri
>Assignee: András Győri
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We have observed breaking tests such as TestApplicationACLs. We have located 
> the root cause, which is HADOOP-18082. It seems that there is a concurrency 
> issue within ProtobufRpcEngine2. When using a debugger, the missing fields 
> are there, hence the suspicion of a concurrency problem. The stack trace:
> {noformat}
> java.lang.IllegalArgumentException
>     at java.nio.Buffer.position(Buffer.java:244)
>     at 
> org.apache.hadoop.ipc.RpcWritable$ProtobufWrapper.readFrom(RpcWritable.java:131)
>     at org.apache.hadoop.ipc.RpcWritable$Buffer.getValue(RpcWritable.java:232)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest.getRequestHeader(ProtobufRpcEngine2.java:645)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest.toString(ProtobufRpcEngine2.java:663)
>     at java.lang.String.valueOf(String.java:3425)
>     at java.lang.StringBuilder.append(StringBuilder.java:516)
>     at org.apache.hadoop.ipc.Server$RpcCall.toString(Server.java:1328)
>     at java.lang.String.valueOf(String.java:3425)
>     at java.lang.StringBuilder.append(StringBuilder.java:516)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3097){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18427) Improve ZKDelegationTokenSecretManager#startThread with recommended methods

2022-09-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-18427.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Improve ZKDelegationTokenSecretManager#startThread with recommended methods
> ---
>
> Key: HADOOP-18427
> URL: https://issues.apache.org/jira/browse/HADOOP-18427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> When reading the code, I found a deprecated method in use: 
> ZKDelegationTokenSecretManager#startThread uses Curator's EnsurePath.
> But EnsurePath is deprecated, so the recommended methods should be used 
> instead:
> public class EnsurePath
> Deprecated.
> Since 2.9.0 - Prefer 
> CuratorFramework.create().creatingParentContainersIfNeeded() or 
> CuratorFramework.exists().creatingParentContainersIfNeeded()
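
A minimal sketch of the recommended replacement, with a placeholder znode path:
{code:java}
// Minimal sketch: replace EnsurePath with a guarded create that also
// builds any missing parent nodes. "workingPath" is a placeholder.
if (zkClient.checkExists().forPath(workingPath) == null) {
  zkClient.create().creatingParentContainersIfNeeded().forPath(workingPath);
}
{code}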



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18452) Fix TestKMS#testKMSHAZooKeeperDelegationToken failing after HADOOP-18427

2022-09-14 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-18452.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix TestKMS#testKMSHAZooKeeperDelegationToken failing after HADOOP-18427
> 
>
> Key: HADOOP-18452
> URL: https://issues.apache.org/jira/browse/HADOOP-18452
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> The reason for the error is that the znode is created directly, without 
> first checking whether it already exists.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18451) Update hsqldb.version from 2.3.4 to 2.5.2

2022-09-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-18451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-18451.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Update hsqldb.version from 2.3.4 to 2.5.2
> -
>
> Key: HADOOP-18451
> URL: https://issues.apache.org/jira/browse/HADOOP-18451
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> I plan to upgrade the version of hsqldb from 2.3.4 to 2.5.2 for the following 
> reasons:
> 1. The current version, 2.3.4, is almost 6 years old; upgrading to a newer 
> release keeps up with new features and bug fixes.
> 2. I plan to add verification of the table-creation statements, which needs 
> the MySQL and SQL Server compatibility modes; the 2.5.2 version of hsqldb 
> handles these better.
> 3. We are temporarily unable to upgrade hsqldb to version 2.6.0 because 
> version 2.6.0 depends on JDK 11.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


