[jira] [Created] (HADOOP-14507) extend per-bucket secret key config with explicit getPassword() on fs.s3a.$bucket.secret.key

2017-06-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14507:
---

 Summary: extend per-bucket secret key config with explicit 
getPassword() on fs.s3a.$bucket.secret.key
 Key: HADOOP-14507
 URL: https://issues.apache.org/jira/browse/HADOOP-14507
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


Per-bucket JCEKS support turns out to be complex, as you have to manage multiple 
JCEKS files and configure the client to ask for the right one. This is because 
we're calling {{Configuration.getPassword("fs.s3a.secret.key")}}.

If, before that, we check for the explicit id, key and session key in the 
properties {{fs.s3a.$bucket.secret.key}} (etc.), we could have a single JCEKS 
file with all the secrets for the different buckets. You would only need to 
point the base config at the secrets file, and the right credentials would be 
picked up, if set.
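
Below is a minimal sketch of the proposed lookup order, assuming a hypothetical 
helper (not the final S3A implementation): check the per-bucket property via 
{{Configuration.getPassword()}} first, then fall back to the base key, so a 
single JCEKS file can hold secrets for many buckets.

{code}
// Hypothetical helper sketching the proposed per-bucket-first lookup order.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class PerBucketSecrets {
  static String lookupPassword(Configuration conf, String bucket, String baseKey)
      throws IOException {
    // e.g. baseKey "fs.s3a.secret.key" -> bucketKey "fs.s3a.mybucket.secret.key"
    String bucketKey =
        baseKey.replaceFirst("^fs\\.s3a\\.", "fs.s3a." + bucket + ".");
    char[] secret = conf.getPassword(bucketKey);   // per-bucket entry first
    if (secret == null) {
      secret = conf.getPassword(baseKey);          // fall back to the base key
    }
    return secret == null ? null : new String(secret);
  }
}
{code}

With this, pointing {{hadoop.security.credential.provider.path}} at one JCEKS 
file holding entries for each bucket would be enough; no per-bucket provider 
configuration is needed.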



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14508) TestDFSIO throws NPE when the -sequential argument is set.

2017-06-08 Thread wenxin he (JIRA)
wenxin he created HADOOP-14508:
--

 Summary: TestDFSIO throws NPE when the -sequential argument is set.
 Key: HADOOP-14508
 URL: https://issues.apache.org/jira/browse/HADOOP-14508
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha4
Reporter: wenxin he
Assignee: wenxin he


The benchmark tool TestDFSIO throws an NPE when {{-sequential}} is set, due to 
the uninitialized {{ioer.stream}} in {{TestDFSIO#sequentialTest}}.

For further description and stack traces, see the comments.
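
Purely to illustrate the failure mode (the names below are hypothetical, not 
TestDFSIO's actual fields): a stream that is only initialized in one code path 
NPEs when another path dereferences it, and a lazy-init guard avoids that.

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class LazyStreamSketch {
  private OutputStream stream;            // stays null unless configure() runs

  void configure() {
    stream = new ByteArrayOutputStream(); // only some callers go through here
  }

  // Guard: initialize on first use rather than assuming configure() ran.
  private OutputStream getStream() {
    if (stream == null) {
      stream = new ByteArrayOutputStream();
    }
    return stream;
  }

  void doIO(byte[] buffer) throws IOException {
    // 'stream.write(buffer)' would NPE when configure() was never called.
    getStream().write(buffer);
  }
}
{code}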



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/428/

[Jun 7, 2017 4:21:07 PM] (varunsaxena) YARN-6604. Allow metric TTL for 
Application table to be specified
[Jun 7, 2017 8:34:47 AM] (brahma) MAPREDUCE-6676. NNBench should Throw 
IOException when rename and delete
[Jun 7, 2017 8:41:06 PM] (Carlo Curino) YARN-6634. [API] Refactor 
ResourceManager WebServices to make API
[Jun 7, 2017 8:54:52 PM] (liuml07) HADOOP-14500. Azure:
[Jun 7, 2017 10:52:52 PM] (jzhuge) HDFS-11861. 
ipc.Client.Connection#sendRpcRequest should log request




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unc

[jira] [Created] (HADOOP-14509) InconsistentAmazonS3Client adds extra paths to listStatus() after delete.

2017-06-08 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14509:
-

 Summary: InconsistentAmazonS3Client adds extra paths to 
listStatus() after delete.
 Key: HADOOP-14509
 URL: https://issues.apache.org/jira/browse/HADOOP-14509
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri


I identified a potential issue in the code that simulates list-after-delete 
inconsistency while code-reviewing HADOOP-13760. It appeared to work for the 
existing test cases, but now that we are using the inconsistency-injection code 
for general testing (e.g. HADOOP-14488), we need to make sure it is correct.

The deliverable is to make sure 
{{InconsistentAmazonS3Client#restoreListObjects()}} is correct.
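
A hedged sketch of the invariant to verify (assumed names; a real test would 
wire in {{InconsistentAmazonS3Client}} through the fault-injection configuration 
from HADOOP-13760): once any injected inconsistency delay has elapsed, a listing 
must not resurrect a deleted path.

{code}
import static org.junit.Assert.assertNotEquals;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListAfterDeleteCheck {
  /** Assert that a path deleted earlier does not reappear in dir's listing. */
  static void assertNotResurrected(FileSystem fs, Path dir, Path deleted)
      throws Exception {
    for (FileStatus st : fs.listStatus(dir)) {
      assertNotEquals("deleted path reappeared in listing",
          deleted, st.getPath());
    }
  }
}
{code}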



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/

[Jun 7, 2017 4:21:07 PM] (varunsaxena) YARN-6604. Allow metric TTL for 
Application table to be specified
[Jun 7, 2017 8:34:47 AM] (brahma) MAPREDUCE-6676. NNBench should Throw 
IOException when rename and delete
[Jun 7, 2017 8:41:06 PM] (Carlo Curino) YARN-6634. [API] Refactor 
ResourceManager WebServices to make API
[Jun 7, 2017 8:54:52 PM] (liuml07) HADOOP-14500. Azure:
[Jun 7, 2017 10:52:52 PM] (jzhuge) HDFS-11861. 
ipc.Client.Connection#sendRpcRequest should log request




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   
org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-mvninstall-root.txt
  [492K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [140K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [288K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
 

[jira] [Created] (HADOOP-14510) Use error code detail in AWS server responses for finer grained exceptions

2017-06-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14510:
---

 Summary: Use error code detail in AWS server responses for finer 
grained exceptions
 Key: HADOOP-14510
 URL: https://issues.apache.org/jira/browse/HADOOP-14510
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.1
Reporter: Steve Loughran


{{S3AUtils.translateException()}} maps HTTP status codes to exceptions. We 
aren't looking at the body of the responses, though, except when handling a 301 
redirect.

We should use the error code in the response body to fine-tune the exceptions 
raised, especially for 400 and 401/403.

Right now I'm not sure we are even getting that error code into the exception 
text.

See: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
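
As a hedged sketch only (assumed structure, not the current S3A code): the AWS 
SDK already surfaces the body's error code through 
{{AmazonServiceException.getErrorCode()}}, so the translation could consult it 
alongside the HTTP status.

{code}
import java.io.IOException;
import java.nio.file.AccessDeniedException;

import com.amazonaws.AmazonServiceException;

public class S3ErrorCodeSketch {
  /** Map a service exception to an IOException using the response body's
   *  error code, not just the HTTP status. Illustrative mapping only. */
  static IOException refine(AmazonServiceException e) {
    String code = e.getErrorCode();  // e.g. "InvalidAccessKeyId", "ExpiredToken"
    switch (e.getStatusCode()) {
    case 400:
      if ("ExpiredToken".equals(code)) {
        return new IOException("Session credentials expired: " + e, e);
      }
      return new IOException("Bad request (" + code + "): " + e, e);
    case 401:
    case 403:
      // Distinguishable cases: AccessDenied, InvalidAccessKeyId,
      // SignatureDoesNotMatch ... all currently collapse together.
      return (IOException) new AccessDeniedException(
          "S3 " + code + ": " + e.getMessage()).initCause(e);
    default:
      return new IOException(e);
    }
  }
}
{code}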



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-08 Thread Duo Xu (JIRA)
Duo Xu created HADOOP-14512:
---

 Summary: WASB atomic rename should not throw exception if the file 
is neither in src nor in dst when doing the rename
 Key: HADOOP-14512
 URL: https://issues.apache.org/jira/browse/HADOOP-14512
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: Duo Xu


During an atomic rename operation, WASB creates a rename-pending JSON file to 
document which files need to be renamed and their destination. WASB then reads 
this file and renames all the files one by one.

A recent customer incident in HBase points to a potential bug in the atomic 
rename implementation.

For example, below is a rename-pending JSON file:

{code}
{
  FormatVersion: "1.0",
  OperationUTCTime: "2017-04-29 06:08:57.465",
  OldFolderName: "hbase\/data\/default\/abc",
  NewFolderName: "hbase\/.tmp\/data\/default\/abc",
  FileList: [
    ".tabledesc",
    ".tabledesc\/.tableinfo.01",
    ".tmp",
    "08e698e0b7d4132c0456b16dcf3772af",
    "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
    "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
    "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid",
    "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
    "08e698e0b7d4132c0456b16dcf3772af\/0",
    "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
    "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits",
    "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid"
  ]
}
{code}

While the HBase regionserver process (which uses the WASB driver underneath) was 
renaming "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo", the regionserver 
process crashed or the VM was rebooted for system maintenance. When the 
regionserver process started running again, it found the rename-pending JSON 
file and tried to redo the rename operation.

However, when it read the first file ".tabledesc" in the file list, it could not 
find this file in the source folder, and it also could not find it in the 
destination folder. It was not in the source folder because the file had already 
been renamed/moved to the destination folder. It was not in the destination 
folder because when HBase starts, it cleans up all the files under /hbase/.tmp.

The current implementation throws an exception:
{code}
} else {
  throw new IOException(
      "Attempting to complete rename of file " + srcKey + "/" + fileName
      + " during folder rename redo, and file was not found in source "
      + "or destination.");
}
{code}

This causes HBase HMaster initialization to fail, and restarting HMaster does 
not help, because the same exception is thrown again.

My proposal is that if, during the redo, WASB finds a file that is in neither 
src nor dst, WASB should just skip it and process the next file, rather than 
throw the error and leave the user to fix things manually (see the sketch after 
this list). The reasons are:

1. Since the rename-pending JSON file contains file A, if file A is not in src, 
it must already have been renamed.
2. If file A is not in src and not in dst, the upper-layer service must have 
removed it. One thing to note is that during the atomic rename, the folder is 
locked, so the only situation in which the file gets deleted is when the VM 
reboots or the service process crashes. When the service process restarts, some 
operations may happen before the atomic rename redo, like the HBase .tmp cleanup 
in the example above.
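
A hedged sketch of the proposed skip-and-continue behaviour, as a drop-in for 
the branch quoted earlier ({{LOG}} is assumed to be the class's existing logger; 
this is illustrative, not an actual NativeAzureFileSystem patch):

{code}
} else {
  // Neither source nor destination has the file: its rename already
  // completed and an upper layer (e.g. HBase's /hbase/.tmp cleanup)
  // deleted it afterwards. Log and move on to the next FileList entry
  // instead of failing the whole redo.
  LOG.warn("File " + srcKey + "/" + fileName
      + " not found in source or destination during rename redo; skipping.");
}
{code}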



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org