[jira] [Created] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-09 Thread Zhijie Shen (JIRA)
Zhijie Shen created HADOOP-11181:


 Summary: o.a.h.security.token.delegation.DelegationTokenManager 
should be more generalized to handle other DelegationTokenIdentifier
 Key: HADOOP-11181
 URL: https://issues.apache.org/jira/browse/HADOOP-11181
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Zhijie Shen
Assignee: Zhijie Shen


While DelegationTokenManager can set an external secretManager, it has the 
assumption that the token identifier is going to be 
o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses 
DelegationTokenIdentifier to decode a token. 
{code}
  @SuppressWarnings("unchecked")
  public UserGroupInformation verifyToken(Token<DelegationTokenIdentifier>
      token) throws IOException {
    ByteArrayInputStream buf = new ByteArrayInputStream(token.getIdentifier());
    DataInputStream dis = new DataInputStream(buf);
    DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
    id.readFields(dis);
    dis.close();
    secretManager.verifyToken(id, token.getPassword());
    return id.getUser();
  }
{code}

It's not going to work if the token kind is anything other than 
web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which 
have a customized way to decode a token.
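
One possible direction (a sketch only, assuming the external secret manager 
extends AbstractDelegationTokenSecretManager) is to let the secret manager 
supply the identifier instance instead of hard-coding 
DelegationTokenIdentifier, e.g. via createIdentifier():
{code}
  // Sketch, not a committed fix: the secret manager decides which
  // TokenIdentifier subclass to instantiate, so each token kind is decoded
  // by its own readFields() implementation.
  @SuppressWarnings("unchecked")
  public UserGroupInformation verifyToken(
      Token<? extends AbstractDelegationTokenIdentifier> token) throws IOException {
    AbstractDelegationTokenIdentifier id = secretManager.createIdentifier();
    try (DataInputStream dis =
        new DataInputStream(new ByteArrayInputStream(token.getIdentifier()))) {
      id.readFields(dis);
    }
    secretManager.verifyToken(id, token.getPassword());
    return id.getUser();
  }
{code}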



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-0.23-Build #1097

2014-10-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1097/

--
[...truncated 8263 lines...]
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.87 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.318 sec
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec
Running org.apache.hadoop.io.nativeio.TestNativeIO
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.157 sec
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.196 sec
Running org.apache.hadoop.io.TestMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec
Running org.apache.hadoop.io.TestUTF8
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.275 sec
Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.041 sec
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.19 sec
Running org.apache.hadoop.io.TestSetFile
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.104 sec
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.32 sec
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.277 sec
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.537 sec
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.69 sec
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.518 sec
Running org.apache.hadoop.util.TestJarFinder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.691 sec
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.292 sec
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.488 sec
Running org.apache.hadoop.util.TestStringUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.137 sec
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.259 sec
Running org.apache.hadoop.util.TestAsyncDiskService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.125 sec
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec
Running org.apache.hadoop.util.TestOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.195 sec
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.556 sec
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.142 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.43 sec
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.277 sec
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.362 sec
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.667 sec
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.139 sec
Running 

[jira] [Created] (HADOOP-11182) GraphiteSink emits wrong timestamps

2014-10-09 Thread Sascha Coenen (JIRA)
Sascha Coenen created HADOOP-11182:
--

 Summary: GraphiteSink emits wrong timestamps
 Key: HADOOP-11182
 URL: https://issues.apache.org/jira/browse/HADOOP-11182
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1, 2.5.0
Reporter: Sascha Coenen


The org.apache.hadoop.metrics2.sink.GraphiteSink class emits metrics at the 
configured time period, but the timestamps it writes only change every 128 
seconds, even if the time period in the configuration file is much 
shorter.

This is due to a bug in line 93:

{code:java}
092        // Round the timestamp to second as Graphite accepts it in such format.
093        int timestamp = Math.round(record.timestamp() / 1000.0f);
{code}

The timestamp property is a long that is divided by a float. A float has only 
24 bits of mantissa, so adjacent representable values around current epoch 
seconds (~1.4e9, i.e. between 2^30 and 2^31) are 128 apart; as a result, 
timestamps up to 128 seconds apart collapse to the same value. On top of that, 
the result is then written into an int variable.

One solution would be to divide by 1000.0d, but the best fix would be to avoid 
floating-point arithmetic in the first place. Instead, one could replace the 
line with the following:

{code:java}
   long timestamp = record.timestamp() / 1000L;
{code}
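
To illustrate the precision loss (a standalone demo, not part of GraphiteSink; 
the sample epoch value is arbitrary):

{code:java}
public class GraphiteTimestampDemo {
    public static void main(String[] args) {
        long millis = 1412899200000L;           // an arbitrary epoch time in milliseconds
        float asFloat = millis / 1000.0f;       // the arithmetic currently used on line 93
        // Adjacent float values near ~1.4e9 are 128 apart, hence the 128-second steps.
        System.out.println(Math.ulp(asFloat));  // prints 128.0
        // Plain integer division keeps full second-level resolution.
        System.out.println(millis / 1000L);     // prints 1412899200
    }
}
{code}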



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11183) Memory-based S3AOutputstream

2014-10-09 Thread Thomas Demoor (JIRA)
Thomas Demoor created HADOOP-11183:
--

 Summary: Memory-based S3AOutputstream
 Key: HADOOP-11183
 URL: https://issues.apache.org/jira/browse/HADOOP-11183
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.4.0
Reporter: Thomas Demoor


Currently s3a buffers files on disk(s) before uploading. This JIRA investigates 
adding a memory-based upload implementation.

The motivation is evidently performance: this would be beneficial for users 
with high network bandwidth to S3 (EC2?) or users who run Hadoop directly on 
an S3-compatible object store (FYI: my contributions are made on behalf of 
Amplidata). 
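
As a rough illustration of the idea only (the Uploader interface below is 
hypothetical and not part of s3a):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: accumulate written bytes in memory and hand them to an uploader on
// close(), instead of staging the data to a local disk file first.
public class MemoryBufferedOutputStream extends OutputStream {
  /** Placeholder for whatever performs the actual PUT / multipart upload. */
  public interface Uploader {
    void upload(byte[] data, int length) throws IOException;
  }

  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private final Uploader uploader;

  public MemoryBufferedOutputStream(Uploader uploader) {
    this.uploader = uploader;
  }

  @Override
  public void write(int b) {
    buffer.write(b);
  }

  @Override
  public void close() throws IOException {
    uploader.upload(buffer.toByteArray(), buffer.size());
  }
}
{code}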



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-10809) hadoop-azure: page blob support

2014-10-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reopened HADOOP-10809:


Hi Eric.  I'm reopening this issue.  I'll resolve again when I merge all of the 
WASB patches to a release branch.  Most likely, we'd be targeting 2.7.0 after 
the dust settles on the 2.6.0 release.

 hadoop-azure: page blob support
 ---

 Key: HADOOP-10809
 URL: https://issues.apache.org/jira/browse/HADOOP-10809
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Mike Liddell
Assignee: Eric Hanson
 Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
 HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
 HADOOP-10809.07.patch, HADOOP-10809.08.patch, HADOOP-10809.09.patch, 
 HADOOP-10809.1.patch, HADOOP-10809.10.patch, HADOOP-10809.11.patch


 Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
 Block-blobs are the general purpose kind that support convenient APIs and are 
 the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
 Page-blobs use the same namespace as block-blobs but provide a different 
 low-level feature set.  Most importantly, page-blobs can cope with an 
 effectively infinite number of small accesses whereas block-blobs can only 
 tolerate 50K appends before relatively manual rewriting of the data is 
 necessary.  A simple analogy is that page-blobs are like a regular disk and 
 the basic API is like a low-level device driver.
 See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
 introductory material.
 The primary driving scenario for page-blob support is for HBase transaction 
 log files which require an access pattern of many small writes.  Additional 
 scenarios can also be supported.
 Configuration:
 The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
 determine whether to create a block- or page-blob.  To permit scenarios where 
 application code doesn't know about the details of Azure storage, we would 
 like the configuration to be Aspect-style, i.e. configured by the administrator 
 and transparent to the application. The current solution is to use Hadoop 
 configuration to declare a list of page-blob folders -- the Azure Filesystem for 
 Hadoop will create files in these folders using the page-blob flavor.  The 
 configuration key is fs.azure.page.blob.dir, and its description can be found 
 in AzureNativeFileSystemStore.java.
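 For illustration only (the folder paths below are hypothetical; only the key 
 name comes from the description above), the setting could look like this:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class PageBlobConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical comma-separated list of folders to back with page blobs.
    conf.set("fs.azure.page.blob.dir", "/hbase/WALs,/hbase/oldWALs");
  }
}
{code}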
 Code changes:
 - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
 specialized BlockBlobWrapper vs PageBlobWrapper
 - introduction of PageBlob support (read, write, etc)
 - miscellaneous changes such as umask handling, implementation of 
 createNonRecursive(), flush/hflush/hsync.
 - new unit tests.
 Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
 Mike Liddell.
 Also included in the patch is support for atomic folder rename over the Azure 
 blob store through the Azure file system layer for Hadoop. See the README 
 file for more details, including how to use the fs.azure.atomic.rename.dir 
 configuration variable to control where atomic folder rename logic is 
 applied. By default, folders under /hbase have atomic rename applied, which 
 is needed for correct operation of HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11185) There should be a way to disable a kill -9 during stop

2014-10-09 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-11185:
-

 Summary: There should be a way to disable a kill -9 during stop
 Key: HADOOP-11185
 URL: https://issues.apache.org/jira/browse/HADOOP-11185
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ravi Prakash


E.g. hadoop-common-project/hadoop-common/bin/src/main/bin/hadoop-functions.sh 
calls kill -9 after some time. This might not be the best thing to do for some 
processes (if HA is not enabled). There should be a way to disable this kill 
-9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11186) documentation should talk about hadoop.htrace.spanreceiver.classes, not hadoop.trace.spanreceiver.classes

2014-10-09 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11186:
-

 Summary: documentation should talk about 
hadoop.htrace.spanreceiver.classes, not hadoop.trace.spanreceiver.classes
 Key: HADOOP-11186
 URL: https://issues.apache.org/jira/browse/HADOOP-11186
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


The documentation should talk about hadoop.htrace.spanreceiver.classes, not 
hadoop.trace.spanreceiver.classes (note the h).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11187) NameNode - KMS communication fails after a long period of inactivity

2014-10-09 Thread Arun Suresh (JIRA)
Arun Suresh created HADOOP-11187:


 Summary: NameNode - KMS communication fails after a long period of 
inactivity
 Key: HADOOP-11187
 URL: https://issues.apache.org/jira/browse/HADOOP-11187
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun Suresh


As reported by [~atm]:

The issue is that the authentication token the NN uses to talk to the KMS is 
expiring, AND the signature secret provider in the KMS authentication filter 
discards the old secret after 2x the authentication token validity period.
If the token being supplied is under 1x the validity lifetime then the token 
will authenticate just fine. If the token being supplied is between 1x-2x the 
validity lifetime, then the token can be validated but it will be expired, so a 
401 will be returned to the client and it will get a new token. But if the 
token being supplied is greater than 2x the validity lifetime, then the KMS 
authentication filter will not even be able to validate the token, and will 
return a 403, which will cause the client to not retry authentication to the 
KMS.

The KMSClientProvider needs to be modified to retry authentication in this 
case as well.
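
A rough illustration of the client-side decision (a hypothetical helper, not 
the actual KMSClientProvider code):

{code:java}
import java.net.HttpURLConnection;

public class KmsAuthRetrySketch {
  // Today the client only re-authenticates on 401 (token validated but
  // expired). The proposal is to also re-authenticate on 403, where the
  // filter has already discarded the signing secret and cannot validate
  // the old token at all.
  static boolean shouldReauthenticate(int httpStatus) {
    return httpStatus == HttpURLConnection.HTTP_UNAUTHORIZED  // 401
        || httpStatus == HttpURLConnection.HTTP_FORBIDDEN;    // 403
  }
}
{code}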



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)