[jira] [Created] (HDFS-3316) The tar ball doesn't include jsvc any more

2012-04-24 Thread Owen O'Malley (JIRA)
Owen O'Malley created HDFS-3316:
---

 Summary: The tar ball doesn't include jsvc any more
 Key: HDFS-3316
 URL: https://issues.apache.org/jira/browse/HDFS-3316
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 1.0.3


The current release tarballs on the 1.0 branch don't include jsvc by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




test HDFS

2012-04-24 Thread chengyao
Hi.
I want to test HDFS. What tools should I choose? Can you tell me? Thank you.




Cheng Yao
   Software Technology Evaluation Group
   Inspur (Beijing) Electronic Information Industry Co., Ltd.

   Address: Building 5, Inspur Group campus, 1036 Inspur Road, High-tech Zone, Jinan, Shandong Province
   Postal code: 250101
   Tel: 0531-85106427
   mail: cheng...@inspur.com

RE: test HDFS

2012-04-24 Thread Joe Bounour
Hello,
Have you looked at the TestDFSIO benchmark?
http://answers.oreilly.com/topic/460-how-to-benchmark-a-hadoop-cluster/


-Original Message-
From: chengyao [mailto:cheng...@inspur.com] 
Sent: Monday, April 23, 2012 11:22 PM
To: hdfs-dev
Subject: test HDFS

Hi.
I want to test HDFS. What tools should I choose? Can you tell me? Thank you.




Cheng Yao
   Software Technology Evaluation Group
   Inspur (Beijing) Electronic Information Industry Co., Ltd.

   Address: Building 5, Inspur Group campus, 1036 Inspur Road, High-tech Zone, Jinan, Shandong Province
   Postal code: 250101
   Tel: 0531-85106427
   mail: cheng...@inspur.com


RE: test HDFS

2012-04-24 Thread Amith D K
You can check:

TestDFSIO,
TeraGen, TeraSort, TeraValidate,
NNBench,
MRBench (for MapReduce).

These are all very well-known tools.

Run the test jar with no arguments to list the valid programs (output below);
a typical invocation afterwards would be, e.g.,
"bin/hadoop jar hadoop-*test*.jar TestDFSIO -write -nrFiles 10 -fileSize 1000"
(file size in MB):

# change to your Hadoop installation directory;
# if you followed my Hadoop tutorials, this is /usr/local/hadoop
$ cd /usr/local/hadoop
$ bin/hadoop jar hadoop-*test*.jar
An example program must be given as the first argument.
Valid program names are:
  DFSCIOTest: Distributed i/o benchmark of libhdfs.
  DistributedFSCheck: Distributed checkup of the file system consistency.
  MRReliabilityTest: A program that tests the reliability of the MR framework 
by injecting faults/failures
  TestDFSIO: Distributed i/o benchmark.
  dfsthroughput: measure hdfs throughput
  filebench: Benchmark SequenceFile(Input|Output)Format (block,record 
compressed and uncompressed), Text(Input|Output)Format (compressed and 
uncompressed)
  loadgen: Generic map/reduce load generator
  mapredtest: A map/reduce test check.
  mrbench: A map/reduce benchmark that can create many small jobs
  nnbench: A benchmark that stresses the namenode.
  testarrayfile: A test for flat files of binary key/value pairs.
  testbigmapoutput: A map/reduce program that works on a very big 
non-splittable file and does identity map/reduce
  testfilesystem: A test for FileSystem read/write.
  testipc: A test for ipc.
  testmapredsort: A map/reduce program that validates the map-reduce 
framework's sort.
  testrpc: A test for rpc.
  testsequencefile: A test for flat files of binary key value pairs.
  testsequencefileinputformat: A test for sequence file input format.
  testsetfile: A test for flat files of binary key/value pairs.
  testtextinputformat: A test for text input format.
  threadedmapbench: A map/reduce benchmark that compares the performance of 
maps with multiple spills over maps with 1 spill


Thanks and Regards
Amith

From: Joe Bounour [jboun...@ddn.com]
Sent: Tuesday, April 24, 2012 2:30 PM
To: hdfs-dev@hadoop.apache.org; chengyao
Subject: RE: test HDFS

Hello,
Have you looked at the TestDFSIO benchmark?
http://answers.oreilly.com/topic/460-how-to-benchmark-a-hadoop-cluster/


-Original Message-
From: chengyao [mailto:cheng...@inspur.com]
Sent: Monday, April 23, 2012 11:22 PM
To: hdfs-dev
Subject: test HDFS

Hi.
I want to test HDFS. What tools should I choose? Can you tell me? Thank you.




Cheng Yao
   Software Technology Evaluation Group
   Inspur (Beijing) Electronic Information Industry Co., Ltd.

   Address: Building 5, Inspur Group campus, 1036 Inspur Road, High-tech Zone, Jinan, Shandong Province
   Postal code: 250101
   Tel: 0531-85106427
   mail: cheng...@inspur.com

How to Implement the WritableRpcEngine and the code procedure

2012-04-24 Thread Eric Liang
Hello,

There are two different kinds of RPC engine: RPC_WRITABLE and
RPC_PROTOCOL_BUFFER.
On the NameNode side, the class NameNodeRpcServer is used to handle RPC
calls from clients and DataNodes.

But this is what I see in the constructor:

public NameNodeRpcServer(Configuration conf, NameNode nn) throws IOException {
    ...
    this.clientRpcServer = RPC.getServer(
        org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB.class,
        clientNNPbService, socAddr.getHostName(),
        socAddr.getPort(), handlerCount, false, conf,
        namesystem.getDelegationTokenSecretManager());
    ...
}

Inside this constructor only the protocolPB implementation is passed to the
RPC.Server; I cannot see any WritableRpcEngine implementation.

And when I trace into the class WritableRpcInvoker, its invoke function is
where the actual call is made; I mean this lookup:

    protocolImpl =
        server.getProtocolImplMap(RpcKind.RPC_WRITABLE).get(pv);

I don't know which protocolImpl is registered for RPC_WRITABLE; in the
constructor only the RPC_PROTOCOL_BUFFER implementation is created. As far
as I understand, registering a Writable protocol would look something like
the sketch below, but I cannot find any such call.
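
A minimal sketch, under assumptions: it relies on the RPC.setProtocolEngine
and RPC.Server#addProtocol APIs of this era, and MyWritableProtocol /
myWritableProtocolImpl are hypothetical placeholders, not real Hadoop classes.

// Hedged sketch only -- not actual NameNodeRpcServer code.
// MyWritableProtocol / myWritableProtocolImpl are hypothetical placeholders.
RPC.setProtocolEngine(conf, MyWritableProtocol.class, WritableRpcEngine.class);
this.clientRpcServer.addProtocol(RpcKind.RPC_WRITABLE,
    MyWritableProtocol.class, myWritableProtocolImpl);
// (The package/nesting of the RpcKind enum differs across versions.)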

Can anyone give me some suggestions or advice?

Thank you
Eric


Hadoop-Hdfs-trunk - Build # 1024 - Still Failing

2012-04-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1024/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11094 lines...]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-project ---
[INFO] Wrote classpath file 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/classes/mrapp-generated-classpath'.
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [5:29.092s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [37.156s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . SUCCESS [11.121s]
[INFO] Apache Hadoop HDFS Fuse ... SUCCESS [0.039s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.033s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:18.264s
[INFO] Finished at: Tue Apr 24 11:41:03 UTC 2012
[INFO] Final Memory: 87M/740M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3812
Updating MAPREDUCE-4133
Updating MAPREDUCE-4141
Updating MAPREDUCE-4190
Updating HADOOP-8284
Updating HADOOP-8285
Updating HDFS-3312
Updating HADOOP-8152
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #1024

2012-04-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1024/changes

Changes:

[eli] HADOOP-8152. Expand public APIs for security library classes. Contributed 
by Aaron T. Myers

[tucu] MAPREDUCE-4141. clover integration broken, also mapreduce poms are 
pulling in clover as a dependency. (phunt via tucu)

[tucu] HADOOP-8284. clover integration broken, also mapreduce poms are pulling 
in clover as a dependency. (phunt via tucu)

[szetszwo] HDFS-3312. In HftpFileSystem, the namenode URI is non-secure but the 
delegation tokens have to use secure URI.  Contributed by Daryn Sharp

[bobby] MAPREDUCE-4133. MR over viewfs is broken (John George via bobby)

[bobby] MAPREDUCE-4190. Improve web UI for task attempts userlog link (Tom 
Graves via bobby)

[sradia] HADOOP-8285 Use ProtoBuf for RpcPayLoadHeader (sanjay radia)

[bobby] MAPREDUCE-3812. Lower default allocation sizes, fix allocation 
configurations and document them (Harsh J via bobby)

--
[...truncated 10901 lines...]
[...javadoc "Generating .../class-use/*.html" lines for the hadoop-hdfs-httpfs
HttpFSParams, HttpFSServer and related classes omitted; console output
truncated here...]

[jira] [Created] (HDFS-3317) Balancer error text changed again, not sending Auth, is this expected?

2012-04-24 Thread patrick white (JIRA)
patrick white created HDFS-3317:
---

 Summary: Balancer error text changed again, not sending Auth, is 
this expected?
 Key: HDFS-3317
 URL: https://issues.apache.org/jira/browse/HDFS-3317
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer
Affects Versions: 1.0.2
 Environment: QE
Reporter: patrick white
Priority: Minor


We have a negative test case for Balancer which looks for the report of a
not-authorized user attempting to run Balancer. This recently failed because
the return string changed: the authentication method (auth:KERBEROS) is no
longer sent. The same case failed last December because this auth substring
was added, in 205. The previous string looked like this:

'Received an IO exception: User USER (auth:KERBEROS) is not authorized for 
protocol interface'

And now it looks like this:

'Received an IO exception: User USER is not authorized for protocol interface'

While a trivial matter for the product, this fails our test automation and
requires a follow-up to confirm whether the change is actually expected, to
update the tests, and so forth. That makes the change non-trivial for
validation and test maintenance purposes, so it would be very good to have a
definitive understanding of what to expect going forward.
Do we expect additional changes here?
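
In the meantime, a hedged sketch (hypothetical test-helper code, not our
actual harness) of a check that accepts both message forms by making the
"(auth:METHOD) " fragment optional:

import java.util.regex.Pattern;

public class BalancerAuthMessageCheck {
    // Matches the not-authorized message with or without "(auth:...)".
    private static final Pattern NOT_AUTHORIZED = Pattern.compile(
        "User \\S+ (?:\\(auth:\\w+\\) )?is not authorized for protocol interface");

    public static void main(String[] args) {
        String with = "Received an IO exception: User USER (auth:KERBEROS)"
            + " is not authorized for protocol interface";
        String without = "Received an IO exception: User USER"
            + " is not authorized for protocol interface";
        System.out.println(NOT_AUTHORIZED.matcher(with).find());    // true
        System.out.println(NOT_AUTHORIZED.matcher(without).find()); // true
    }
}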







[jira] [Created] (HDFS-3318) Hftp hangs on transfers >2GB

2012-04-24 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-3318:
-

 Summary: Hftp hangs on transfers >2GB
 Key: HDFS-3318
 URL: https://issues.apache.org/jira/browse/HDFS-3318
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker


Hftp transfers >2GB hang after the transfer is complete.  The problem appears 
to be caused by java internally using an int for the content length.  When it 
overflows 2GB, it won't check the bounds of the reads on the input stream.  The 
client continues reading after all data is received, and the client blocks 
until the server times out the connection -- _many_ minutes later.  In 
conjunction with hftp timeouts, all transfers >2GB fail with a read timeout.
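
A minimal, self-contained illustration (not the actual Hftp code) of the
overflow described above: a content length just past 2GB wraps negative when
tracked as an int, so length-based bounds checks on the stream no longer
behave.

public class ContentLengthOverflow {
    public static void main(String[] args) {
        long contentLength = 2L * 1024 * 1024 * 1024 + 1; // 2GB + 1 byte
        int tracked = (int) contentLength; // wraps to a negative value
        System.out.println("actual=" + contentLength + " tracked=" + tracked);
        // prints: actual=2147483649 tracked=-2147483647
        // A negative "length" effectively disables any read-bounds logic
        // keyed off the int value, matching the hang described above.
    }
}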





[jira] [Resolved] (HDFS-2999) DN metrics should include per-disk utilization

2012-04-24 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-2999.
--

  Resolution: Won't Fix
Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)

Operators can monitor this using more direct means.
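
For instance (a hedged sketch, not DataNode code; the data-dir paths below
are hypothetical), per-directory utilization can be read directly with plain
java.io.File space queries:

import java.io.File;

public class PerDiskUtilization {
    public static void main(String[] args) {
        // Hypothetical dfs.data.dir entries; substitute your own.
        String[] dataDirs = { "/data/1/dfs/dn", "/data/2/dfs/dn" };
        for (String dir : dataDirs) {
            File f = new File(dir);
            long total = f.getTotalSpace();          // 0 if path is absent
            long used = total - f.getUsableSpace();
            double pct = total == 0 ? 0.0 : 100.0 * used / total;
            System.out.printf("%s: %.1f%% used%n", dir, pct);
        }
    }
}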

 DN metrics should include per-disk utilization
 --

 Key: HDFS-2999
 URL: https://issues.apache.org/jira/browse/HDFS-2999
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1
Reporter: Aaron T. Myers

 We should have per-dfs.data.dir metrics in the DN's metrics report.





[jira] [Created] (HDFS-3319) DFSOutputStream should not start a thread in constructors

2012-04-24 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-3319:


 Summary: DFSOutputStream should not start a thread in constructors
 Key: HDFS-3319
 URL: https://issues.apache.org/jira/browse/HDFS-3319
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


DFSOutputStream starts the DataStreamer thread in its constructors.  This is a 
known bad programming practice.  It will generate findbugs warnings if the 
class is not final.
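
The usual fix, sketched below under assumptions (hypothetical names, not an
actual HDFS patch), is a private constructor plus a static factory that
starts the thread only after construction completes:

class StreamWithStreamer {
    private final Thread streamer;

    private StreamWithStreamer() {
        // Construction stays passive: create the thread but do not start it,
        // so it can never observe a partially constructed object.
        streamer = new Thread(new Runnable() {
            public void run() { /* drain queued packets */ }
        });
    }

    static StreamWithStreamer newInstance() {
        StreamWithStreamer s = new StreamWithStreamer();
        s.streamer.start(); // safe: construction has finished
        return s;
    }
}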
