[jira] [Created] (HDFS-5207) In BlockPlacementPolicy, the type of writer in parameters of chooseTarget() should be updated from DatanodeDescriptor to Node

2013-09-13 Thread Junping Du (JIRA)
Junping Du created HDFS-5207:


 Summary: In BlockPlacementPolicy, the type of writer in parameters 
of chooseTarget() should be updated from DatanodeDescriptor to Node
 Key: HDFS-5207
 URL: https://issues.apache.org/jira/browse/HDFS-5207
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Junping Du
Assignee: Junping Du


We should change chooseTarget(..., DatanodeDescriptor writer, ...) to 
chooseTarget(..., Node writer, ...), since the only property of the writer that 
matters is its network location relative to other nodes, so the more generic 
type is a better fit. It also covers the case where the client node is not a 
DataNode.
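
A minimal sketch of the intended signature change (the parameter list is 
abbreviated and the names are assumptions, not the exact branch code):

{code}
import java.util.List;
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
import org.apache.hadoop.net.Node;

public abstract class BlockPlacementPolicy {
  // The writer is only consulted for network distance, so the generic
  // Node type suffices; a client that is not a DataNode can be resolved
  // through the topology and passed in directly.
  public abstract DatanodeDescriptor[] chooseTarget(String srcPath,
      int numOfReplicas,
      Node writer,                       // was: DatanodeDescriptor writer
      List<DatanodeDescriptor> chosenNodes,
      long blocksize);
}
{code}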



[jira] [Created] (HDFS-5206) Some unreferenced constants in DFSConfigKeys should be cleaned up

2013-09-13 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5206:


 Summary: Some unreferenced constants in DFSConfigKeys should be 
cleaned up
 Key: HDFS-5206
 URL: https://issues.apache.org/jira/browse/HDFS-5206
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


There are some constants in DFSConfigKeys.java that are no longer referenced 
from any code.
The unreferenced constants are listed below.

DFS_STREAM_BUFFER_SIZE_KEY
DFS_STREAM_BUFFER_SIZE_DEFAULT
DFS_NAMENODE_SAFEMODE_EXTENSION_DEFAULT
DFS_NAMENODE_REPLICATION_CONSIDERLOAD_DEFAULT
DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT
DFS_NAMENODE_HOSTS_KEY
DFS_NAMENODE_HOSTS_EXCLUDE_KEY
DFS_HTTPS_ENABLE_DEFAULT
DFS_DATANODE_HTTPS_DEFAULT_PORT
DFS_DF_INTERVAL_DEFAULT
DFS_WEB_UGI_KEY

I think we should clean up DFSConfigKeys.java.
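
If backward compatibility is a concern, one conventional alternative to 
outright deletion is deprecating first; a sketch (the key string and default 
shown are illustrative and should be checked against the actual file):

{code}
public class DFSConfigKeys {
  /** @deprecated Unreferenced; kept one release for compatibility. */
  @Deprecated
  public static final String DFS_STREAM_BUFFER_SIZE_KEY =
      "dfs.stream-buffer-size";
  /** @deprecated Unreferenced; kept one release for compatibility. */
  @Deprecated
  public static final int DFS_STREAM_BUFFER_SIZE_DEFAULT = 4096;
}
{code}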




[jira] [Resolved] (HDFS-5053) NameNode should invoke DataNode APIs to coordinate caching

2013-09-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-5053.
---

   Resolution: Fixed
Fix Version/s: HDFS-4949

> NameNode should invoke DataNode APIs to coordinate caching
> --
>
> Key: HDFS-5053
> URL: https://issues.apache.org/jira/browse/HDFS-5053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Fix For: HDFS-4949
>
> Attachments: hdfs-5053-1.patch, hdfs-5053-2.patch, hdfs-5053-3.patch
>
>
> The NameNode should invoke the DataNode APIs to coordinate caching.



[jira] [Resolved] (HDFS-5201) NativeIO: consolidate getrlimit into NativeIO#getMemlockLimit

2013-09-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-5201.


Resolution: Fixed

> NativeIO: consolidate getrlimit into NativeIO#getMemlockLimit
> -
>
> Key: HDFS-5201
> URL: https://issues.apache.org/jira/browse/HDFS-5201
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-4949
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: HDFS-4949
>
> Attachments: HDFS-5201-caching.001.patch, HDFS-5201-caching.002.patch
>
>
> Let's consolidate {{NativeIO#POSIX#getrlimit}} into 
> {{NativeIO#getMemlockLimit}}.  This avoids a few issues with the current 
> function: it is not available on Windows, it relies on Linux-specific 
> constants, and it returns a string that has to be parsed, rather than a 
> long.



[jira] [Created] (HDFS-5205) Include NFS jars in the maven assembly

2013-09-13 Thread Mark Grover (JIRA)
Mark Grover created HDFS-5205:
-

 Summary: Include NFS jars in the maven assembly
 Key: HDFS-5205
 URL: https://issues.apache.org/jira/browse/HDFS-5205
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Mark Grover


The NFS jars (hadoop-hdfs-nfs.jar from hadoop-hdfs-project and hadoop-nfs.jar 
from hadoop-common-project) are currently not part of the maven assembly that 
is built when the distribution flag is specified for the mvn build. We should 
make these jars (and their dependencies) part of the assembly, as sketched 
below.
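
A hedged sketch of the kind of entry this needs in the hadoop-dist assembly 
descriptor (the exact descriptor file and output directory are assumptions):

{code:xml}
<!-- Pull the NFS modules and their dependencies into the distribution. -->
<moduleSet>
  <includes>
    <include>org.apache.hadoop:hadoop-nfs</include>
    <include>org.apache.hadoop:hadoop-hdfs-nfs</include>
  </includes>
  <binaries>
    <outputDirectory>share/hadoop/hdfs</outputDirectory>
    <includeDependencies>true</includeDependencies>
  </binaries>
</moduleSet>
{code}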



[jira] [Resolved] (HDFS-4952) dfs -ls hftp:// fails on secure hadoop2 cluster

2013-09-13 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy resolved HDFS-4952.
-

Resolution: Fixed

Duplicate of HDFS-3983

> dfs -ls hftp:// fails on secure hadoop2 cluster
> ---
>
> Key: HDFS-4952
> URL: https://issues.apache.org/jira/browse/HDFS-4952
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yesha Vora
>Priority: Blocker
>
> Running: hadoop dfs -ls hftp://:/A
> WARN fs.FileSystem: Couldn't connect to http://:50470, assuming 
> security is disabled
> ls: Security enabled but user not authenticated by filter



[jira] [Created] (HDFS-5204) Stub implementation of getrlimit for Windows.

2013-09-13 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5204:
---

 Summary: Stub implementation of getrlimit for Windows.
 Key: HDFS-5204
 URL: https://issues.apache.org/jira/browse/HDFS-5204
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-4949
Reporter: Chris Nauroth


The HDFS-4949 feature branch adds a JNI wrapper over the {{getrlimit}} 
function.  This function does not exist on Windows.  We need to provide a stub 
implementation so that the codebase can compile on Windows.
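
Purely as an illustration of the intended behavior (the stub itself lands in 
the native code), a sketch of a guarded call site, assuming the branch's 
{{NativeIO.POSIX.getrlimit()}} returns a String as HDFS-5201 describes:

{code}
import org.apache.hadoop.io.nativeio.NativeIO;
import org.apache.hadoop.util.Shell;

public final class MemlockProbe {
  /** On Windows, where getrlimit has no analogue, report "unknown". */
  public static String getrlimitOrStub() {
    if (Shell.WINDOWS) {
      return null; // stubbed: no RLIMIT_MEMLOCK on Windows
    }
    return NativeIO.POSIX.getrlimit(); // POSIX-only JNI wrapper
  }
}
{code}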



[jira] [Created] (HDFS-5203) Concurrent clients that add a cache directive on the same path may prematurely uncache from each other.

2013-09-13 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5203:
---

 Summary: Concurrent clients that add a cache directive on the same 
path may prematurely uncache from each other.
 Key: HDFS-5203
 URL: https://issues.apache.org/jira/browse/HDFS-5203
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Chris Nauroth


When a client adds a cache directive, we assign it a unique ID and return that 
ID to the client.  If multiple clients add a cache directive for the same path, 
we return the same ID to each of them.  If one client then removes the cache 
entry for that ID, it is removed for all clients.  When this change becomes 
visible in subsequent cache reports, the datanodes may {{munlock}} the block 
before the other clients are done with it.
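
A minimal sketch of the direction the report implies: give every add its own 
ID rather than sharing one per path, so one client's remove cannot uncache a 
block another client still needs (all names here are hypothetical, not the 
branch's API):

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

class CacheDirectiveIds {
  private final AtomicLong nextId = new AtomicLong(1);
  // id -> path; two adds of the same path create two entries.
  private final Map<Long, String> directives = new HashMap<Long, String>();

  synchronized long add(String path) {
    long id = nextId.getAndIncrement(); // never reused, even for equal paths
    directives.put(id, path);
    return id;
  }

  synchronized boolean remove(long id) {
    return directives.remove(id) != null;
  }

  /** A block stays cached while any directive still covers its path. */
  synchronized boolean isStillNeeded(String path) {
    return directives.containsValue(path);
  }
}
{code}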



[jira] [Resolved] (HDFS-5195) Prevent passing null pointer to mlock and munlock.

2013-09-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5195.
-

   Resolution: Fixed
Fix Version/s: HDFS-4949
 Hadoop Flags: Reviewed

I committed this to the HDFS-4949 branch.  Thanks for the review, Colin.

> Prevent passing null pointer to mlock and munlock.
> --
>
> Key: HDFS-5195
> URL: https://issues.apache.org/jira/browse/HDFS-5195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: HDFS-4949
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: HDFS-4949
>
> Attachments: HDFS-5195.1.patch, HDFS-5195.2.patch
>
>
> According to the JNI documentation, it is optional for a JVM to support the 
> {{GetDirectBufferAddress}} function.  If it is unsupported, the function 
> returns null.  This is probably very rare, but let's be defensive by checking 
> the return value for null and throwing an exception instead of passing null 
> down to {{mlock}} and {{munlock}}.
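
The check itself belongs in the native wrapper, but the defensive shape is 
easy to mirror at the Java boundary; a hedged sketch (the wrapper name is 
hypothetical):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

final class MlockGuard {
  /** Hypothetical wrapper around the native mlock JNI call. */
  static void mlockChecked(ByteBuffer buffer, long len) throws IOException {
    // GetDirectBufferAddress is optional for a JVM to support and returns
    // NULL for non-direct buffers, so reject what we can detect up front
    // instead of passing a null address down to mlock/munlock.
    if (buffer == null || !buffer.isDirect()) {
      throw new IOException(
          "mlock requires a direct buffer with a resolvable native address");
    }
    // NativeIO.POSIX.mlock(buffer, len);  // native call would go here
  }
}
{code}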



[jira] [Created] (HDFS-5202) umbrella JIRA for Windows support in HDFS caching

2013-09-13 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5202:
--

 Summary: umbrella JIRA for Windows support in HDFS caching
 Key: HDFS-5202
 URL: https://issues.apache.org/jira/browse/HDFS-5202
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Colin Patrick McCabe


This is an umbrella JIRA for adding Windows support for HDFS caching.



[jira] [Created] (HDFS-5201) NativeIO: consolidate getrlimit into NativeIO#getMemlockLimit

2013-09-13 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5201:
--

 Summary: NativeIO: consolidate getrlimit into 
NativeIO#getMemlockLimit
 Key: HDFS-5201
 URL: https://issues.apache.org/jira/browse/HDFS-5201
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Let's consolidate {{NativeIO#POSIX#getrlimit}} into 
{{NativeIO#getMemlockLimit}}.  This avoids a few issues with the current 
function: it is not available on Windows, it relies on Linux-specific 
constants, and it returns a string that has to be parsed, rather than a long.
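
A hedged sketch of the consolidated shape (the native method name and the 
Windows fallback value are assumptions based on the description, not the 
committed patch):

{code}
import org.apache.hadoop.util.Shell;

public class NativeIO {
  // Native helper; on POSIX it wraps getrlimit(RLIMIT_MEMLOCK, ...).
  private static native long getMemlockLimit0();

  /**
   * RLIMIT_MEMLOCK soft limit in bytes, returned directly as a long
   * (no string parsing); 0 where the limit is unavailable, e.g. Windows.
   */
  public static long getMemlockLimit() {
    return Shell.WINDOWS ? 0 : getMemlockLimit0();
  }
}
{code}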



[jira] [Resolved] (HDFS-5198) NameNodeRpcServer must not send back DNA_FINALIZE in reply to a cache report

2013-09-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-5198.


Resolution: Fixed

committed to branch

> NameNodeRpcServer must not send back DNA_FINALIZE in reply to a cache report
> 
>
> Key: HDFS-5198
> URL: https://issues.apache.org/jira/browse/HDFS-5198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-4949
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5198-caching.001.patch
>
>
> NameNodeRpcServer must not send back a DNA_FINALIZE command in reply to a 
> cache report.  DNA_FINALIZE instructs the DN to complete an upgrade process, 
> which is not something we need or want when handling cache reports.  Thanks 
> to Chris Nauroth for spotting this error.



[jira] [Created] (HDFS-5200) Datanode should have compatibility mode for sending combined block reports

2013-09-13 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5200:
---

 Summary: Datanode should have compatibility mode for sending 
combined block reports
 Key: HDFS-5200
 URL: https://issues.apache.org/jira/browse/HDFS-5200
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal






[jira] [Created] (HDFS-5199) Add more debug trace for NFS READ and WRITE

2013-09-13 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5199:


 Summary: Add more debug trace for NFS READ and WRITE
 Key: HDFS-5199
 URL: https://issues.apache.org/jira/browse/HDFS-5199
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li


Before a more sophisticated utility is added, simple traces marking the start 
and end of serving a request can help debug errors and collect statistics.



[jira] [Created] (HDFS-5198) NameNodeRpcServer must not send back DNA_FINALIZE in reply to a cache report

2013-09-13 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5198:
--

 Summary: NameNodeRpcServer must not send back DNA_FINALIZE in 
reply to a cache report
 Key: HDFS-5198
 URL: https://issues.apache.org/jira/browse/HDFS-5198
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


NameNodeRpcServer must not send back a DNA_FINALIZE command in reply to a cache 
report.  DNA_FINALIZE instructs the DN to complete an upgrade process, which is 
not something we need or want when handling cache reports.  Thanks to Chris 
Nauroth for spotting this error.
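
A hedged sketch of the corrected server-side behavior (the handler signature 
follows the branch's DatanodeProtocol conventions but may not match the 
committed patch exactly):

{code}
// NameNodeRpcServer (sketch): a cache report yields either no command or a
// cache-specific command; DNA_FINALIZE is reserved for upgrade finalization.
public DatanodeCommand cacheReport(DatanodeRegistration nodeReg,
    String poolId, List<Long> blockIds) throws IOException {
  namesystem.getCacheManager().processCacheReport(nodeReg, blockIds);
  return null; // nothing to instruct the DN to do in response
}
{code}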



Re: Measuring bandwidth in 2.1.x

2013-09-13 Thread hilfi alkaff
Yes, I'm aware of the tool. I was thinking of using this for real-time
network bandwidth management. vnstat, for instance, would not be able to
distinguish the bandwidth of multiple map tasks running on the same
machine.
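
For reference, a minimal sketch of the machine-level sampling vnstat
effectively does (a Linux /proc/net/dev layout and the interface name eth0
are assumptions), which also shows why per-task attribution is impossible at
this level:

{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class IfaceBandwidth {
  /** Total bytes received + transmitted on one interface. */
  static long totalBytes(String iface) throws IOException {
    BufferedReader in = new BufferedReader(new FileReader("/proc/net/dev"));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        String t = line.trim();
        if (t.startsWith(iface + ":")) {
          String[] f = t.substring(iface.length() + 1).trim().split("\\s+");
          return Long.parseLong(f[0]) + Long.parseLong(f[8]); // rx + tx bytes
        }
      }
    } finally {
      in.close();
    }
    throw new IOException("interface not found: " + iface);
  }

  public static void main(String[] args) throws Exception {
    long before = totalBytes("eth0");
    Thread.sleep(1000);
    System.out.println("eth0: " + (totalBytes("eth0") - before) + " bytes/s");
    // One counter per NIC: traffic from concurrent map tasks is merged here.
  }
}
{code}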


On Fri, Sep 13, 2013 at 12:26 PM, Chris Embree  wrote:

> vnstat should be able to do this.  It has a "live" mode or can
> generate  hourly reports much like sar.  Google for vnstat to find it.
>
> On 9/13/13, hilfi alkaff  wrote:
> > Hi,
> >
> > Is there any way that we could measure the actual bandwidth used by the
> > nodes when running MapReduce jobs? (For example, when the mappers are
> > sending data to the reducers during the shuffle phase.)
> >
> > Thanks
> >
> > --
> > ~Hilfi Alkaff~
> >
>



-- 
~Hilfi Alkaff~


Build failed in Jenkins: Hadoop-Hdfs-trunk #1521

2013-09-13 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-9350. Hadoop not building against Java7 on OSX

[jing9] HDFS-5192. NameNode may fail to start when 
dfs.client.test.drop.namenode.response.number is set. Contributed by Jing Zhao.

[brandonli] HDFS-5067 Support symlink operations in NFS gateway. Contributed by 
Brandon Li

[wang] HADOOP-9958. Add old constructor back to DelegationTokenInformation to 
unbreak downstream builds. (Andrew Wang)

[cnauroth] YARN-1078. TestNodeManagerResync, TestNodeManagerShutdown, and 
TestNodeStatusUpdater fail on Windows. Contributed by Chuan Liu.

[devaraj] MAPREDUCE-5164. mapred job and queue commands omit 
HADOOP_CLIENT_OPTS. Contributed by Nemon Lou.

--
[...truncated 11243 lines...]
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.164 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 360.166 sec - 
in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.266 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.038 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.572 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.132 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.928 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.217 sec - in 
org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.191 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.448 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.194 sec - 
in org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestFileInputStreamCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.253 sec - in 
org.apache.hadoop.hdfs.TestFileInputStreamCache
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.963 sec - in 
org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.5 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.967 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.06 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.383 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.867 sec - in 
org.apache.hadoop.hdfs.TestQuota
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.626 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.985 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.216 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.235 sec - 
in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.386 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.165 sec - in 
org.apach

Hadoop-Hdfs-trunk - Build # 1521 - Failure

2013-09-13 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1521/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11436 lines...]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[1:56:02.215s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [5.888s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:56:09.121s
[INFO] Finished at: Fri Sep 13 13:30:23 UTC 2013
[INFO] Final Memory: 40M/199M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-5067
Updating HDFS-5192
Updating HADOOP-9958
Updating MAPREDUCE-5164
Updating YARN-1078
Updating HADOOP-9350
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Created] (HDFS-5197) Document dfs.cachereport.intervalMsec in hdfs-default.xml.

2013-09-13 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5197:
---

 Summary: Document dfs.cachereport.intervalMsec in hdfs-default.xml.
 Key: HDFS-5197
 URL: https://issues.apache.org/jira/browse/HDFS-5197
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, documentation
Affects Versions: HDFS-4949
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HDFS-5197.1.patch

We can add a description of {{dfs.cachereport.intervalMsec}} to 
hdfs-default.xml.
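
A sketch of the property entry (the 10000 ms default shown is an assumption 
to be verified against DFSConfigKeys):

{code:xml}
<property>
  <name>dfs.cachereport.intervalMsec</name>
  <value>10000</value>
  <description>
    Determines how often, in milliseconds, the DataNode sends a full
    report of its cached blocks to the NameNode.
  </description>
</property>
{code}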



Hadoop-Hdfs-0.23-Build - Build # 729 - Still Failing

2013-09-13 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/729/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7870 lines...]
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3313,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3319,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #729

2013-09-13 Thread Apache Jenkins Server
See 

--
[...truncated 7677 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[270,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[281,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10533,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10544,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8357,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8368,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12641,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12652,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9741,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9752,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1781,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1792,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5338,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5349,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6290,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6301,30]
 cannot find sym

Re: Measuring bandwidth in 2.1.x

2013-09-13 Thread Chris Embree
vnstat should be able to do this.  It has a "live" mode or can
generate  hourly reports much like sar.  Google for vnstat to find it.

On 9/13/13, hilfi alkaff  wrote:
> Hi,
>
> Is there any way that we could measure the actual bandwidth used by the
> nodes when running MapReduce jobs? (For example, when the mappers are
> sending data to the reducers during the shuffle phase.)
>
> Thanks
>
> --
> ~Hilfi Alkaff~
>


Measuring bandwidth in 2.1.x

2013-09-13 Thread hilfi alkaff
Hi,

Is there any way that we could measure the actual bandwidth used by the
nodes when running MapReduce jobs? (For example, when the mappers are
sending data to the reducers during the shuffle phase.)

Thanks

-- 
~Hilfi Alkaff~


[jira] [Created] (HDFS-5196) Provide more snapshot information in WebUI

2013-09-13 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5196:


 Summary: Provide more snapshot information in WebUI
 Key: HDFS-5196
 URL: https://issues.apache.org/jira/browse/HDFS-5196
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Haohui Mai
Priority: Minor


The WebUI should provide more detailed information about snapshots, such as all 
snapshottable directories and the corresponding numbers of snapshots (as 
suggested in HDFS-4096).
