[jira] [Created] (HADOOP-8808) Update FsShell documentation to mention deprecation of some of the commands, and mention alternatives

2012-09-13 Thread Hemanth Yamijala (JIRA)
Hemanth Yamijala created HADOOP-8808:


 Summary: Update FsShell documentation to mention deprecation of 
some of the commands, and mention alternatives
 Key: HADOOP-8808
 URL: https://issues.apache.org/jira/browse/HADOOP-8808
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala


In HADOOP-7286, we deprecated three commands, dus, lsr and rmr, in favour of 
du -s, ls -R and rm -r respectively. The FsShell documentation should be updated 
to mention these deprecations so that users can start switching; a quick mapping 
is shown below. Also, there are places where we refer to the deprecated commands 
as alternatives; these should be changed as well.
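
For reference, the replacements look like this (the paths are just illustrative):

{noformat}
hadoop fs -dus /some/path   ->   hadoop fs -du -s /some/path
hadoop fs -lsr /some/path   ->   hadoop fs -ls -R /some/path
hadoop fs -rmr /some/path   ->   hadoop fs -rm -r /some/path
{noformat}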



[jira] [Created] (HADOOP-8807) Update README and website to reflect HADOOP-8662

2012-09-13 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8807:
---

 Summary: Update README and website to reflect HADOOP-8662
 Key: HADOOP-8807
 URL: https://issues.apache.org/jira/browse/HADOOP-8807
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Eli Collins


HADOOP-8662 removed the various tabs from the website. Our top-level README.txt 
and the generated docs still refer to them (e.g. hadoop.apache.org/core, /hdfs, 
etc). Let's fix that.



[jira] [Created] (HADOOP-8806) libhadoop.so: search java.library.path when calling dlopen

2012-09-13 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-8806:


 Summary: libhadoop.so: search java.library.path when calling dlopen
 Key: HADOOP-8806
 URL: https://issues.apache.org/jira/browse/HADOOP-8806
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Priority: Minor


libhadoop calls {{dlopen}} to load {{libsnappy.so}} and {{libz.so}}.  These 
libraries can be bundled in the {{$HADOOP_ROOT/lib/native}} directory; for 
example, the {{-Dbundle.snappy}} build option copies {{libsnappy.so}} there.  
However, snappy can't be loaded from that directory unless {{LD_LIBRARY_PATH}} 
is set to include it.

Should we also search {{java.library.path}} when loading these libraries?
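
If so, one possible shape for that search, sketched in Java rather than the actual 
libhadoop JNI code (the helper name is hypothetical): resolve the library name 
against each {{java.library.path}} entry and hand an absolute path to {{dlopen}}.

{noformat}
import java.io.File;

// Hypothetical sketch: find e.g. "snappy" on java.library.path so the
// native layer could dlopen() an absolute path instead of relying on
// LD_LIBRARY_PATH.
static String findOnJavaLibraryPath(String name) {
  String mapped = System.mapLibraryName(name);  // "snappy" -> "libsnappy.so"
  for (String dir : System.getProperty("java.library.path")
                          .split(File.pathSeparator)) {
    File candidate = new File(dir, mapped);
    if (candidate.exists()) {
      return candidate.getAbsolutePath();
    }
  }
  return null;  // fall back to the plain dlopen(name) behaviour
}
{noformat}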



[jira] [Created] (HADOOP-8805) Move protocol buffer implementation of GetUserMappingProtocol from HDFS to Common

2012-09-13 Thread Bo Wang (JIRA)
Bo Wang created HADOOP-8805:
---

 Summary: Move protocol buffer implementation of 
GetUserMappingProtocol from HDFS to Common
 Key: HADOOP-8805
 URL: https://issues.apache.org/jira/browse/HADOOP-8805
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Bo Wang
Assignee: Bo Wang


org.apache.hadoop.tools.GetUserMappingProtocol is used in both HDFS and YARN. 
We should move the protocol buffer implementation from HDFS to Common so that 
it can also be used by YARN.



RE: Make Hadoop run more securely in Public Cloud environment

2012-09-13 Thread Kingshuk Chatterjee
Absolutely. And if such byte-level security is built into the product, and 
data access is isolated, which also means hacks can be isolated too, then it 
becomes easier for us to sell the idea to the hospitals' CIOs and CSOs. Let's 
hear what our folks here have to say about it too.

Regards//K


Re: Make Hadoop run more securely in Public Cloud environment

2012-09-13 Thread Xianqing Yu

Hi Kingshuk,

Thank you for your interest.

I think you give a very nice example. If a healthcare company pushes its data 
to a public cloud, byte-level access control can minimize the data each party 
(e.g. a task process) can get. So even if one task process or TaskTracker is 
hacked, the information loss is minimized.

Another feature is also very helpful in this scenario. Currently, the NameNode 
and all DataNodes share the same key for generating Block Access Tokens. If a 
hacker gets that key by hacking any one HDFS machine, he or she can potentially 
read everything in HDFS, and the impact is huge. So I redesigned this to make 
sure that if a hacker succeeds in attacking one machine, he or she can only get 
what is on that machine, not what is on others in the cluster.

A secure (encrypted) channel for transferring data is another security bonus.

Thanks,

Xianqing

-Original Message- 
From: Kingshuk Chatterjee

Sent: Thursday, September 13, 2012 2:23 PM
To: 'Peng Ning'
Cc: yuxian...@gmail.com
Subject: RE: Make Hadoop run more securely in Public Cloud environment

Hi Xianqing -

I am a systems architect and a consultant for the healthcare industry, and the 
first impression I get from your email is that byte-level security can be a 
very helpful feature in securing patients' health information (PHI), and in 
assuring healthcare service providers enough to take steps to push their data 
to a public cloud.

I will be happy to contribute in any way; let me know.

Regards//K

Kingshuk Chatterjee
Director, Technology Consulting

5155 Rosecrans Ave, Suite 250 | http://www.calance.com
Hawthorne, CA 90250 | +1 (412) 606-8582




[jira] [Created] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-09-13 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8804:
---

 Summary: Improve Web UIs when the wildcard address is used
 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha, 1.0.0
Reporter: Eli Collins
Priority: Minor


When IPC addresses are bound to the wildcard (i.e. the default config) the NN, JT 
(and probably RM etc) Web UIs are a little goofy, e.g. "0 Hadoop Map/Reduce 
Administration" and "NameNode '0.0.0.0:18021' (active)". Let's improve them.



Make Hadoop run more securely in Public Cloud environment

2012-09-13 Thread Xianqing Yu
Hi Hadoop community,

I am a Ph.D. student at North Carolina State University. I am modifying Hadoop's 
code (including most parts of Hadoop, e.g. the JobTracker, TaskTracker, NameNode, 
and DataNode) to achieve better security.

My major goal is to make Hadoop run more securely in a Cloud environment, 
especially a public Cloud environment. In order to achieve that, I redesigned 
the current security mechanism to achieve the following properties:

1. Bring byte-level access control to Hadoop HDFS. As of 0.20.204, HDFS access 
control is at user or block granularity: the HDFS Delegation Token only checks 
whether a file can be accessed by a certain user, and the Block Token only proves 
which block or blocks can be accessed. I make Hadoop able to do byte-granularity 
access control, so that each accessing party, user or task process, can only 
access the bytes it minimally needs.
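
To make the byte-granularity idea concrete, here is a hypothetical sketch (the 
class and fields are invented for illustration, not the actual design): a token 
that authorizes only a byte range of one file, checked on every read.

{noformat}
// Hypothetical sketch of a byte-range capability: the token names one
// file plus the [start, end) byte range the holder may read.
class ByteRangeToken {
  final String path;
  final long start, end;   // half-open range of permitted bytes

  ByteRangeToken(String path, long start, long end) {
    this.path = path; this.start = start; this.end = end;
  }

  boolean permitsRead(String file, long offset, long length) {
    return path.equals(file) && offset >= start && offset + length <= end;
  }
}
{noformat}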

2. I assume that in a public Cloud environment only the NameNode, secondary 
NameNode, and JobTracker can be trusted. A large number of DataNodes and 
TaskTrackers may be compromised, because some of them may be running in less 
secure environments. So I redesigned the security mechanism to minimize the 
damage a hacker can do.

a. Redesign the Block Access Token to solve the widely-shared-key problem of 
HDFS. In the original Block Access Token design, all of HDFS (the NameNode and 
DataNodes) shares one master key to generate Block Access Tokens, so if one 
DataNode is compromised by a hacker, the hacker can get the key and generate any 
Block Access Token he or she wants.
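
One way to picture a fix for the shared-key problem (a hypothetical sketch, not 
necessarily the actual implementation described here): derive a per-DataNode key 
from the master key, so a key stolen from one node only validates tokens for 
that node.

{noformat}
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical sketch: the NameNode keeps the master key and hands each
// DataNode only a derived key; compromising one DataNode never reveals
// the master key or any other node's key.
static byte[] perDataNodeKey(byte[] masterKey, String datanodeId) throws Exception {
  Mac mac = Mac.getInstance("HmacSHA1");
  mac.init(new SecretKeySpec(masterKey, "HmacSHA1"));
  return mac.doFinal(datanodeId.getBytes("UTF-8"));
}
{noformat}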

b. Redesign the HDFS Delegation Token to do fine-grained access control for 
TaskTrackers and Map-Reduce task processes on HDFS.

In Hadoop 0.20.204, all TaskTrackers can use their Kerberos credentials to 
access any MapReduce files on HDFS, so they have the same privilege as the 
JobTracker to read or write tokens, copy job files, etc. However, if one of them 
is compromised, every critical thing in the MapReduce directory (job files, 
Delegation Tokens) is exposed to the attacker. I solve the problem by making the 
JobTracker decide which TaskTracker can access which file in the MapReduce 
directory on HDFS.

As for a task process, once it gets an HDFS Delegation Token it can access 
everything belonging to its job or user on HDFS. By my design, it can only 
access the bytes it needs from HDFS.

There are some other security improvements as well: for example, a TaskTracker 
cannot learn information like the blockID from the Block Token (because it is 
encrypted in my scheme), and HDFS can set up a secure channel to send data as 
an option.

With those features, Hadoop can run much more securely in an uncertain 
environment such as a public Cloud. I have already started to test my prototype. 
I would like to know whether the community is interested in my work, and whether 
it is worthwhile to contribute to production Hadoop.

I created a JIRA for the discussion:
https://issues.apache.org/jira/browse/HADOOP-8803#comment-13455025

Thanks,

Xianqing 


[jira] [Created] (HADOOP-8803) Make Hadoop run more securely in a public cloud environment

2012-09-13 Thread Xianqing Yu (JIRA)
Xianqing Yu created HADOOP-8803:
---

 Summary: Make Hadoop run more securely in a public cloud environment
 Key: HADOOP-8803
 URL: https://issues.apache.org/jira/browse/HADOOP-8803
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, ipc, security
Affects Versions: 0.20.204.0
Reporter: Xianqing Yu


I have two major goals in the project.

One is to bring fine-grained access control to Hadoop. As of 0.20.204, Hadoop 
access control is at user or block granularity, e.g. the HDFS Delegation 
Token only checks whether a file can be accessed by a certain user, and the 
Block Token only proves which block or blocks can be accessed. I would like to 
make Hadoop able to do byte-granularity access control, so that each accessing 
party, user or task process, can only access the bytes it minimally needs.

The second is to make Hadoop work more securely in a Cloud environment, 
especially a public Cloud environment. So the communication between Hadoop's 
nodes should be protected, and if some Hadoop nodes are compromised, the damage 
should be minimized (e.g. the known widely-shared-key problem of the Block 
Access Token).



[jira] [Created] (HADOOP-8802) TestUserGroupInformation testcase fails using IBM JDK 6.0 SR11

2012-09-13 Thread Amir Sanjar (JIRA)
Amir Sanjar created HADOOP-8802:
---

 Summary: TestUserGroupInformation testcase fails using IBM JDK 6.0 
SR11
 Key: HADOOP-8802
 URL: https://issues.apache.org/jira/browse/HADOOP-8802
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3
 Environment: Build with IBM JAVA 6sr11 sdk, Linux RHEL 6.2 64bit, 
x86_64
Reporter: Amir Sanjar
 Fix For: 1.0.3


Testsuite: org.apache.hadoop.security.TestUserGroupInformation
Tests run: 10, Failures: 0, Errors: 1, Time elapsed: 0.264 sec
------------- Standard Output ---------------
2012-09-13 10:57:59,771 WARN  conf.Configuration 
(Configuration.java:(192)) - DEPRECATED: hadoop-site.xml found in the 
classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, 
mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, 
mapred-default.xml and hdfs-default.xml respectively
sanjar:sanjar dialout desktop_admin_r
------------- ---------------

Testcase: testGetServerSideGroups took 0.036 sec
Caused an ERROR
expected: but was:
at 
org.apache.hadoop.security.TestUserGroupInformation.testGetServerSideGroups(TestUserGroupInformation.java:108)




[jira] [Created] (HADOOP-8801) ExitUtil#terminate should capture the exception stack trace

2012-09-13 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8801:
---

 Summary: ExitUtil#terminate should capture the exception stack 
trace
 Key: HADOOP-8801
 URL: https://issues.apache.org/jira/browse/HADOOP-8801
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hadoop-8801.txt

ExitUtil#terminate(status, Throwable) should capture and log the stack trace of 
the given throwable. This will help debug issues like HDFS-3933.
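
A minimal sketch of the idea (illustrative only, not the attached patch; the 
class name is hypothetical):

{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative sketch: log the throwable itself so its full stack
// trace ends up in the log, then exit with the given status.
class ExitUtilSketch {
  private static final Log LOG = LogFactory.getLog(ExitUtilSketch.class);

  static void terminate(int status, Throwable t) {
    LOG.fatal("Terminate called", t);  // passing t logs its stack trace
    System.exit(status);
  }
}
{noformat}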



[jira] [Created] (HADOOP-8800) Dynamic Compress Stream

2012-09-13 Thread yankay (JIRA)
yankay created HADOOP-8800:
--

 Summary: Dynamic Compress Stream
 Key: HADOOP-8800
 URL: https://issues.apache.org/jira/browse/HADOOP-8800
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Affects Versions: 2.0.1-alpha
Reporter: yankay


We use compression in MapReduce in some cases because it uses CPU to improve IO 
throughput.

But we can only set one compression algorithm in the configuration file, while 
the Hadoop cluster is changing all the time, so a single compression algorithm 
may not work well in all cases.

Why not provide an algorithm named "dynamic"? It could change the compression 
level and algorithm dynamically based on observed performance; like TCP, it 
would start up slowly and then try to run faster and faster. A toy sketch of 
the idea follows.

I will write a detailed design here and try to submit a patch.
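
A toy illustration of the adaptive idea (purely hypothetical; the class and the 
adaptation rule are invented for this sketch): measure throughput per window and 
step the deflate level up or down, TCP-style.

{noformat}
import java.util.zip.Deflater;

// Toy sketch: pick the next compression level from observed throughput.
class AdaptiveLevel {
  private int level = Deflater.BEST_SPEED;  // start cheap, like slow start
  private double lastBytesPerSec = 0;

  int nextLevel(double bytesPerSec) {
    if (bytesPerSec >= lastBytesPerSec && level < Deflater.BEST_COMPRESSION) {
      level++;  // throughput held up: compress harder
    } else if (bytesPerSec < lastBytesPerSec && level > Deflater.BEST_SPEED) {
      level--;  // throughput dropped: back off
    }
    lastBytesPerSec = bytesPerSec;
    return level;
  }
}
{noformat}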



[jira] [Created] (HADOOP-8799) commons-lang version mismatch

2012-09-13 Thread Joel Costigliola (JIRA)
Joel Costigliola created HADOOP-8799:


 Summary: commons-lang version mismatch
 Key: HADOOP-8799
 URL: https://issues.apache.org/jira/browse/HADOOP-8799
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Joel Costigliola


The hadoop install references commons-lang-2.4.jar, while the hadoop-core 
dependency references commons-lang:jar:2.6, as shown in this extract of the 
maven dependency:tree output.

{noformat}
org.apache.hadoop:hadoop-core:jar:1.0.3:provided
+- commons-cli:commons-cli:jar:1.2:provided
+- xmlenc:xmlenc:jar:0.52:provided
+- commons-httpclient:commons-httpclient:jar:3.0.1:provided
+- commons-codec:commons-codec:jar:1.4:provided
+- org.apache.commons:commons-math:jar:2.1:provided
+- commons-configuration:commons-configuration:jar:1.6:provided
|  +- commons-collections:commons-collections:jar:3.2.1:provided
|  +- commons-lang:commons-lang:jar:2.6:provided (version managed from 2.4)
{noformat}

The Hadoop install libs should be consistent with the hadoop-core maven 
dependencies.

I found this error because I was using a feature available in commons-lang 2.6 
that failed when executed on my hadoop cluster (but not in my pigunit tests). A 
possible stop-gap on the user side is sketched below.
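
Until the jars are aligned, one workaround (a sketch; assuming a Maven-built 
job, and adjusting the version to whatever the cluster actually ships) is to 
pin commons-lang explicitly:

{noformat}
<!-- Sketch: force the commons-lang version shipped by the cluster so
     local tests and cluster runs resolve the same jar. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-lang</groupId>
      <artifactId>commons-lang</artifactId>
      <version>2.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{noformat}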

A last remark: it would be nice to display the classpath used by the hadoop 
cluster while executing a job, because these kinds of errors are not easy to find.





[jira] [Created] (HADOOP-8798) User created during the installation of the .deb Package is not the default from the hadoop-setup-* Scripts

2012-09-13 Thread Ingo Rauschenberg (JIRA)
Ingo Rauschenberg created HADOOP-8798:
-

 Summary: User created during the installation of the .deb Package 
is not the default from the hadoop-setup-* Scripts
 Key: HADOOP-8798
 URL: https://issues.apache.org/jira/browse/HADOOP-8798
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3
 Environment: Debian Squeeze; 64 Bit; Oracle Java 6;
Reporter: Ingo Rauschenberg
Priority: Trivial


In the script "hadoop.preinst" from the .deb package, two users are created 
(hdfs and mapred).

In the scripts hadoop-setup-conf.sh and hadoop-setup-hdfs.sh, the default value 
for --mapreduce-user is mr.

I think it would be more user-friendly if the preinst script created mr instead 
of mapred, or if the default value were mapred instead of mr.



[jira] [Created] (HADOOP-8797) automatically detect JAVA_HOME on Linux, report native lib path similar to class path

2012-09-13 Thread Gera Shegalov (JIRA)
Gera Shegalov created HADOOP-8797:
-

 Summary: automatically detect JAVA_HOME on Linux, report native 
lib path similar to class path
 Key: HADOOP-8797
 URL: https://issues.apache.org/jira/browse/HADOOP-8797
 Project: Hadoop Common
  Issue Type: Improvement
 Environment: Linux
Reporter: Gera Shegalov
Priority: Trivial


Enhancement 1)
Iterate over common Java locations on Linux, starting with Java 7 and falling 
back to Java 6.

Enhancement 2)
"hadoop jnipath" to print java.library.path, similar to "hadoop classpath" (see 
the sketch below).




Build failed in Jenkins: Hadoop-Common-trunk #532

2012-09-13 Thread Apache Jenkins Server
See 

Changes:

[suresh] HDFS-3703. Datanodes are marked stale if heartbeat is not received in 
configured timeout and are selected as the last location to read from. 
Contributed by Jing Zhao.

[vinodkv] YARN-93. Fixed RM to propagate diagnostics from applications that 
have finished but failed. Contributed by Jason Lowe.

[eli] HDFS-3928. MiniDFSCluster should reset the first ExitException on 
shutdown. Contributed by Eli Collins

[eli] HDFS-3902. TestDatanodeBlockScanner#testBlockCorruptionPolicy is broken. 
Contributed by Andy Isaacson

[todd] HDFS-3925. Prettify PipelineAck#toString() for printing to a log. 
Contributed by Andrew Wang.

--
[...truncated 27755 lines...]
[DEBUG]   (s) debug = false
[DEBUG]   (s) effort = Default
[DEBUG]   (s) failOnError = true
[DEBUG]   (s) findbugsXmlOutput = false
[DEBUG]   (s) findbugsXmlOutputDirectory = 

[DEBUG]   (s) fork = true
[DEBUG]   (s) includeTests = false
[DEBUG]   (s) localRepository = id: local
  url: file:///home/jenkins/.m2/repository/
  layout: none

[DEBUG]   (s) maxHeap = 512
[DEBUG]   (s) nested = false
[DEBUG]   (s) outputDirectory = 

[DEBUG]   (s) outputEncoding = UTF-8
[DEBUG]   (s) pluginArtifacts = 
[org.codehaus.mojo:findbugs-maven-plugin:maven-plugin:2.3.2:, 
com.google.code.findbugs:bcel:jar:1.3.9:compile, 
org.codehaus.gmaven:gmaven-mojo:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-api:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-api:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-1.5:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-support:jar:1.3:compile, 
org.codehaus.groovy:groovy-all-minimal:jar:1.5.8:compile, 
org.apache.ant:ant:jar:1.7.1:compile, 
org.apache.ant:ant-launcher:jar:1.7.1:compile, jline:jline:jar:0.9.94:compile, 
org.codehaus.plexus:plexus-interpolation:jar:1.1:compile, 
org.codehaus.gmaven:gmaven-plugin:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-loader:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-support:jar:1.3:compile, 
org.sonatype.gshell:gshell-io:jar:2.0:compile, 
com.thoughtworks.qdox:qdox:jar:1.10:compile, 
org.apache.maven.shared:file-management:jar:1.2.1:compile, 
org.apache.maven.shared:maven-shared-io:jar:1.1:compile, 
commons-lang:commons-lang:jar:2.4:compile, 
org.slf4j:slf4j-api:jar:1.5.10:compile, 
org.sonatype.gossip:gossip:jar:1.2:compile, 
org.apache.maven.reporting:maven-reporting-impl:jar:2.1:compile, 
commons-validator:commons-validator:jar:1.2.0:compile, 
commons-beanutils:commons-beanutils:jar:1.7.0:compile, 
commons-digester:commons-digester:jar:1.6:compile, 
commons-logging:commons-logging:jar:1.0.4:compile, oro:oro:jar:2.0.8:compile, 
xml-apis:xml-apis:jar:1.0.b2:compile, 
org.codehaus.groovy:groovy-all:jar:1.7.4:compile, 
org.apache.maven.reporting:maven-reporting-api:jar:3.0:compile, 
org.apache.maven.doxia:doxia-core:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-logging-api:jar:1.1.3:compile, 
xerces:xercesImpl:jar:2.9.1:compile, 
commons-httpclient:commons-httpclient:jar:3.1:compile, 
commons-codec:commons-codec:jar:1.2:compile, 
org.apache.maven.doxia:doxia-sink-api:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-decoration-model:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-site-renderer:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-xhtml:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-fml:jar:1.1.3:compile, 
org.codehaus.plexus:plexus-i18n:jar:1.0-beta-7:compile, 
org.codehaus.plexus:plexus-velocity:jar:1.1.7:compile, 
org.apache.velocity:velocity:jar:1.5:compile, 
commons-collections:commons-collections:jar:3.2:compile, 
org.apache.maven.shared:maven-doxia-tools:jar:1.2.1:compile, 
commons-io:commons-io:jar:1.4:compile, 
com.google.code.findbugs:findbugs-ant:jar:1.3.9:compile, 
com.google.code.findbugs:findbugs:jar:1.3.9:compile, 
com.google.code.findbugs:jsr305:jar:1.3.9:compile, 
com.google.code.findbugs:jFormatString:jar:1.3.9:compile, 
com.google.code.findbugs:annotations:jar:1.3.9:compile, 
dom4j:dom4j:jar:1.6.1:compile, jaxen:jaxen:jar:1.1.1:compile, 
jdom:jdom:jar:1.0:compile, xom:xom:jar:1.0:compile, 
xerces:xmlParserAPIs:jar:2.6.2:compile, xalan:xalan:jar:2.6.0:compile, 
com.ibm.icu:icu4j:jar:2.6.1:compile, asm:asm:jar:3.1:compile, 
asm:asm-analysis:jar:3.1:compile, asm:asm-commons:jar:3.1:compile, 
asm:asm-util:jar:3.1:compile, asm:asm-tree:jar:3.1:compile, 
asm:asm-xml:jar:3.1:compile, jgoodies:plastic:jar:1.2.0:compile, 
org.codehaus.plexus:plexus-resources:jar:1.0-alpha-4:compile, 
org.codehaus.plexus:plexus-utils:jar:1.5.1:compile]
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @