[jira] [Created] (HADOOP-11503) HDFS Space Quota not working as expected

2015-01-22 Thread Puttaswamy (JIRA)
Puttaswamy created HADOOP-11503:
---

 Summary: HDFS Space Quota not working as expected
 Key: HADOOP-11503
 URL: https://issues.apache.org/jira/browse/HADOOP-11503
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
 Environment: CDH4.6
Reporter: Puttaswamy


I am implementing HDFS quotas on a CDH 4.6 cluster. The HDFS name quota has been 
working properly, but the HDFS space quota has not been working as expected:

I set a space quota of 500 MB on a directory, say /test-space-quota.

Then I put a 10 MB file into /test-space-quota, which worked. Now the space 
available is 480 MB (500 - 10*2), where 2 is the replication factor.

Then I put a 50 MB file into /test-space-quota, which also worked as expected. 
Now the space available is 380 MB (480 - 50*2).

I am checking the remaining quota with the command: hadoop fs -count -q 
/test-space-quota

Then I tried to put a 100 MB file. It should work, since it will only consume 
200 MB of space with replication. But when I put it, I got an error:

DataStreamer Exception
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
of /test is exceeded: quota = 524288000 B = 500 MB but diskspace consumed = 
662700032 B = 632 MB

But the quota says

hadoop fs -count -q /test-space-quota
        none             inf       524288000       398458880            1            2           62914560 /test-space-quota

Could you please help on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #83

2015-01-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/83/changes

Changes:

[aw] HADOOP-11256. Some site docs have inconsistent appearance (Masatake 
Iwasaki via aw)

[jlowe] HADOOP-11327. BloomFilter#not() omits the last bit, resulting in an 
incorrect filter. Contributed by Eric Payne

[cmccabe] HADOOP-11484. hadoop-mapreduce-client-nativetask fails to build on 
ARM AARCH64 due to x86 asm statements (Edward Nevill via Colin P. McCabe)

[cmccabe] HADOOP-11484: move CHANGES.txt entry to 3.0

[szetszwo] HDFS-3443. Fix NPE when namenode transition to active during startup 
by adding checkNNStartup() in NameNodeRpcServer.  Contributed by Vinayakumar B

[cnauroth] HADOOP-10668. Addendum patch to fix TestZKFailoverController. 
Contributed by Ming Ma.

[kihwal] HDFS-7548. Corrupt block reporting delayed until datablock scanner 
thread detects it. Contributed by Rushabh Shah.

[cnauroth] MAPREDUCE-3283. mapred classpath CLI does not display the complete 
classpath. Contributed by Varun Saxena.

[shv] HADOOP-11490. Expose truncate API via FileSystem and shell command. 
Contributed by Milan Desai.

[cmccabe] HADOOP-11466. FastByteComparisons: do not use UNSAFE_COMPARER on the 
SPARC architecture because it is slower there (Suman Somasundar via Colin P.  
McCabe)

[gera] MAPREDUCE-5785. Derive heap size or mapreduce.*.memory.mb automatically. 
(Gera Shegalov and Karthik Kambatla via gera)

[cmccabe] HDFS-7430. Refactor the BlockScanner to use O(1) memory and use 
multiple threads (cmccabe)

[ozawa] YARN-3078. LogCLIHelpers lacks of a blank space before string 'does not 
exist'. Contributed by Sam Liu.

[ozawa] HADOOP-11209. Configuration#updatingResource/finalParameters are not 
thread-safe. Contributed by Varun Saxena.

--
[...truncated 5169 lines...]
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.516 sec - in 
org.apache.hadoop.io.nativeio.TestNativeIO
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritable
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.293 sec - in 
org.apache.hadoop.io.TestWritable
Running org.apache.hadoop.io.TestIOUtils
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.378 sec - in 
org.apache.hadoop.io.TestIOUtils
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.146 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.io.TestMD5Hash
Running org.apache.hadoop.io.TestMapFile
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.966 sec - in 
org.apache.hadoop.io.TestMapFile
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.151 sec - in 
org.apache.hadoop.io.TestWritableName
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec - in 
org.apache.hadoop.io.TestSortedMapWritable
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.651 sec - in 
org.apache.hadoop.io.TestSequenceFileSync
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.878 sec - in 
org.apache.hadoop.io.retry.TestFailoverProxy
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.173 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.325 sec - in 
org.apache.hadoop.io.TestDefaultStringifier

Build failed in Jenkins: Hadoop-Common-trunk #1383

2015-01-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1383/changes

Changes:

(Same change list as Hadoop-common-trunk-Java8 #83 above.)

--
[...truncated 4834 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.852 sec - in 
org.apache.hadoop.util.TestIndexedSort
Running org.apache.hadoop.util.bloom.TestBloomFilters
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.481 sec - in 
org.apache.hadoop.util.bloom.TestBloomFilters
Running org.apache.hadoop.util.TestStopWatch
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in 
org.apache.hadoop.util.TestStopWatch
Running org.apache.hadoop.util.TestMachineList
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 101.219 sec - 
in org.apache.hadoop.util.TestMachineList
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.326 sec - in 
org.apache.hadoop.util.TestGenericsUtil
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec - in 
org.apache.hadoop.util.TestPureJavaCrc32
Running org.apache.hadoop.util.TestStringUtils
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.299 sec - in 
org.apache.hadoop.util.TestStringUtils
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec - in 
org.apache.hadoop.util.TestProtoUtil
Running org.apache.hadoop.util.TestSignalLogger
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.util.TestSignalLogger
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.236 sec - in 
org.apache.hadoop.util.TestDiskChecker
Running org.apache.hadoop.util.TestShutdownThreadsHelper
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.171 sec - in 
org.apache.hadoop.util.TestShutdownThreadsHelper
Running org.apache.hadoop.util.TestCacheableIPList
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.279 sec - in 
org.apache.hadoop.util.TestCacheableIPList
Running org.apache.hadoop.util.TestLineReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in 
org.apache.hadoop.util.TestLineReader
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Running org.apache.hadoop.util.TestClasspath
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec - in 
org.apache.hadoop.util.TestClasspath
Running org.apache.hadoop.util.TestApplicationClassLoader
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.234 sec - in 
org.apache.hadoop.util.TestApplicationClassLoader
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.231 sec - in 

[jira] [Created] (HADOOP-11504) YARN REST API 2.6 - can't submit simple job in Hortonworks - job always fails to run

2015-01-22 Thread Michael Br (JIRA)
Michael Br created HADOOP-11504:
---

 Summary: YARN REST API 2.6 - can't submit simple job in 
Hortonworks - job always fails to run
 Key: HADOOP-11504
 URL: https://issues.apache.org/jira/browse/HADOOP-11504
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
 Environment: Using Eclipse on Windows 7 (client) to run the MapReduce 
job on the host of Hortonworks HDP 2.2 (Hortonworks runs on VMware version 
6.0.2 build-1744117)
Reporter: Michael Br
Priority: Minor


Hello,
1.  I want to run the simple MapReduce job example (with the REST API 2.6 
for YARN applications) to calculate PI. For now it doesn't work.

When I use the command in the Hortonworks terminal it works: "hadoop jar 
/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/hadoop-mapreduce-examples-2.6.0.2.2.0.0-2041.jar
 pi 10 10".

But I want to submit the job with the REST API rather than on the terminal 
command line.
[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application]

2.  I do succeed with other REST API requests: get state, get new 
application ID, and even kill (change state). But when I try to submit my 
example, the response is:

--
--
The Response Header:
Key : null ,Value : [HTTP/1.1 202 Accepted]
Key : Date ,Value : [Thu, 22 Jan 2015 07:47:24 GMT, Thu, 22 Jan 2015 07:47:24 
GMT]
Key : Content-Length ,Value : [0]
Key : Expires ,Value : [Thu, 22 Jan 2015 07:47:24 GMT, Thu, 22 Jan 2015 
07:47:24 GMT]
Key : Location ,Value : 
[http://192.168.38.133:8088/ws/v1/cluster/apps/application_1421661392788_0038]
Key : Content-Type ,Value : [application/json]
Key : Server ,Value : [Jetty(6.1.26.hwx)]
Key : Pragma ,Value : [no-cache, no-cache]
Key : Cache-Control ,Value : [no-cache]

The Response Body:
Null (No Response)
--
--
3.  I need help filling in the HTTP request body. I am doing a PUT HTTP 
request and I know that I am doing it right (in Java).

4.  I think the problem is in the request body.

5.  I used this answer to help me build my MapReduce example XML, but it 
does not work: 
[http://hadoop-forum.org/forum/general-hadoop-discussion/miscellaneous/2136-how-can-i-run-mapreduce-job-by-rest-api].

6.  What am I missing? (The description in the submit section of the REST 
API 2.6 documentation is not clear to me.)

7.  Does someone have an XML example for submitting a simple MR job?

8.  Thanks! Here is the XML file I am using for the request body:
--
--
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<application-submission-context>
  <application-id>application_1421661392788_0038</application-id>
  <application-name>test_21_1</application-name>
  <queue>default</queue>
  <priority>3</priority>
  <am-container-spec>
    <environment>
      <entry>
        <key>CLASSPATH</key>
        <value>/usr/hdp/2.2.0.0-2041/hadoop/conf&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop/lib/*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop/.//*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop-hdfs/./&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop-hdfs/lib/*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop-hdfs/.//*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop-yarn/lib/*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop-yarn/.//*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/lib/*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/.//*&lt;CPS&gt;&lt;CPS&gt;/usr/share/java/mysql-connector-java-5.1.17.jar&lt;CPS&gt;/usr/share/java/mysql-connector-java.jar&lt;CPS&gt;/usr/hdp/current/hadoop-mapreduce-client/*&lt;CPS&gt;/usr/hdp/current/tez-client/*&lt;CPS&gt;/usr/hdp/current/tez-client/lib/*&lt;CPS&gt;/etc/tez/conf/&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/tez/*&lt;CPS&gt;/usr/hdp/2.2.0.0-2041/tez/lib/*&lt;CPS&gt;/etc/tez/conf</value>
      </entry>
    </environment>
    <commands>
      <command>hadoop jar /usr/hdp/2.2.0.0-2041/hadoop-mapreduce/hadoop-mapreduce-examples-2.6.0.2.2.0.0-2041.jar pi 10 10</command>
    </commands>
  </am-container-spec>
  <unmanaged-AM>false</unmanaged-AM>
  <max-app-attempts>2</max-app-attempts>
  <resource>
    <memory>1024</memory>
    <vCores>1</vCores>
  </resource>
  <application-type>MAPREDUCE</application-type>
  <keep-containers-across-application-attempts>false</keep-containers-across-application-attempts>
  <application-tags>
    <tag>Michael</tag>
    <tag>PI example</tag>
  </application-tags>
</application-submission-context>

[jira] [Resolved] (HADOOP-11008) Remove duplicated description about proxy-user in site documents

2015-01-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11008.
---
   Resolution: Fixed
Fix Version/s: 2.7.0

+1 committed to trunk and branch-2

Thanks!!

 Remove duplicated description about proxy-user in site documents
 

 Key: HADOOP-11008
 URL: https://issues.apache.org/jira/browse/HADOOP-11008
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11008-0.patch, HADOOP-11008.1.patch


 One should be a pointer to the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11506) Configuration.get() is unnecessarily slow

2015-01-22 Thread Dmitriy V. Ryaboy (JIRA)
Dmitriy V. Ryaboy created HADOOP-11506:
--

 Summary: Configuration.get() is unnecessarily slow
 Key: HADOOP-11506
 URL: https://issues.apache.org/jira/browse/HADOOP-11506
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Dmitriy V. Ryaboy


Profiling several large Hadoop jobs, we discovered that a surprising amount of 
time was spent inside Configuration.get, more specifically, in regex matching 
caused by the substituteVars call.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: AARCH64 build broken

2015-01-22 Thread Edward Nevill
On 21 January 2015 at 11:42, Edward Nevill edward.nev...@linaro.org wrote:

 Hi,

 Hadoop currently does not build on ARM AARCH64. I have raised a JIRA issue
 with a patch.

 https://issues.apache.org/jira/browse/HADOOP-11484

 I have submitted the patch and it builds OK and passes all the core tests.


Hi Colin,

Thanks for pushing this patch.  Steve Loughran raised the issue in the card
that although this patch fixes the ARM issue, it does nothing for other
architectures.

I would be happy to prepare a patch which makes it downgrade to C code on
other CPU families if this would be useful.

The general format would be

#ifdef __aarch64__
   __asm__(ARM Asm)
#elif defined(??X86??)
   __asm__(X86 Asm)
#else
   C Implementation
#endif

My question is what to put for the defined(??X86??)

According to the following page

http://nadeausoftware.com/articles/2012/02/c_c_tip_how_detect_processor_type_using_compiler_predefined_macros

the only way to fully detect all x86 variants is to write

#if defined(__x86_64__) || defined(_M_X64) || defined(__i386) || defined(_M_IX86)

which will detect all variants of 32- and 64-bit x86 across gcc and Windows.

Interestingly the bswap64 inline function in primitives.h has the following

#ifdef __X64
  __asm__(rev );
#else
  C implementation
#endif

However, if I compile Hadoop on my 64-bit Red Hat Enterprise Linux system, it
actually compiles the C implementation (I have verified this by putting a
#error at the start of the C implementation). This is because the correct
macro to detect 64-bit x86 on gcc is __x86_64__. I had also thought that the
macro for Windows was _M_X64, not __X64, but maybe __X64 works just as well
on Windows? Perhaps someone with access to a Windows development platform
could do some tests and tell us which macros actually work.

Another question is whether we actually care about 32-bit platforms, or
whether they can all just downgrade to C code. Does anyone actually build
Hadoop on a 32-bit platform?

Another thing to be aware of is that there are endian dependencies in
primitives.h. For example, in fmemcmp(), just a bit further down, is the line

return (int64_t)bswap(*(uint32_t*)src) -
(int64_t)bswap(*(uint32_t*)dest);

This is little-endian dependent, so it will work on the likes of x86 and ARM
but will fail on SPARC. Note, I haven't trawled looking for endian
dependencies, but this was one I just spotted while looking at the aarch64
non-compilation issue.

All the best,
Ed.


[jira] [Created] (HADOOP-11505) hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some cases

2015-01-22 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11505:
-

 Summary: hadoop-mapreduce-client-nativetask fails to use x86 
optimizations in some cases
 Key: HADOOP-11505
 URL: https://issues.apache.org/jira/browse/HADOOP-11505
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
code is incorrect.  Thanks to Edward Nevill for finding this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)