[jira] [Created] (HADOOP-10705) Fix namenode-rpc-unit warnings reported by memory leak check tool (valgrind)

2014-06-16 Thread Wenwu Peng (JIRA)
Wenwu Peng created HADOOP-10705:
---

 Summary: Fix namenode-rpc-unit warnings reported by memory leak 
check tool (valgrind)
 Key: HADOOP-10705
 URL: https://issues.apache.org/jira/browse/HADOOP-10705
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: HADOOP-10388
Reporter: Wenwu Peng


Run valgrind to check for memory leaks in namenode-rpc-unit.
There are many warnings that need to be fixed.

valgrind --tool=memcheck --leak-check=full --show-reachable=yes 
./namenode-rpc-unit 
==10462== Memcheck, a memory error detector
==10462== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==10462== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==10462== Command: ./namenode-rpc-unit
==10462== 
==10462== HEAP SUMMARY:
==10462== in use at exit: 428 bytes in 12 blocks
==10462==   total heap usage: 91 allocs, 79 frees, 16,056 bytes allocated
==10462== 
==10462== 16 bytes in 1 blocks are indirectly lost in loss record 1 of 12
==10462==at 0x4C2B6CD: malloc (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==10462==by 0x557EB99: __nss_lookup_function (nsswitch.c:456)
==10462==by 0x604863E: ???
==10462==by 0x553744C: getpwuid_r@@GLIBC_2.2.5 (getXXbyYY_r.c:256)
==10462==by 0x42681E: geteuid_string (user.c:67)
==10462==by 0x425ABD: main (namenode-rpc-unit.c:71)
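
For context, here is a minimal sketch of the kind of code behind the trace above; geteuid_string() is reconstructed from the stack frame at user.c:67 and is an assumption, not the actual namenode-rpc-unit source:

{code}
#include <pwd.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Resolve the effective uid to a user name. glibc may lazily load NSS
 * modules inside getpwuid_r(); those one-time internal allocations are
 * never freed and commonly show up in valgrind as "indirectly lost" or
 * "still reachable", even when the caller itself leaks nothing. */
static char *geteuid_string(void)
{
    struct passwd pwd, *result = NULL;
    size_t buflen = 16384;
    char *buf = malloc(buflen);

    if (!buf)
        return NULL;
    if (getpwuid_r(geteuid(), &pwd, buf, buflen, &result) != 0 || !result) {
        free(buf);
        return NULL;
    }
    char *name = strdup(pwd.pw_name);
    free(buf);
    return name; /* caller frees */
}
{code}

If the NSS allocations turn out to be the only offenders, the usual remedy is a valgrind suppressions file (--suppressions=<file>) rather than code changes; genuine leaks in the unit test itself should be fixed directly.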



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10706) Fix some bugs related to hrpc_sync_ctx

2014-06-16 Thread Binglin Chang (JIRA)
Binglin Chang created HADOOP-10706:
--

 Summary: Fix some bugs related to hrpc_sync_ctx
 Key: HADOOP-10706
 URL: https://issues.apache.org/jira/browse/HADOOP-10706
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Binglin Chang
Assignee: Binglin Chang


1. 
{code}
memset(ctx, 0, sizeof(ctx));
return ctx;
{code}

Doing this will always make the return value 0 (see the sketch after these two items).

2.
hrpc_release_sync_ctx should be renamed to hrpc_proxy_release_sync_ctx; all the 
functions in this .h/.c file should follow this naming rule.
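
A minimal sketch of the suspected defect and the fix; the struct layout and function names here are assumptions for illustration, not the actual HADOOP-10388 branch source:

{code}
#include <string.h>

struct hrpc_sync_ctx {
    int err;
    void *resp;
};

/* Buggy: sizeof(ctx) is the size of the pointer (8 bytes on LP64), not
 * sizeof(struct hrpc_sync_ctx), so the struct is only partially zeroed.
 * (If the call were memset(&ctx, ...), it would instead clobber the
 * local pointer itself, which would explain the zero return value
 * described above.) */
struct hrpc_sync_ctx *init_sync_ctx_buggy(struct hrpc_sync_ctx *ctx)
{
    memset(ctx, 0, sizeof(ctx));
    return ctx;
}

/* Fixed: zero the pointed-to struct, not the pointer. */
struct hrpc_sync_ctx *init_sync_ctx_fixed(struct hrpc_sync_ctx *ctx)
{
    memset(ctx, 0, sizeof(*ctx));
    return ctx;
}
{code}

Writing sizeof(*ctx) (or sizeof(struct hrpc_sync_ctx)) keeps the memset correct even if the variable's type changes later.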






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10707) support bzip2 in python avro tool

2014-06-16 Thread Eustache (JIRA)
Eustache created HADOOP-10707:
-

 Summary: support bzip2 in python avro tool
 Key: HADOOP-10707
 URL: https://issues.apache.org/jira/browse/HADOOP-10707
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Eustache
Priority: Minor


The Python tool to decode avro files is currently missing support for bzip2 
compression.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10708) support bzip2 in python avro tool

2014-06-16 Thread Eustache (JIRA)
Eustache created HADOOP-10708:
-

 Summary: support bzip2 in python avro tool
 Key: HADOOP-10708
 URL: https://issues.apache.org/jira/browse/HADOOP-10708
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Eustache
Priority: Minor


The Python tool to decode avro files is currently missing support for bzip2 
compression.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10708) support bzip2 in python avro tool

2014-06-16 Thread Eustache (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eustache resolved HADOOP-10708.
---

Resolution: Invalid

wrong component

 support bzip2 in python avro tool
 -

 Key: HADOOP-10708
 URL: https://issues.apache.org/jira/browse/HADOOP-10708
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Eustache
Priority: Minor
  Labels: avro

 The Python tool to decode avro files is currently missing support for bzip2 
 compression.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[VOTE] Release Apache Hadoop 2.4.1

2014-06-16 Thread Arun C Murthy
Folks,

I've created a release candidate (rc0) for hadoop-2.4.1 (bug-fix release) that 
I would like to push out.

The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.4.1-rc0
The RC tag in svn is here: 
https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.1-rc0

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun



--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/hdp/





[jira] [Created] (HADOOP-10709) Reuse Filters across web apps

2014-06-16 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10709:
-

 Summary: Reuse Filters across web apps
 Key: HADOOP-10709
 URL: https://issues.apache.org/jira/browse/HADOOP-10709
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony


Currently, we need to define separate authentication filters for webhdfs and 
the general web UI. This also involves defining parameters for those filters.

It would be better if one could reuse filters across web apps when desired.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10710) hadoop.auth cookie is not properly constructed according to RFC2109

2014-06-16 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-10710:
---

 Summary: hadoop.auth cookie is not properly constructed according 
to RFC2109
 Key: HADOOP-10710
 URL: https://issues.apache.org/jira/browse/HADOOP-10710
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Alejandro Abdelnur


It seems that HADOOP-10379 introduced a bug in how hadoop.auth cookies are 
constructed.

Before HADOOP-10379, cookies were constructed using Servlet's {{Cookie}} class 
and corresponding {{HttpServletResponse}} methods. This was taking care of 
setting attributes like 'Version=1' and double-quoting the cookie value if 
necessary.

HADOOP-10379 changed the cookie creation to use a {{StringBuilder}}, setting 
values and attributes by hand. This does not take care of setting required 
attributes like Version, nor of escaping the cookie value.

While this does not break HadoopAuth {{AuthenticatedURL}} access, it does break 
access done using {{HttpClient}}. For example, Solr uses HttpClient, and its 
access has been broken since this change.

It seems that HADOOP-10379's main objective was to set the 'secure' attribute. 
Note this can be done using the {{Cookie}} API.

We should revert the cookie creation logic to use the {{Cookie}} API and take 
care of the security flag via {{setSecure(boolean)}}.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10711) Cleanup some extra dependencies from hadoop-auth

2014-06-16 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-10711:
--

 Summary: Cleanup some extra dependencies from hadoop-auth
 Key: HADOOP-10711
 URL: https://issues.apache.org/jira/browse/HADOOP-10711
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Robert Kanter
Assignee: Robert Kanter


HADOOP-10322 added {{apacheds-kerberos-codec}} as a dependency, which brought 
in some additional dependencies.  
{noformat}
[INFO] \- 
org.apache.directory.server:apacheds-kerberos-codec:jar:2.0.0-M15:compile
[INFO]+- org.apache.directory.server:apacheds-i18n:jar:2.0.0-M15:compile
[INFO]+- org.apache.directory.api:api-asn1-api:jar:1.0.0-M20:compile
[INFO]+- org.apache.directory.api:api-asn1-ber:jar:1.0.0-M20:compile
[INFO]+- org.apache.directory.api:api-i18n:jar:1.0.0-M20:compile
[INFO]+- org.apache.directory.api:api-ldap-model:jar:1.0.0-M20:compile
[INFO]|  +- org.apache.mina:mina-core:jar:2.0.0-M5:compile
[INFO]|  +- antlr:antlr:jar:2.7.7:compile
[INFO]|  +- commons-lang:commons-lang:jar:2.6:compile
[INFO]|  \- commons-collections:commons-collections:jar:3.2.1:compile
[INFO]+- org.apache.directory.api:api-util:jar:1.0.0-M20:compile
[INFO]\- net.sf.ehcache:ehcache-core:jar:2.4.4:compile
{noformat}
It looks like we don't need most of them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10712) Add support for accessing the NFS gateway from the AIX NFS client

2014-06-16 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HADOOP-10712:
---

 Summary: Add support for accessing the NFS gateway from the AIX 
NFS client
 Key: HADOOP-10712
 URL: https://issues.apache.org/jira/browse/HADOOP-10712
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


We've identified two issues when trying to access the HDFS NFS Gateway from an 
AIX NFS client:

# In the case of COMMITs, the AIX NFS client will always send 4096, or a 
multiple of the page size, for the offset to be committed, even if fewer bytes 
than this have ever, or will ever, be written to the file. This will cause a 
write to a file from the AIX NFS client to hang on close unless the size of 
that file is a multiple of 4096.
# In the case of READDIR and READDIRPLUS, the AIX NFS client will send the same 
cookie verifier for a given directory seemingly forever after that directory is 
first accessed over NFS, instead of getting a new cookie verifier for every set 
of incremental readdir calls. This means that if a directory's mtime ever 
changes, the FS must be unmounted/remounted before readdir calls on that dir 
from AIX will ever succeed again.

From my interpretation of RFC-1813, the NFS Gateway is in fact doing the 
correct thing in both cases, but we can introduce simple changes on the NFS 
Gateway side to optionally work around these incompatibilities.
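
A hedged sketch of the shape such workarounds could take for both issues; the real gateway is Java, and the names, types, and compatibility flag here are illustrative assumptions only:

{code}
#include <stdint.h>

/* Issue 1: AIX COMMITs a page-size-rounded offset (e.g. 4096) even when
 * fewer bytes were written. With the compatibility mode on, clamp a
 * commit offset past EOF to the current file length instead of waiting
 * for data that will never arrive. */
int64_t effective_commit_offset(int64_t commit_offset, int64_t file_len,
                                int aix_compat_mode)
{
    if (aix_compat_mode && commit_offset > file_len)
        return file_len;
    return commit_offset;
}

/* Issue 2: AIX reuses a stale READDIR/READDIRPLUS cookie verifier after
 * the directory's mtime changes. With the compatibility mode on, accept
 * it rather than failing with NFS3ERR_BAD_COOKIE. */
int cookie_verifier_ok(uint64_t client_verf, uint64_t server_verf,
                       int aix_compat_mode)
{
    return aix_compat_mode || client_verf == server_verf;
}
{code}

Gating both behaviors behind an explicit compatibility switch would keep the default gateway behavior RFC-1813-conformant.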



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10706) Fix initialization of hrpc_sync_ctx

2014-06-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10706.
---

  Resolution: Fixed
   Fix Version/s: HADOOP-10388
Target Version/s: HADOOP-10388

 Fix initialization of hrpc_sync_ctx
 ---

 Key: HADOOP-10706
 URL: https://issues.apache.org/jira/browse/HADOOP-10706
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Binglin Chang
Assignee: Binglin Chang
 Fix For: HADOOP-10388

 Attachments: HADOOP-10706.v1.patch


 1. 
 {code}
 memset(ctx, 0, sizeof(ctx));
 return ctx;
 {code}
 Doing this will always make the return value 0
 2.
 hrpc_release_sync_ctx should be renamed to hrpc_proxy_release_sync_ctx; all the 
 functions in this .h/.c file should follow this naming rule.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10713) Document thread-safety of CryptoCodec#generateSecureRandom

2014-06-16 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-10713:


 Summary: Document thread-safety of CryptoCodec#generateSecureRandom
 Key: HADOOP-10713
 URL: https://issues.apache.org/jira/browse/HADOOP-10713
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial


Random implementations have to deal with thread-safety; this should be 
specified in the javadoc so implementors of CryptoCodec subclasses know to 
handle it.
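
Illustrative only (CryptoCodec is Java; this C sketch just shows the concern the javadoc should spell out): a shared entropy source needs serialized access before concurrent callers can safely draw random bytes from it.

{code}
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

static pthread_mutex_t rng_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fill buf with len random bytes. The lock keeps concurrent callers
 * from interleaving partial reads of the shared source. */
int generate_secure_random(unsigned char *buf, size_t len)
{
    int ok = 0;
    pthread_mutex_lock(&rng_lock);
    FILE *f = fopen("/dev/urandom", "rb");
    if (f) {
        ok = fread(buf, 1, len, f) == len;
        fclose(f);
    }
    pthread_mutex_unlock(&rng_lock);
    return ok ? 0 : -1;
}
{code}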



--
This message was sent by Atlassian JIRA
(v6.2#6252)