[jira] [Created] (HADOOP-11046) Normalize ACL configuration property across hadoop components

2014-09-02 Thread Arun Suresh (JIRA)
Arun Suresh created HADOOP-11046:


 Summary: Normalize ACL configuration property across hadoop 
components
 Key: HADOOP-11046
 URL: https://issues.apache.org/jira/browse/HADOOP-11046
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arun Suresh
Assignee: Arun Suresh


Service authorization policies use a different naming convention for ACLs and 
blacklists than the KMS ACLs do.

*Sample Service authorization ACL entry keys*
{noformat}
security.refresh.user.mappings.protocol.acl 
security.refresh.user.mappings.protocol.acl.blocked (blacklists)
{noformat}
*Sample KMS ACL entry keys*
{noformat}
hadoop.kms.acl.CREATE 
hadoop.kms.blacklist.CREATE
{noformat}
We need to follow a uniform naming scheme for both types of ACLs.
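
For illustration only, one possible uniform scheme would pair each ACL key 
with a parallel blacklist key; the exact keys below are hypothetical, not a 
proposal from the JIRA:

```
security.refresh.user.mappings.protocol.acl
security.refresh.user.mappings.protocol.acl.blacklist
hadoop.kms.acl.CREATE
hadoop.kms.acl.CREATE.blacklist
```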



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-0.23-Build #1060

2014-09-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1060/

--
[...truncated 8263 lines...]
Running 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.489 sec
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.66 sec
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec
Running org.apache.hadoop.io.file.tfile.TestTFileComparators
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.534 sec
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.5 sec
Running org.apache.hadoop.io.TestIOUtils
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.492 sec
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.411 sec
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.542 sec
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec
Running org.apache.hadoop.io.TestArrayFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.138 sec
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.467 sec
Running org.apache.hadoop.util.TestOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec
Running org.apache.hadoop.util.TestStringUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.181 sec
Running org.apache.hadoop.util.TestAsyncDiskService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.565 sec
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.686 sec
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.294 sec
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.263 sec
Running org.apache.hadoop.util.TestJarFinder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.752 sec
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.11 sec
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec
Running org.apache.hadoop.cli.TestCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.929 sec
Running org.apache.hadoop.fs.TestLocalDirAllocator
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.989 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextSymlink
Tests run: 61, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 2.423 sec
Running org.apache.hadoop.fs.TestFsShellCopy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.307 sec
Running org.apache.hadoop.fs.TestAvroFSInput
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.515 sec
Running org.apache.hadoop.fs.TestLocal_S3FileContextURI
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.303 sec
Running org.apache.hadoop.fs.TestFilterFs
Tests run: 1, Failures: 

migrating private branches to the new git repo

2014-09-02 Thread Steve Loughran
Now that hadoop is using git, I'm migrating my various work-in-progress
branches to the new commit tree


1. This is the process I've written up for using git format-patch then git
am to export the patch sequence and merge it in, then rebasing onto trunk
to finally get in sync

https://wiki.apache.org/hadoop/MigratingPrivateGitBranches

2. The Git and hadoop docs cover git graft:
https://wiki.apache.org/hadoop/GitAndHadoop#Grafts_for_complete_project_history

I'm not sure if/how that relates

Is there any easier way than what I've described for doing the move?
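
For reference, that format-patch/am flow can be sketched end-to-end with stock 
git commands. The repo layout, branch names, and JIRA key below are invented 
for the demo and are not taken from the wiki page:

```shell
#!/bin/sh
# Sketch of the format-patch -> am migration flow; names are made up.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "old" stands in for the pre-migration repo holding a private branch
git init -q old
cd old
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m "old base"
base=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
git checkout -q -b feature/DEMO-1
echo work > work.txt
git add work.txt
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q -m "DEMO-1. work in progress"

# step 1: export the private branch as a patch series
git format-patch "$base" -o ../patches >/dev/null
cd ..

# "new" stands in for the freshly cloned ASF git repo
git init -q new
cd new
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m "new base"
git checkout -q -b feature/DEMO-1

# step 2: replay the series onto the new tree (-3 = three-way fallback)
git -c user.name=demo -c user.email=demo@example.invalid \
    am -3 ../patches/*.patch
```

After the replay, a rebase onto the new trunk brings the branch in sync, 
which is the final step the wiki page describes.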

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: migrating private branches to the new git repo

2014-09-02 Thread Andrew Wang
This is basically what I did, make patches of each of my branches and then
reapply to the new trunk. One small recommendation would be to make the
remote named apache rather than asflive so it's consistent with the
GitAndHadoop wikipage. IMO naming branches with a / (e.g. live/trunk)
is also kind of ambiguous, since it's the same syntax used to specify a
remote. It seems there can also be difficulties with directory and
filenames.

Somewhat related, it'd be nice to update the GitAndHadoop instructions on
how to generate a patch using git-format-patch. I've been using plain old
git diff for a while, but format-patch seems better. It'd be especially
nice if a recommended .gitconfig section was made available :)

I plan to play with format-patch some in the near future and might do this
myself, but if any git gurus already have this ready to go, feel free to
edit.


On Tue, Sep 2, 2014 at 4:10 AM, Steve Loughran ste...@hortonworks.com
wrote:

 Now that hadoop is using git, I'm migrating my various work-in-progress
 branches to the new commit tree


 1. This is the process I've written up for using git format-patch then git
 am to export the patch sequence and merge it in, then rebasing onto trunk
 to finally get in sync

 https://wiki.apache.org/hadoop/MigratingPrivateGitBranches

 2. The Git and hadoop docs cover git graft:

 https://wiki.apache.org/hadoop/GitAndHadoop#Grafts_for_complete_project_history

 I'm not sure if/how that relates

 Is there any easier way than what I've described for doing the move?




[jira] [Created] (HADOOP-11047) tomcat.download.url is mostly ignored by kms

2014-09-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11047:
-

 Summary: tomcat.download.url is mostly ignored by kms
 Key: HADOOP-11047
 URL: https://issues.apache.org/jira/browse/HADOOP-11047
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


Specifying tomcat.download.url as a Maven property only works if 
tomcat.version matches the version in the URL. This is different from how 
hadoop-auth works, resulting in unexpected build breaks.





[jira] [Resolved] (HADOOP-11047) tomcat.download.url is mostly ignored by kms

2014-09-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11047.
---
Resolution: Not a Problem

Looks like it was always this way, but because I was using the same version, it 
worked.  

 tomcat.download.url is mostly ignored by kms
 

 Key: HADOOP-11047
 URL: https://issues.apache.org/jira/browse/HADOOP-11047
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer

 Specifying tomcat.download.url as a maven property only works if the 
 tomcat.version matches the one in the URL. This is different from how it 
 previously worked, resulting in unexpected build breaks.





Re: migrating private branches to the new git repo

2014-09-02 Thread Steve Loughran
On 2 September 2014 19:01, Andrew Wang andrew.w...@cloudera.com wrote:

 This is basically what I did, make patches of each of my branches and then
 reapply to the new trunk. One small recommendation would be to make the
 remote named apache rather than asflive so it's consistent with the
 GitAndHadoop wikipage. IMO naming branches with a / (e.g. live/trunk)
 is also kind of ambiguous, since it's the same syntax used to specify a
 remote. It seems there can also be difficulties with directory and
 filenames.


once done you can rename stuff.

I use the hierarchy structure as:
 - git flow does it (e.g. feature/JIRA-557)
 - you can isolate your work (stevel/) from things you've pulled in
(incoming/aw)

you do have to keep your subdir name different from your remote name though,
or it does get confused, as you point out.

Maybe at the end add the action I will ultimately take (git branch remote
rename asflive origin && git branch rename live/trunk trunk)
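
Spelled out in actual git syntax, that final action would presumably be 
`git remote rename asflive origin` plus `git branch -m live/trunk trunk`. A 
self-contained sketch; the repo and remote URL below are fake, set up only so 
the two commands have something to act on:

```shell
#!/bin/sh
# Toy repo mimicking the setup described in this thread; URL is fake.
set -e
d=$(mktemp -d); cd "$d"
git init -q work; cd work
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m "base"
git remote add asflive https://example.invalid/hadoop-common.git
git branch -m live/trunk          # current branch becomes live/trunk

# the action from the mail, in real git syntax:
git remote rename asflive origin
git branch -m live/trunk trunk
```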



 Somewhat related, it'd be nice to update the GitAndHadoop instructions on
 how to generate a patch using git-format-patch. I've been using plain old
 git diff for a while, but format-patch seems better. It'd be especially
 nice if a recommended .gitconfig section was made available :)

 I plan to play with format-patch some in the near future and might do this
 myself, but if any git gurus already have this ready to go, feel free to
 edit.


Does the patch submit code take it?





 On Tue, Sep 2, 2014 at 4:10 AM, Steve Loughran ste...@hortonworks.com
 wrote:

  Now that hadoop is using git, I'm migrating my various work-in-progress
  branches to the new commit tree
 
 
  1. This is the process I've written up for using git format-patch then
 git
  am to export the patch sequence and merge it in, then rebasing onto trunk
  to finally get in sync
 
  https://wiki.apache.org/hadoop/MigratingPrivateGitBranches
 
  2. The Git and hadoop docs cover git graft:
 
 
 https://wiki.apache.org/hadoop/GitAndHadoop#Grafts_for_complete_project_history
 
  I'm not sure if/how that relates
 
  Is there any easier way than what I've described for doing the move?
 
 




Re: migrating private branches to the new git repo

2014-09-02 Thread Steve Loughran
I've now done my first commits; one into trunk (10373), one into branch-2
and cherry picked (fix in
hadoop-common-project/hadoop-common/src/main/native/README ; no JIRA).

I made an initial attempt to cherry pick the HADOOP-10373 patch from trunk
into branch-2, with CHANGES.TXT being a dramatic enough change that it
takes human intervention to patch.

implication


   1. committing to branch-2 with changes.txt in the same commit followed
   by a cherry pick forwards works.
   2. committing to trunk only backports reliably if the changes.txt files
   are patched in a separate commit

This is no different from SVN, except that an svn merge used different
commands.

I have not tried the git format-patch/git am option, which would be:


   1. -use git am -3 to apply the patch to the HEAD of both branch-2 and
   trunk
   2. -patch changes.txt in each branch, then either commit separately
   3. -or try and amend latest commit for the patches

#3 seems appealing, but it'd make the diff on the two branches different.
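
Implication 2 above, keeping the code fix and the CHANGES.txt note in separate 
commits so the code commit cherry-picks cleanly, looks roughly like this. 
Branch names mirror the thread; the JIRA key is a placeholder:

```shell
#!/bin/sh
# Toy repo: code-only commit backports cleanly because the conflict-prone
# CHANGES.txt edit lives in its own commit on each branch.
set -e
d=$(mktemp -d); cd "$d"
git init -q repo; cd repo
g() { git -c user.name=demo -c user.email=demo@example.invalid "$@"; }

echo "Release notes" > CHANGES.txt
git add CHANGES.txt
g commit -q -m "base"
git branch branch-2                     # branch-2 forks from base

# trunk: code-only commit, then a separate CHANGES.txt commit
echo "the fix" > fix.c
git add fix.c
g commit -q -m "HADOOP-0000. the fix (code only)"
fix=$(git rev-parse HEAD)
echo "HADOOP-0000 (trunk)" >> CHANGES.txt
g commit -q -am "HADOOP-0000. CHANGES.txt for trunk"

# backport: only the code commit moves; it applies without conflict
git checkout -q branch-2
g cherry-pick -x "$fix" >/dev/null
echo "HADOOP-0000 (branch-2)" >> CHANGES.txt
g commit -q -am "HADOOP-0000. CHANGES.txt for branch-2"
```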



On 2 September 2014 19:01, Andrew Wang andrew.w...@cloudera.com wrote:

 This is basically what I did, make patches of each of my branches and then
 reapply to the new trunk. One small recommendation would be to make the
 remote named apache rather than asflive so it's consistent with the
 GitAndHadoop wikipage. IMO naming branches with a / (e.g. live/trunk)
 is also kind of ambiguous, since it's the same syntax used to specify a
 remote. It seems there can also be difficulties with directory and
 filenames.

 Somewhat related, it'd be nice to update the GitAndHadoop instructions on
 how to generate a patch using git-format-patch. I've been using plain old
 git diff for a while, but format-patch seems better. It'd be especially
 nice if a recommended .gitconfig section was made available :)

 I plan to play with format-patch some in the near future and might do this
 myself, but if any git gurus already have this ready to go, feel free to
 edit.


 On Tue, Sep 2, 2014 at 4:10 AM, Steve Loughran ste...@hortonworks.com
 wrote:

  Now that hadoop is using git, I'm migrating my various work-in-progress
  branches to the new commit tree
 
 
  1. This is the process I've written up for using git format-patch then
 git
  am to export the patch sequence and merge it in, then rebasing onto trunk
  to finally get in sync
 
  https://wiki.apache.org/hadoop/MigratingPrivateGitBranches
 
  2. The Git and hadoop docs cover git graft:
 
 
 https://wiki.apache.org/hadoop/GitAndHadoop#Grafts_for_complete_project_history
 
  I'm not sure if/how that relates
 
  Is there any easier way than what I've described for doing the move?
 
 




[jira] [Created] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-02 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-11048:


 Summary: user/custom LogManager fails to load if the client 
classloader is enabled
 Key: HADOOP-11048
 URL: https://issues.apache.org/jira/browse/HADOOP-11048
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee


If the client classloader is enabled (HADOOP-10893) and you happen to use a 
user-provided log manager via -Djava.util.logging.manager, it fails to load the 
custom log manager:

{noformat}
Could not load Logmanager org.foo.LogManager
java.lang.ClassNotFoundException: org.foo.LogManager
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.util.logging.LogManager$1.run(LogManager.java:191)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.logging.LogManager.<clinit>(LogManager.java:181)
at java.util.logging.Logger.demandLogger(Logger.java:339)
at java.util.logging.Logger.getLogger(Logger.java:393)
at 
com.google.common.collect.MapMakerInternalMap.<clinit>(MapMakerInternalMap.java:136)
at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
at 
com.google.common.collect.Interners$CustomInterner.<init>(Interners.java:59)
at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
at org.apache.hadoop.util.StringInterner.<clinit>(StringInterner.java:49)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{noformat}

This happens because Configuration.loadResources() is invoked before the 
client classloader is created and made available.





[jira] [Created] (HADOOP-11049) javax package system class default is too broad

2014-09-02 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-11049:


 Summary: javax package system class default is too broad
 Key: HADOOP-11049
 URL: https://issues.apache.org/jira/browse/HADOOP-11049
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor


The system class default defined in ApplicationClassLoader includes "javax.". 
This is too broad. The intent of the system classes is to exempt classes that 
are provided by the JDK, along with hadoop classes and the minimally necessary 
dependencies that are guaranteed to be on the system classpath. "javax." is 
too broad for that.

For example, JSR-330, which is part of Java EE (not Java SE), has 
javax.inject. Packages like these should not be declared as system classes, 
as they will result in a ClassNotFoundException if they are needed and 
present on the user classpath.





[jira] [Created] (HADOOP-11050) hconf.c: fix bug where we would sometimes not try to load multiple XML files from the same path

2014-09-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11050:
-

 Summary: hconf.c: fix bug where we would sometimes not try to load 
multiple XML files from the same path
 Key: HADOOP-11050
 URL: https://issues.apache.org/jira/browse/HADOOP-11050
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


hconf.c: fix bug where we would sometimes not try to load multiple XML files 
from the same path





[jira] [Created] (HADOOP-11051) implement ndfs_get_hosts

2014-09-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11051:
-

 Summary: implement ndfs_get_hosts
 Key: HADOOP-11051
 URL: https://issues.apache.org/jira/browse/HADOOP-11051
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Implement ndfs_get_hosts, the hadoop native client version of getHosts.





Re: migrating private branches to the new git repo

2014-09-02 Thread Andrew Wang

 Maybe at the end add the action I will ultimately take (git branch remote
 rename asflive origin  git branch rename live/trunk trunk)

 +1, that'd be great


  Somewhat related, it'd be nice to update the GitAndHadoop instructions on
  how to generate a patch using git-format-patch. I've been using plain old
  git diff for a while, but format-patch seems better. It'd be especially
  nice if a recommended .gitconfig section was made available :)
 
  I plan to play with format-patch some in the near future and might do
 this
  myself, but if any git gurus already have this ready to go, feel free to
  edit.
 

 Does the patch submit code take it?

 I've seen a few patches from Colin generated via format-patch, so I think
it takes.


Re: migrating private branches to the new git repo

2014-09-02 Thread Andrew Wang
Not to derail the conversation, but if CHANGES.txt is making backports more
annoying, why don't we get rid of it? It seems like we should be able to
generate it via a JIRA query, and git log can also be used for a quick
check (way faster than svn log).
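
That git log quick check is a one-liner; a tiny self-contained demo (the JIRA 
key is a placeholder):

```shell
#!/bin/sh
# "Did this JIRA land on this branch?" answered from the commit log
# instead of CHANGES.txt. Repo and key are made up for the demo.
set -e
d=$(mktemp -d); cd "$d"
git init -q repo; cd repo
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m "HADOOP-0000. some fix"
# the quick check:
git log --oneline --grep='HADOOP-0000'
```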


On Tue, Sep 2, 2014 at 12:38 PM, Steve Loughran ste...@hortonworks.com
wrote:

 I've now done my first commits; one into trunk (10373), one into branch-2
 and cherry picked (fix in
 hadoop-common-project/hadoop-common/src/main/native/README ; no JIRA).

 I made an initial attempt to cherry pick the HADOOP-10373 patch from trunk
 into branch-2, with CHANGES.TXT being a dramatic enough change that it
 takes human intervention to patch.

 implication


1. committing to branch-2 with changes.txt in the same commit followed
by a cherry pick forwards works.
2. committing to trunk only backports reliably if the changes.txt files
are patched in a separate commit

 This is no different from SVN, except that an svn merge used different
 commands.

 I have not tried the git format-patch/git am option, which would be:


1. -use git am -3 to apply the patch to the HEAD of both branch-2 and
trunk
2. -patch changes.txt in each branch, then either commit separately
3. -or try and amend latest commit for the patches

 #3 seems appealing, but it'd make the diff on the two branches different.



 On 2 September 2014 19:01, Andrew Wang andrew.w...@cloudera.com wrote:

  This is basically what I did, make patches of each of my branches and
 then
  reapply to the new trunk. One small recommendation would be to make the
  remote named apache rather than asflive so it's consistent with the
  GitAndHadoop wikipage. IMO naming branches with a / (e.g. live/trunk)
  is also kind of ambiguous, since it's the same syntax used to specify a
  remote. It seems there can also be difficulties with directory and
  filenames.
 
  Somewhat related, it'd be nice to update the GitAndHadoop instructions on
  how to generate a patch using git-format-patch. I've been using plain old
  git diff for a while, but format-patch seems better. It'd be especially
  nice if a recommended .gitconfig section was made available :)
 
  I plan to play with format-patch some in the near future and might do
 this
  myself, but if any git gurus already have this ready to go, feel free to
  edit.
 
 
  On Tue, Sep 2, 2014 at 4:10 AM, Steve Loughran ste...@hortonworks.com
  wrote:
 
   Now that hadoop is using git, I'm migrating my various work-in-progress
   branches to the new commit tree
  
  
   1. This is the process I've written up for using git format-patch then
  git
   am to export the patch sequence and merge it in, then rebasing onto
 trunk
   to finally get in sync
  
   https://wiki.apache.org/hadoop/MigratingPrivateGitBranches
  
   2. The Git and hadoop docs cover git graft:
  
  
 
 https://wiki.apache.org/hadoop/GitAndHadoop#Grafts_for_complete_project_history
  
   I'm not sure if/how that relates
  
   Is there any easier way than what I've described for doing the move?
  
  
 




[jira] [Created] (HADOOP-11053) Depends on EOL commons-httpclient

2014-09-02 Thread Steven Noble (JIRA)
Steven Noble created HADOOP-11053:
-

 Summary: Depends on EOL commons-httpclient
 Key: HADOOP-11053
 URL: https://issues.apache.org/jira/browse/HADOOP-11053
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
 Environment: cdh5.1.0
Reporter: Steven Noble


Hadoop currently does not exclude commons-httpclient, which is EOL (see 
http://hc.apache.org/httpclient-3.x/) and appears to conflict with its 
replacement, org.apache.httpcomponents:httpclient (see 
http://hc.apache.org/httpcomponents-client-ga/).

This appears to have been introduced in 
https://issues.apache.org/jira/browse/HADOOP-9557.





[jira] [Created] (HADOOP-11054) Add a KeyProvider instantiation based on a URI

2014-09-02 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11054:
---

 Summary: Add a KeyProvider instantiation based on a URI
 Key: HADOOP-11054
 URL: https://issues.apache.org/jira/browse/HADOOP-11054
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Andrew Wang


Currently there is no way to instantiate a {{KeyProvider}} given a URI.

In the case of HDFS encryption, it would be desirable to be able to explicitly 
specify a KeyProvider URI to avoid obscure misconfigurations.





[jira] [Created] (HADOOP-11055) non-daemon pid files are missing

2014-09-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11055:
-

 Summary: non-daemon pid files are missing
 Key: HADOOP-11055
 URL: https://issues.apache.org/jira/browse/HADOOP-11055
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer
Priority: Blocker


Somewhere along the way, non-secure daemons run in default mode lost their pid 
files.





Re: migrating private branches to the new git repo

2014-09-02 Thread Todd Lipcon
On Tue, Sep 2, 2014 at 2:38 PM, Andrew Wang andrew.w...@cloudera.com
wrote:

 Not to derail the conversation, but if CHANGES.txt is making backports more
 annoying, why don't we get rid of it? It seems like we should be able to
 generate it via a JIRA query, and git log can also be used for a quick
 check (way faster than svn log).


+1, I've always found CHANGES.txt to be a big pain in the butt, and it often 
ends up incorrect, too.




 On Tue, Sep 2, 2014 at 12:38 PM, Steve Loughran ste...@hortonworks.com
 wrote:

  I've now done my first commits; one into trunk (10373), one into branch-2
  and cherry picked (fix in
  hadoop-common-project/hadoop-common/src/main/native/README ; no JIRA).
 
  I made an initial attempt to cherry pick the HADOOP-10373 patch from
 trunk
  into branch-2, with CHANGES.TXT being a dramatic enough change that it
  takes human intervention to patch.
 
  implication
 
 
 1. committing to branch-2 with changes.txt in the same commit followed
 by a cherry pick forwards works.
 2. committing to trunk only backports reliably if the changes.txt
 files
 are patched in a separate commit
 
  This is no different from SVN, except that an svn merge used different
  commands.
 
  I have not tried the git format-patch/git am option, which would be:
 
 
 1. -use git am -3 to apply the patch to the HEAD of both branch-2 and
 trunk
 2. -patch changes.txt in each branch, then either commit separately
 3. -or try and amend latest commit for the patches
 
  #3 seems appealing, but it'd make the diff on the two branches different.
 
 
 
  On 2 September 2014 19:01, Andrew Wang andrew.w...@cloudera.com wrote:
 
   This is basically what I did, make patches of each of my branches and
  then
   reapply to the new trunk. One small recommendation would be to make the
   remote named apache rather than asflive so it's consistent with the
   GitAndHadoop wikipage. IMO naming branches with a / (e.g.
 live/trunk)
   is also kind of ambiguous, since it's the same syntax used to specify a
   remote. It seems there can also be difficulties with directory and
   filenames.
  
   Somewhat related, it'd be nice to update the GitAndHadoop instructions
 on
   how to generate a patch using git-format-patch. I've been using plain
 old
   git diff for a while, but format-patch seems better. It'd be
 especially
   nice if a recommended .gitconfig section was made available :)
  
   I plan to play with format-patch some in the near future and might do
  this
   myself, but if any git gurus already have this ready to go, feel free
 to
   edit.
  
  
   On Tue, Sep 2, 2014 at 4:10 AM, Steve Loughran ste...@hortonworks.com
 
   wrote:
  
Now that hadoop is using git, I'm migrating my various
 work-in-progress
branches to the new commit tree
   
   
1. This is the process I've written up for using git format-patch
 then
   git
am to export the patch sequence and merge it in, then rebasing onto
  trunk
to finally get in sync
   
https://wiki.apache.org/hadoop/MigratingPrivateGitBranches
   
2. The Git and hadoop docs cover git graft:
   
   
  
 
 https://wiki.apache.org/hadoop/GitAndHadoop#Grafts_for_complete_project_history
   
I'm not sure if/how that relates
   
Is there any easier way than what I've described for doing the move?
   
   
  
 
 




-- 
Todd Lipcon
Software Engineer, Cloudera