Re: Git repo ready to use

2014-09-04 Thread Sangjin Lee
It seems like the github mirror at https://github.com/apache/hadoop-common
has stopped getting updates as of 8/22. Could this mirror have been broken
by the git transition?

Thanks,
Sangjin


On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu yuzhih...@gmail.com wrote:

 From https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console :

 ERROR: No artifacts found that match the file pattern
 trunk/hadoop-hdfs-project/*/target/*.tar.gz. Configuration error?
 ERROR: 'trunk/hadoop-hdfs-project/*/target/*.tar.gz' doesn't match
 anything, but 'hadoop-hdfs-project/*/target/*.tar.gz' does. Perhaps
 that's what you mean?


 I corrected the path to the HDFS tarball.


 FYI



 On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur t...@cloudera.com
 wrote:

  it seems we missed updating the HADOOP precommit job to use Git; it was
  still using SVN. I've just updated it.
 
  thx
 
 
  On Thu, Aug 28, 2014 at 9:26 PM, Ted Yu yuzhih...@gmail.com wrote:
 
   Currently patchprocess/ (contents shown below) is one level higher than
   ${WORKSPACE}

   diffJavadocWarnings.txt
   filteredPatchJavacWarnings.txt
   filteredTrunkJavacWarnings.txt
   jira
   newPatchFindbugsWarningshadoop-hdfs.html
   newPatchFindbugsWarningshadoop-hdfs.xml
   patch
   patchEclipseOutput.txt
   patchFindBugsOutputhadoop-hdfs.txt
   patchFindbugsWarningshadoop-hdfs.xml
   patchJavacWarnings.txt
   patchJavadocWarnings.txt
   patchReleaseAuditOutput.txt
   patchReleaseAuditWarnings.txt
   testrun_hadoop-hdfs.txt
   trunkJavacWarnings.txt
   trunkJavadocWarnings.txt
  
   Under the "Files to archive" input box of PreCommit-HDFS-Build/configure,
   I saw:

   '../patchprocess/*' doesn't match anything, but '*' does. Perhaps that's
   what you mean?
  
   I guess once patchprocess is moved back under ${WORKSPACE}, a lot of
   things would be back to normal.
  
   Cheers
  
   On Thu, Aug 28, 2014 at 9:16 PM, Alejandro Abdelnur t...@cloudera.com
 
   wrote:
  
i'm also seeing broken links for javadocs warnings.
   
Alejandro
(phone typing)
   
 On Aug 28, 2014, at 20:00, Andrew Wang andrew.w...@cloudera.com
   wrote:

 I noticed that the JUnit test results aren't getting picked up
   anymore. I
 suspect we just need to update the path to the surefire output, but
   based
 on a quick examination I'm not sure what that is.

 Does someone mind taking another look?


 On Thu, Aug 28, 2014 at 4:21 PM, Karthik Kambatla 
  ka...@cloudera.com
 wrote:

 Thanks Giri and Ted for fixing the builds.


 On Thu, Aug 28, 2014 at 9:49 AM, Ted Yu yuzhih...@gmail.com
  wrote:

 Charles:
 QA build is running for your JIRA:

  https://builds.apache.org/job/PreCommit-hdfs-Build/7828/parameters/

 Cheers


 On Thu, Aug 28, 2014 at 9:41 AM, Charles Lamb 
 cl...@cloudera.com
  
 wrote:

  On 8/28/2014 12:07 PM, Giridharan Kesavan wrote:

   Fixed all 3 pre-commit builds. test-patch's git reset --hard is
   removing the patchprocess dir, so moved it off the workspace.

  Thanks Giri. Should I resubmit HDFS-6954's patch? I've gotten 3 or 4
  jenkins messages that indicated the problem so something is resubmitting,
  but now that you've fixed it, should I resubmit it again?

 Charles

   
  
 
 
 
  --
  Alejandro
 



Re: Git repo ready to use

2014-09-04 Thread Vinayakumar B
I think it's still pointing to the old svn repository, which is now
read-only.

You can use latest mirror:
https://github.com/apache/hadoop

Regards,
Vinay
On Sep 4, 2014 11:37 AM, Sangjin Lee sjl...@gmail.com wrote:




Build failed in Jenkins: Hadoop-Common-0.23-Build #1062

2014-09-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1062/

--
[...truncated 8263 lines...]
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.879 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.323 sec
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.047 sec
Running org.apache.hadoop.io.nativeio.TestNativeIO
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.158 sec
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec
Running org.apache.hadoop.io.TestMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.644 sec
Running org.apache.hadoop.io.TestUTF8
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.267 sec
Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.042 sec
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.196 sec
Running org.apache.hadoop.io.TestSetFile
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.923 sec
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.317 sec
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.536 sec
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.686 sec
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec
Running org.apache.hadoop.util.TestJarFinder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.778 sec
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.313 sec
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.142 sec
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.486 sec
Running org.apache.hadoop.util.TestStringUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.14 sec
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec
Running org.apache.hadoop.util.TestAsyncDiskService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.128 sec
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.128 sec
Running org.apache.hadoop.util.TestOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.203 sec
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.644 sec
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.116 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.428 sec
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.356 sec
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.67 sec
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.141 sec

Re: auto-generating changes.txt was: migrating private branches to the new git repo

2014-09-04 Thread Steve Loughran
is there any way of isolating compatible/incompatible changes and new
features?

I know that any change is potentially incompatible, but it is still good to
highlight the things we know are likely to cause trouble.


On 4 September 2014 02:51, Allen Wittenauer a...@altiscale.com wrote:

 Nothing official or clean or whatever, but just to give people an idea of
 what an auto generated CHANGES.txt file might look like, here are some
 sample runs of the hacky thing I built, based upon the fixVersion
 information.  It doesn't break it down by improvement, etc.  Also, the name
 on the end is the person to whom the JIRA is assigned.  If it is unassigned,
 then it comes out blank.  It's interesting to note that in the 2.5.1 notes,
 it does appear to have caught a commit missing from the changes.txt….

 2.5.1: http://pastebin.com/jXfz5wXz

 2.6.0: http://pastebin.com/5nkSsU18

 3.0.0: http://pastebin.com/3Ek4tP8d

 One thing I didn't do was append the previous versions onto these files,
 which is what I'd expect to happen.

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


[jira] [Created] (HADOOP-11061) Make Shell probe for winutils.exe more rigorous

2014-09-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11061:
---

 Summary: Make Shell probe for winutils.exe more rigorous
 Key: HADOOP-11061
 URL: https://issues.apache.org/jira/browse/HADOOP-11061
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.5.0
 Environment: windows
Reporter: Steve Loughran
Priority: Minor


The probe for winutils.exe being valid is simple: it only checks that the
file exists.

It could be stricter and catch some (unlikely but possible) failure modes:

# winutils.exe being a directory
# winutils.exe being a 0-byte file
# winutils.exe not being readable
# winutils.exe not having the magic MZ header

These checks could all be combined by opening the file and validating the
header; all of the conditions above would be detected.
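The combined check could be sketched as follows. This is only an illustration (in Python for brevity; the real probe lives in Hadoop's Java `org.apache.hadoop.util.Shell` class, and the function name here is hypothetical):

```python
import os

MZ_MAGIC = b"MZ"  # first two bytes of every DOS/Windows executable header

def is_valid_winutils(path):
    """Return True only if `path` looks like a usable winutils.exe:
    an existing, readable, non-empty regular file starting with 'MZ'."""
    if not os.path.isfile(path):      # rejects missing paths and directories
        return False
    try:
        with open(path, "rb") as f:   # open() fails if the file is unreadable
            header = f.read(2)
    except OSError:
        return False
    return header == MZ_MAGIC         # rejects 0-byte and non-executable files
```

A single open-and-read covers all four failure modes listed above, so no separate size or readability probes are needed.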



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: auto-generating changes.txt was: migrating private branches to the new git repo

2014-09-04 Thread Allen Wittenauer

I hacked on it some more and yes, it’s very easy to detect:

* type of jira (improvement, bug, new feature, wish, task, sub-task)
* incompatible or not (regardless of type)
* reviewed or not (regardless of type)

A key question is what to do about tasks, sub-tasks, and wishes.  I 
haven’t tried yet, but I’m fairly confident that sub-tasks list what the parent 
is, so those can probably be handled specially.  We either need to list tasks 
and wishes as what they are or, in the script, turn them into something else.

Also added some sorting to it based upon number.  I’m not sure what 
order we’re getting from JIRA now.  Last updated time is going to be wonky with 
all the changes I did.  I didn’t look to see if there was a ‘resolve date’.

It should be noted that the release note script already created 
different files: a master one and one for each project.  That functionality was 
kept but I just shared the master one because it was easier. ;)


https://github.com/aw-altiscale/hadoop/blob/trunk/dev-support/changes.py is the 
current version if someone wants to look at it.
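	The grouping-by-type and sorting-by-number steps described above could look roughly like the following. This is not the linked changes.py, just a minimal sketch assuming each JIRA issue is represented as a dict with hypothetical 'key' and 'type' fields:

```python
from collections import defaultdict

def issue_number(key):
    """Numeric part of a JIRA key: 'HADOOP-11061' -> 11061."""
    return int(key.rsplit("-", 1)[1])

def group_changes(issues):
    """Group issues by type, with each group sorted by issue number
    rather than by whatever order the JIRA query returned."""
    groups = defaultdict(list)
    for issue in issues:
        groups[issue["type"]].append(issue)
    for bucket in groups.values():
        bucket.sort(key=lambda i: issue_number(i["key"]))
    return dict(groups)
```

	Sorting on the numeric part of the key avoids depending on JIRA's "last updated" ordering, which, as noted above, is wonky after bulk edits.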

We do need to have a talk about 3.x though.  Looking over the list, it 
would appear that a lot of (what became) early 2.x JIRAs were marked as Fixed 
against 3.x.  Probably two-thirds of the JIRAs showing up!  I think it may be safe 
to just say “everything before date X should be set to 2.x”, but I’m not sure 
of what that date would be.  Doing that would chop the 3.x change list down to 
a much more realistic size, esp when compared to the manual changes file.

On Sep 4, 2014, at 4:38 AM, Steve Loughran ste...@hortonworks.com wrote:

 is there any way of isolating compatible/incompatible changes,  new
 features?
 
 I know that any change is potentially incompatible —but it is still good to
 highlight the things we know are likely to cause trouble
 
 
 On 4 September 2014 02:51, Allen Wittenauer a...@altiscale.com wrote:
 
 Nothing official or clean or whatever, but just to give people an idea of
 what an auto generated CHANGES.txt file might look like, here are some
 sample runs of the hacky thing I built, based upon the fixVersion
 information.  It doesn't break it down by improvement, etc.  Also, the name
 on the end is the person who the JIRA is assigned.  If it is unassigned,
 then it comes out blank.  It's interesting to note that in the 2.5.1 notes,
 it does appear to have caught a commit missing from the changes.txt….
 
 2.5.1: http://pastebin.com/jXfz5wXz
 
 2.6.0: http://pastebin.com/5nkSsU18
 
 3.0.0: http://pastebin.com/3Ek4tP8d
 
 One thing I didn't do was append the previous versions onto these files,
 which is what I'd expect to happen.
 
 -- 
 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity to 
 which it is addressed and may contain information that is confidential, 
 privileged and exempt from disclosure under applicable law. If the reader 
 of this message is not the intended recipient, you are hereby notified that 
 any printing, copying, dissemination, distribution, disclosure or 
 forwarding of this communication is strictly prohibited. If you have 
 received this communication in error, please contact the sender immediately 
 and delete it from your system. Thank You.



Re: Git repo ready to use

2014-09-04 Thread Sangjin Lee
That's good to know. Thanks.


On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B vinayakum...@apache.org
wrote:

 I think its still pointing to old svn repository which is just read only
 now.

 You can use latest mirror:
 https://github.com/apache/hadoop

 Regards,
 Vinay
 On Sep 4, 2014 11:37 AM, Sangjin Lee sjl...@gmail.com wrote:

  It seems like the github mirror at
 https://github.com/apache/hadoop-common
  has stopped getting updates as of 8/22. Could this mirror have been
 broken
  by the git transition?
 
  Thanks,
  Sangjin
 
 
  On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu yuzhih...@gmail.com wrote:
 
   From https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console :
  
   ERROR: No artifacts found that match the file pattern
   trunk/hadoop-hdfs-project/*/target/*.tar.gz. Configuration
   error?ERROR http://stacktrace.jenkins-ci.org/search?query=ERROR:
   ?trunk/hadoop-hdfs-project/*/target/*.tar.gz? doesn?t match anything,
   but ?hadoop-hdfs-project/*/target/*.tar.gz? does. Perhaps that?s what
   you mean?
  
  
   I corrected the path to hdfs tar ball.
  
  
   FYI
  
  
  
   On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur t...@cloudera.com
 
   wrote:
  
it seems we missed updating the HADOOP precommit job to use Git, it
 was
still using SVN. I've just updated it.
   
thx
   
   
On Thu, Aug 28, 2014 at 9:26 PM, Ted Yu yuzhih...@gmail.com wrote:
   
 Currently patchprocess/ (contents shown below) is one level higher
  than
 ${WORKSPACE}

 diffJavadocWarnings.txt
newPatchFindbugsWarningshadoop-hdfs.html
  patchFindBugsOutputhadoop-hdfs.txtpatchReleaseAuditOutput.txt
  trunkJavadocWarnings.txt
 filteredPatchJavacWarnings.txt
  newPatchFindbugsWarningshadoop-hdfs.xml
 patchFindbugsWarningshadoop-hdfs.xml  patchReleaseAuditWarnings.txt
 filteredTrunkJavacWarnings.txt  patch
 patchJavacWarnings.txttestrun_hadoop-hdfs.txt
 jirapatchEclipseOutput.txt
  patchJavadocWarnings.txt  trunkJavacWarnings.txt

 Under Files to archive input box of
 PreCommit-HDFS-Build/configure, I
saw:

 '../patchprocess/*' doesn't match anything, but '*' does. Perhaps
   that's
 what you mean?

 I guess once patchprocess is moved back under ${WORKSPACE}, a lot
 of
things
 would be back to normal.

 Cheers

 On Thu, Aug 28, 2014 at 9:16 PM, Alejandro Abdelnur 
  t...@cloudera.com
   
 wrote:

  i'm also seeing broken links for javadocs warnings.
 
  Alejandro
  (phone typing)
 
   On Aug 28, 2014, at 20:00, Andrew Wang 
 andrew.w...@cloudera.com
  
 wrote:
  
   I noticed that the JUnit test results aren't getting picked up
 anymore. I
   suspect we just need to update the path to the surefire output,
  but
 based
   on a quick examination I'm not sure what that is.
  
   Does someone mind taking another look?
  
  
   On Thu, Aug 28, 2014 at 4:21 PM, Karthik Kambatla 
ka...@cloudera.com
   wrote:
  
   Thanks Giri and Ted for fixing the builds.
  
  
   On Thu, Aug 28, 2014 at 9:49 AM, Ted Yu yuzhih...@gmail.com
 
wrote:
  
   Charles:
   QA build is running for your JIRA:
  
https://builds.apache.org/job/PreCommit-hdfs-Build/7828/parameters/
  
   Cheers
  
  
   On Thu, Aug 28, 2014 at 9:41 AM, Charles Lamb 
   cl...@cloudera.com

   wrote:
  
   On 8/28/2014 12:07 PM, Giridharan Kesavan wrote:
  
   Fixed all the 3 pre-commit buids. test-patch's git reset
  --hard
is
   removing
   the patchprocess dir, so moved it off the workspace.
   Thanks Giri. Should I resubmit HDFS-6954's patch? I've
 gotten
  3
or 4
   jenkins messages that indicated the problem so something is
   resubmitting,
   but now that you've fixed it, should I resubmit it again?
  
   Charles
  
 

   
   
   
--
Alejandro
   
  
 



[jira] [Created] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-04 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11062:
---

 Summary: CryptoCodec testcases requiring OpenSSL should be run 
only if -Pnative is used
 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Andrew Wang


There are a few CryptoCodec-related test cases that require Hadoop native code 
and OpenSSL.

These tests should be skipped if -Pnative is not used when running the tests.
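In Hadoop's JUnit tests this gating is done against the native-code loader; purely as an illustration of the same skip-unless-capability pattern, here is a Python unittest sketch where "OpenSSL library found" stands in for the -Pnative condition (the class and test names are hypothetical):

```python
import ctypes.util
import unittest

# Stand-in capability check: is a native OpenSSL library present at all?
# In Hadoop the analogous check is whether native code was built/loaded.
OPENSSL_AVAILABLE = ctypes.util.find_library("crypto") is not None

class TestCryptoCodec(unittest.TestCase):
    @unittest.skipUnless(OPENSSL_AVAILABLE, "native OpenSSL not available")
    def test_roundtrip(self):
        # Placeholder body; the real tests would exercise encrypt/decrypt.
        self.assertTrue(True)
```

The point is that the suite still passes on machines without the native library; the test is reported as skipped instead of failing.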



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11063) KMS cannot deploy on Windows, because class names are too long.

2014-09-04 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-11063:
--

 Summary: KMS cannot deploy on Windows, because class names are too 
long.
 Key: HADOOP-11063
 URL: https://issues.apache.org/jira/browse/HADOOP-11063
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker


Windows has a maximum path length of 260 characters.  KMS includes several long 
class file names.  During packaging and creation of the distro, these paths get 
even longer, because the standard war directory structure and our 
share/hadoop/etc. structure are prepended.  The end result is that the final 
paths are longer than 260 characters, making it impossible to deploy a distro 
on Windows.
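A quick way to find the offending files in an extracted distro is to walk the tree and flag anything that would exceed the limit once unpacked under a destination prefix. A minimal sketch (the helper name and the assumed C:\hadoop install prefix are illustrative, not part of Hadoop):

```python
import os

MAX_PATH = 260  # classic Windows path limit, including the drive prefix

def overlong_paths(root, dest_prefix_len=len(r"C:\hadoop")):
    """Yield files under `root` whose relative path would exceed MAX_PATH
    once the distro is unpacked beneath a prefix of the given length."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            if dest_prefix_len + 1 + len(rel) > MAX_PATH:  # +1 for the separator
                yield rel
```

Running this over the packaged distro would show which KMS class files (with the war and share/hadoop directory structure prepended) blow past 260 characters.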



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)