[jira] [Resolved] (HADOOP-9191) TestAccessControlList and TestJobHistoryConfig fail with JDK7

2013-01-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9191.
-

   Resolution: Fixed
Fix Version/s: (was: 1-win)
 Hadoop Flags: Reviewed

I committed the patch to both branch-1 and branch-1-win.

> TestAccessControlList and TestJobHistoryConfig fail with JDK7
> -
>
> Key: HADOOP-9191
> URL: https://issues.apache.org/jira/browse/HADOOP-9191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.2.0, 1-win
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 1.2.0
>
> Attachments: HADOOP-9191-branch-1.001.patch, 
> HADOOP-9191-branch-1-win.001.patch, HADOOP-9191.patch
>
>
> Individual test cases have dependencies on a specific order of execution and 
> fail when the order is changed.
> TestAccessControlList.testNetGroups relies on Groups being initialized with a 
> hard-coded test class that subsequent test cases depend on.
> TestJobHistoryConfig.testJobHistoryLogging fails to shut down the 
> MiniDFSCluster on exit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9191) TestAccessControlList and TestJobHistoryConfig fail with JDK7

2013-01-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9191:
-

 Summary: TestAccessControlList and TestJobHistoryConfig fail with 
JDK7
 Key: HADOOP-9191
 URL: https://issues.apache.org/jira/browse/HADOOP-9191
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1.2.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Individual test cases depend on a specific order of execution and fail 
when the order is changed.

TestAccessControlList.testNetGroups relies on Groups being initialized with a 
hard-coded test class that subsequent test cases depend on.

TestJobHistoryConfig.testJobHistoryLogging fails to shut down the MiniDFSCluster 
on exit.
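
For illustration, a minimal sketch of the cleanup pattern that makes such tests 
order-independent (a hypothetical test class, assuming JUnit 4 and the branch-1 
MiniDFSCluster constructor; the actual fix is in the attached patches):

  // Hypothetical illustration only - not the committed patch.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hdfs.MiniDFSCluster;
  import org.junit.After;
  import org.junit.Test;

  public class TestJobHistoryLoggingCleanup {
    private MiniDFSCluster cluster;

    @Test
    public void testJobHistoryLogging() throws Exception {
      Configuration conf = new Configuration();
      cluster = new MiniDFSCluster(conf, 1, true, null);
      // ... exercise job history logging against the cluster ...
    }

    @After
    public void tearDown() {
      // Runs whether the test passes or fails, so later tests never see a
      // leftover cluster or a locked data directory.
      if (cluster != null) {
        cluster.shutdown();
        cluster = null;
      }
    }
  }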

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9190) packaging docs is broken

2013-01-08 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-9190:
-

 Summary: packaging docs is broken
 Key: HADOOP-9190
 URL: https://issues.apache.org/jira/browse/HADOOP-9190
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Thomas Graves


It looks like after the docs got converted to apt format in HADOOP-8427, mvn 
site package -Pdist,docs no longer works. If you run mvn site or mvn 
site:stage by itself they work fine; it's when you go to package it that it 
breaks.

The error is with broken links; here is one of them:

<broken-links>
  ...
</broken-links>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9189) "mvn package -Pdist" generates the docs even without the -Pdocs option

2013-01-08 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9189:


 Summary: "mvn package -Pdist" generates the docs even without the 
-Pdocs option
 Key: HADOOP-9189
 URL: https://issues.apache.org/jira/browse/HADOOP-9189
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla


{{mvn package}} shouldn't generate docs without the {{-Pdocs}} option being 
specified.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9185) TestFileCreation.testFsClose should clean up on exit.

2013-01-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9185.
-

   Resolution: Fixed
Fix Version/s: 1-win
 Hadoop Flags: Reviewed

I have committed this to branch-1-win.

> TestFileCreation.testFsClose should clean up on exit.
> -
>
> Key: HADOOP-9185
> URL: https://issues.apache.org/jira/browse/HADOOP-9185
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 1-win
>
> Attachments: HADOOP-9185.patch
>
>
> TestFileCreation.testFsClose fails to shut down the MiniDFSCluster on 
> successful exit. Subsequent tests can fail with the following exception:
> java.io.IOException: Cannot remove data directory: 
> d:\w\hc7\build\test\data\dfs\data
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:266)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:125)
>   at 
> org.apache.hadoop.hdfs.TestFileCreation.testDeleteOnExit(TestFileCreation.java:261)
> This was seen with JDK7 since testcase execution order is randomized.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Problem creating patch for HADOOP-9184

2013-01-08 Thread Jeremy Karn
Thanks!

On Tue, Jan 8, 2013 at 10:48 AM, Robert Evans  wrote:

> This is because your patch is against the 0.20 branch, not against trunk.
> Jenkins pre commit only works for trunk right now.  If the issue also
> exists on trunk then please provide a patch for trunk too, if it is a
> 1.0/0.20 specific issue then you
>


Re: Problem creating patch for HADOOP-9184

2013-01-08 Thread Robert Evans
This is because your patch is against the 0.20 branch, not against trunk.
Jenkins pre-commit only works for trunk right now. If the issue also
exists on trunk then please provide a patch for trunk too; if it is a
1.0/0.20-specific issue then you can run the pre-commit tests yourself and
just post the results.

--Bobby

On 1/8/13 9:43 AM, "Jeremy Karn"  wrote:

>I opened this jira yesterday and tried to include a patch for the problem
>but the Jenkins pre-commit job keeps failing because it says it can't
>apply
>my patch 
>(https://builds.apache.org/job/PreCommit-HADOOP-Build/2012//console).
> I thought at first the problem was because I generated the patch with
>git,
>but I've since done a svn checkout, regenerated the patch file, and been
>able to apply the commit locally without a problem.
>
>Any help would be appreciated!  Thanks,
>
>-- 
>
>Jeremy Karn / Lead Developer
>MORTAR DATA / www.mortardata.com



Problem creating patch for HADOOP-9184

2013-01-08 Thread Jeremy Karn
I opened this jira yesterday and tried to include a patch for the problem
but the Jenkins pre-commit job keeps failing because it says it can't apply
my patch (https://builds.apache.org/job/PreCommit-HADOOP-Build/2012//console).
 I thought at first the problem was because I generated the patch with git,
but I've since done a svn checkout, regenerated the patch file, and been
able to apply the commit locally without a problem.

Any help would be appreciated!  Thanks,

-- 

Jeremy Karn / Lead Developer
MORTAR DATA / www.mortardata.com


[jira] [Created] (HADOOP-9188) FileUtil.CopyMerge can support optional headers and footers when merging files

2013-01-08 Thread Ranadip (JIRA)
Ranadip created HADOOP-9188:
---

 Summary: FileUtil.CopyMerge can support optional headers and 
footers when merging files
 Key: HADOOP-9188
 URL: https://issues.apache.org/jira/browse/HADOOP-9188
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ranadip


Similar to addString, which is appended at the end of each merged file, there 
should be an option to add header rows and footer rows globally at the beginning 
and end of the single merged file.


Background: My experience has been that this method is mostly used to aggregate 
part-files into a single file that can be sent, for example, as a data feed 
file or a report to business customers. In such cases there is often a 
requirement to provide header and footer rows - with headers describing the 
schema of the data and the footers (or even headers in some cases) containing 
stats like total counts, bad record counts, etc. Along with [#HADOOP-9187], I 
have found myself replicating this method to add these features in at least 
three different use cases.
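
For illustration only, a rough sketch of what the requested option could look 
like (a hypothetical helper, not an existing FileUtil method; the real 
FileUtil.copyMerge takes only a single addString argument):

  // Hypothetical helper sketching the proposed header/footer option.
  import java.io.IOException;
  import java.io.OutputStream;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IOUtils;

  public class CopyMergeWithHeaderFooter {
    public static void copyMerge(FileSystem srcFS, Path srcDir,
        FileSystem dstFS, Path dstFile, Configuration conf,
        String header, String footer) throws IOException {
      OutputStream out = dstFS.create(dstFile);
      try {
        if (header != null) {
          out.write(header.getBytes("UTF-8"));   // written once, at the top
        }
        for (FileStatus part : srcFS.listStatus(srcDir)) {
          if (!part.isDir()) {
            IOUtils.copyBytes(srcFS.open(part.getPath()), out, conf, false);
          }
        }
        if (footer != null) {
          out.write(footer.getBytes("UTF-8"));   // written once, at the bottom
        }
      } finally {
        out.close();
      }
    }
  }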

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9187) FileUtil.CopyMerge should handle compressed input and output

2013-01-08 Thread Ranadip (JIRA)
Ranadip created HADOOP-9187:
---

 Summary: FileUtil.CopyMerge should handle compressed input and 
output
 Key: HADOOP-9187
 URL: https://issues.apache.org/jira/browse/HADOOP-9187
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Ranadip


This method, if run on compressed input, results in corrupt output since it 
just does a byte-by-byte concatenation disregarding compression codecs. It 
should automatically detect compression codecs from input files and handle them 
intelligently. 
Additionally, there should be an option to create a compressed output file so 
that the output can be efficiently stored and sent out to customers (over the 
network outside the cluster).
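
Again purely as a sketch, one possible shape for codec-aware merging built on 
the existing CompressionCodecFactory (the method itself is hypothetical):

  // Hypothetical variant: detect each input's codec from its file name,
  // decompress before concatenating, and optionally compress the output.
  import java.io.IOException;
  import java.io.InputStream;
  import java.io.OutputStream;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IOUtils;
  import org.apache.hadoop.io.compress.CompressionCodec;
  import org.apache.hadoop.io.compress.CompressionCodecFactory;

  public class CodecAwareCopyMerge {
    public static void copyMerge(FileSystem srcFS, Path srcDir,
        FileSystem dstFS, Path dstFile, Configuration conf,
        CompressionCodec outputCodec) throws IOException {
      CompressionCodecFactory codecs = new CompressionCodecFactory(conf);
      OutputStream out = dstFS.create(dstFile);
      if (outputCodec != null) {
        out = outputCodec.createOutputStream(out);  // compress merged output
      }
      try {
        for (FileStatus part : srcFS.listStatus(srcDir)) {
          if (part.isDir()) {
            continue;
          }
          InputStream in = srcFS.open(part.getPath());
          CompressionCodec inCodec = codecs.getCodec(part.getPath());
          if (inCodec != null) {
            in = inCodec.createInputStream(in);     // decompress this part
          }
          IOUtils.copyBytes(in, out, conf, false);
          in.close();
        }
      } finally {
        out.close();
      }
    }
  }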

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Sorting user defined MR counters.

2013-01-08 Thread Steve Loughran
On 7 January 2013 15:57, Niels Basjes  wrote:

> Hi Steve,
>
> > > Now for submitting changes for Hadoop: Is it desirable that I fix these in
> > > my change set or should I leave these as-is to avoid "obfuscating" the
> > > changes that are relevant to the Jira at hand?
> > >
> >
> > I recommend a cleanup first -that's likely to go in without any argument.
> > Your patch with the new features would be a diff against the cleaned-up code,
> > so would have fewer changes to be reviewed.
> >
>
>
> Ok, I'll have a look at what I can do.
> Should I focus on fixing problems across the entire code base or limit my
> changes to a smaller set of subprojects (i.e. only the mapreduce ones)?
>
> --
>


I'd pick something self-contained -cross-project changes are much harder to
get reviewed and in.


Build failed in Jenkins: Hadoop-Common-trunk #647

2013-01-08 Thread Apache Jenkins Server
See 

Changes:

[suresh] HDFS-4362. GetDelegationTokenResponseProto does not handle null token. 
Contributed by Suresh Srinivas.

[atm] HDFS-3970. Fix bug causing rollback of HDFS upgrade to result in bad 
VERSION file. Contributed by Vinay and Andrew Wang.

[szetszwo] Add target, .classpath, .project and .settings to svn:ignore.

[suresh] HADOOP-9181. Set daemon flag for HttpServer's QueuedThreadPool. 
Contributed by Liang Xie.

[vinodkv] YARN-170. Change NodeManager stop to be reentrant. Contributed by 
Sandy Ryza.

[vinodkv] MAPREDUCE-4920. Use security token protobuf definition from hadoop 
common. Contributed by Suresh Srinivas.

[vinodkv] YARN-315. Using the common security token protobuf definition from 
hadoop common. Contributed by Suresh Srinivas.

--
[...truncated 31682 lines...]
 [exec] unpack-plugin:
 [exec] 
 [exec] install-plugin:
 [exec] 
 [exec] configure-plugin:
 [exec] 
 [exec] configure-output-plugin:
 [exec] Mounting output plugin: org.apache.forrest.plugin.output.pdf
 [exec] Processing 

 to 

 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginMountSnippet.xsl
 [exec] Moving 1 file to 

 [exec] 
 [exec] configure-plugin-locationmap:
 [exec] Mounting plugin locationmap for org.apache.forrest.plugin.output.pdf
 [exec] Processing 

 to 

 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginLmMountSnippet.xsl
 [exec] Moving 1 file to 

 [exec] 
 [exec] init:
 [exec] 
 [exec] -prepare-classpath:
 [exec] 
 [exec] check-contentdir:
 [exec] 
 [exec] examine-proj:
 [exec] 
 [exec] validation-props:
 [exec] Using these catalog descriptors: 
/home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:
 [exec] 
 [exec] validate-xdocs:
 [exec] 7 file(s) have been successfully validated.
 [exec] ...validated xdocs
 [exec] 
 [exec] validate-skinconf:
 [exec] Warning: 

 not found.
 [exec] 1 file(s) have been successfully validated.
 [exec] ...validated skinconf
 [exec] 
 [exec] validate-sitemap:
 [exec] 
 [exec] validate-skins-stylesheets:
 [exec] 
 [exec] validate-skins:
 [exec] 
 [exec] validate-skinchoice:
 [exec] ...validated existence of skin 'pelt'
 [exec] 
 [exec] validate-stylesheets:
 [exec] 
 [exec] validate:
 [exec] 
 [exec] site:
 [exec] 
 [exec] Copying the various non-generated resources to site.
 [exec] Warnings will be issued if the optional project resources are not 
found.
 [exec] This is often the case, because they are optional and so may not be 
available.
 [exec] Copying project resources and images to site ...
 [exec] Copied 1 empty directory to 1 empty directory under 

 [exec] Copying main skin images to site ...
 [exec] Created dir: 

 [exec] Copying 20 files to 

 [exec] Copying 14 files to 

 [exec] Warning: 

 not found.
 [exec] Warning: 


Build failed in Jenkins: Hadoop-Common-0.23-Build #488

2013-01-08 Thread Apache Jenkins Server
See 

--
[...truncated 18463 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.806 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.881 sec
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.11 sec
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.23 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.211 sec
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.424 sec
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.051 sec
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.273 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.491 sec
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.46 sec
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.465 sec
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.645 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.202 sec
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.883 sec
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.484 sec
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.621 sec
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.841 sec
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec
Running org.apache.hadoop.io.TestText
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.875 sec
Running org.apache.hadoop.io.TestMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.227 sec
Running org.apache.hadoop.io.compress.TestCodecFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.375 sec
Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.142 sec
Running org.apache.hadoop.io.compress.TestCodec
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 55.753 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.397 sec
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.567 sec
Running org.apache.hadoop.io.TestWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.131 sec
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0,