[jira] [Created] (HDFS-5106) TestDatanodeBlockScanner fails on Windows due to incorrect path format

2013-08-15 Thread Chuan Liu (JIRA)
Chuan Liu created HDFS-5106:
---

 Summary: TestDatanodeBlockScanner fails on Windows due to 
incorrect path format
 Key: HDFS-5106
 URL: https://issues.apache.org/jira/browse/HDFS-5106
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor


The test case fails on Windows because the wrong baseline path name is used for
the comparison.

{noformat}
Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 69.436 sec <<< 
FAILURE!
testReplicaInfoParsing(org.apache.hadoop.hdfs.TestDatanodeBlockScanner)  Time 
elapsed: 9 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[/data/current/]finalized> but 
was:<[E:\data\current\]finalized>
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testReplicaInfoParsingSingle(TestDatanodeBlockScanner.java:459)
at 
org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testReplicaInfoParsing(TestDatanodeBlockScanner.java:447)
{noformat}
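
For illustration only (this is not the committed patch): one way to make such an
assertion portable is to build the expected value with java.io.File from the same
base directory the code under test uses, so separators and drive letters agree on
Windows. The class, method, and variable names below are hypothetical.

{code}
// Hypothetical sketch -- not the actual fix. Build the expected directory from
// the base dir instead of hard-coding a Unix-style "/data/current/finalized".
import static org.junit.Assert.assertEquals;
import java.io.File;

public class ReplicaPathAssertionSketch {
  static void assertFinalizedDir(File baseDir, File actualDir) {
    File expected = new File(new File(baseDir, "current"), "finalized");
    assertEquals(expected.getPath(), actualDir.getPath());
  }
}
{code}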

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5105) TestFsck fails on Windows

2013-08-15 Thread Chuan Liu (JIRA)
Chuan Liu created HDFS-5105:
---

 Summary: TestFsck fails on Windows
 Key: HDFS-5105
 URL: https://issues.apache.org/jira/browse/HDFS-5105
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor


TestFsck fails on Windows for two reasons.

# The audit log file is not closed in the previous test case, which leads to a 
deletion failure on Windows in the next test case. Because the file is not 
deleted, it is written twice and the output does not match the expectation.
# When comparing the output strings, we used '\n' while the line ending is '\r\n' 
on Windows (see the sketch below).
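
A minimal sketch of a line-ending-agnostic comparison for the second point
(illustrative only; the helper below is hypothetical and not part of TestFsck):

{code}
// Hypothetical sketch: normalize CRLF to LF on both sides before comparing,
// so the same assertion passes on Windows and on Unix.
import static org.junit.Assert.assertEquals;

public class LineEndingSketch {
  static void assertOutputEquals(String expected, String actual) {
    assertEquals(expected.replace("\r\n", "\n"), actual.replace("\r\n", "\n"));
  }
}
{code}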

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [ACTION NEEDED]: protoc 2.5.0 in trunk/branch-2/branch-2.1-beta/branch-2.1.0-beta

2013-08-15 Thread Tsuyoshi OZAWA
Thanks for sharing! We also need to update the wiki or related documents, don't we?
http://wiki.apache.org/hadoop/HowToContribute

On Thu, Aug 15, 2013 at 8:03 AM, Alejandro Abdelnur  wrote:
> Following up on this.
>
> HADOOP-9845 & HADOOP-9872 have been committed
> to trunk/branch-2/branch-2.1-beta/branch-2.1.0-beta.
>
> All Hadoop developers must install protoc 2.5.0 on their development
> machines for the build to run.
>
> All Hadoop Jenkins boxes are using protoc 2.5.0.
>
> The BUILDING.txt file has been updated to reflect that protoc 2.5.0 is the
> required one and includes instructions on how to use a different protoc
> from multiple local versions (using an ENV var). This may be handy for
> folks working with Hadoop versions using protoc 2.4.1.
>
> INTERIM SOLUTION IF YOU CANNOT UPGRADE TO PROTOC 2.5.0 IMMEDIATELY
>
> Use the following option with all your Maven commands
>  '-Dprotobuf.version=2.4.1'.
>
> Note that this option will make the build use protoc and protobuf 2.4.1.
>
> Though you should upgrade to 2.5.0 at the earliest.
>
> As soon as we start using the new goodies from protobuf 2.5.0 (like the
> non-copy byte arrays), 2.4.1 will not work anymore.
>
> Thanks and apologies again for the noise throughout this change.
>
> --
> Alejandro



-- 
- Tsuyoshi


[VOTE] Release Apache Hadoop 2.0.6-alpha (RC1)

2013-08-15 Thread Konstantin Boudnik
All,

I have created a release candidate (rc1) for hadoop-2.0.6-alpha that I would
like to release.

This is a stabilization release that includes fixes for a couple of issues
as outlined on the security list.

The RC is available at: http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc1/
The RC tag in svn is here: 
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc1

The maven artifacts are available via repository.apache.org.

The only differences between rc0 and rc1 are the ASL header added to
releasenotes.html and the updated release dates in the CHANGES.txt files.

Please try the release bits and vote; the vote will run for the usual 7 days.

Thanks for voting,
  Cos





Re: [VOTE] Release Apache Hadoop 2.0.6-alpha

2013-08-15 Thread Konstantin Boudnik
Sure Alejandro, not a big deal - I agree. Uploaded RC1 and restarting the vote
right away. Thanks for catching this!

Cos

On Thu, Aug 15, 2013 at 09:08PM, Alejandro Abdelnur wrote:
> It should be straightforward to add the license headers to the release
> notes. Please make sure apache-rat:check passes on the RC before publishing
> it.
> 
> Arun, as you are about to cut the new RC for 2.1.0-beta, can you please
> make sure the license headers are used in the releasenotes HTML files?
> 
> Thx
> 
> 
> On Thu, Aug 15, 2013 at 8:02 PM, Konstantin Boudnik  wrote:
> 
> > Alejandro,
> >
> > Looking into the source code, it seems that the release notes never had the
> > license boilerplate in them, hence 2.0.6-alpha doesn't have it either.
> >
> > I have fixed CHANGES with the new optimistic date of the release and will
> > upload rc1 right now.
> >
> > Please let me know if you feel we need to start adding the license to the
> > releasenotes in this release.
> >
> > Thanks,
> >   Cos
> >
> > On Wed, Aug 14, 2013 at 10:40AM, Alejandro Abdelnur wrote:
> > > OK:
> > > * verified MD5
> > > * verified signature
> > > * expanded source tar and did a build
> > > * configured pseudo cluster and ran a couple of example MR jobs
> > > * did a few HTTP calls to HttpFS
> > >
> > > NOT OK:
> > > * CHANGES.txt files have 2.0.6 as UNRELEASED, they should have the date
> > > the RC vote ends
> > > * 'mvn apache-rat:check' fails, releasenotes HTML files don't have
> > > license headers
> > >
> > > I think we need to address the NOT OK points (especially the last one);
> > > they are trivial.
> > >
> > > Thanks.
> > >
> > >
> > >
> > > On Sat, Aug 10, 2013 at 5:46 PM, Konstantin Boudnik 
> > wrote:
> > >
> > > > All,
> > > >
> > > > I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I
> > > > would
> > > > like to release.
> > > >
> > > > This is a stabilization release that includes fixes for a couple of
> > > > issues
> > > > as outlined on the security list.
> > > >
> > > > The RC is available at:
> > > > http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/
> > > > The RC tag in svn is here:
> > > >
> > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0
> > > >
> > > > The maven artifacts are available via repository.apache.org.
> > > >
> > > > Please try the release bits and vote; the vote will run for the usual 7
> > > > days.
> > > >
> > > > Thanks for your voting
> > > >   Cos
> > > >
> > > >
> > >
> > >
> > > --
> > > Alejandro
> >
> 
> 
> 
> -- 
> Alejandro




Re: [VOTE] Release Apache Hadoop 2.0.6-alpha

2013-08-15 Thread Alejandro Abdelnur
It should be straightforward to add the license headers to the release
notes. Please make sure apache-rat:check passes on the RC before publishing
it.

Arun, as you are about to cut the new RC for 2.1.0-beta, can you please
make sure the license headers are used in the releasenotes HTML files?

Thx


On Thu, Aug 15, 2013 at 8:02 PM, Konstantin Boudnik  wrote:

> Alejandro,
>
> Looking into the source code, it seems that the release notes never had the
> license boilerplate in them, hence 2.0.6-alpha doesn't have it either.
>
> I have fixed CHANGES with the new optimistic date of the release and will
> upload rc1 right now.
>
> Please let me know if you feel we need to start adding the license to the
> releasenotes in this release.
>
> Thanks,
>   Cos
>
> On Wed, Aug 14, 2013 at 10:40AM, Alejandro Abdelnur wrote:
> > OK:
> > * verified MD5
> > * verified signature
> > * expanded source tar and did a build
> > * configured pseudo cluster and ran a couple of example MR jobs
> > * did a few HTTP calls to HttpFS
> >
> > NOT OK:
> > * CHANGES.txt files have 2.0.6 as UNRELEASED, they should have the date
> > the RC vote ends
> > * 'mvn apache-rat:check' fails, releasenotes HTML files don't have
> > license headers
> >
> > I think we need to address the NOT OK points (especially the last one);
> > they are trivial.
> >
> > Thanks.
> >
> >
> >
> > On Sat, Aug 10, 2013 at 5:46 PM, Konstantin Boudnik 
> wrote:
> >
> > > All,
> > >
> > > I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I
> > > would
> > > like to release.
> > >
> > > This is a stabilization release that includes fixes for a couple of
> > > issues
> > > as outlined on the security list.
> > >
> > > The RC is available at:
> > > http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/
> > > The RC tag in svn is here:
> > >
> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0
> > >
> > > The maven artifacts are available via repository.apache.org.
> > >
> > > Please try the release bits and vote; the vote will run for the usual 7
> > > days.
> > >
> > > Thanks for your voting
> > >   Cos
> > >
> > >
> >
> >
> > --
> > Alejandro
>



-- 
Alejandro


Re: [VOTE] Release Apache Hadoop 2.0.6-alpha

2013-08-15 Thread Konstantin Boudnik
Alejandro,

Looking into the source code, it seems that the release notes never had the
license boilerplate in them, hence 2.0.6-alpha doesn't have it either.

I have fixed CHANGES with the new optimistic date of the release and will
upload rc1 right now.

Please let me know if you feel we need to start adding the license to the
releasenotes in this release.

Thanks,
  Cos

On Wed, Aug 14, 2013 at 10:40AM, Alejandro Abdelnur wrote:
> OK:
> * verified MD5
> * verified signature
> * expanded source tar and did a build
> * configured pseudo cluster and ran a couple of example MR jobs
> * did a few HTTP calls to HttpFS
> 
> NOT OK:
> * CHANGES.txt files have 2.0.6 as UNRELEASED, they should have the date the
> RC vote ends
> * 'mvn apache-rat:check' fails, releasenotes HTML files don't have license
> headers,
> 
> I think we need to address the NOT OK points (especially the last one); they
> are trivial.
> 
> Thanks.
> 
> 
> 
> On Sat, Aug 10, 2013 at 5:46 PM, Konstantin Boudnik  wrote:
> 
> > All,
> >
> > I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I
> > would
> > like to release.
> >
> > This is a stabilization release that includes fixes for a couple of
> > issues
> > as outlined on the security list.
> >
> > The RC is available at:
> > http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/
> > The RC tag in svn is here:
> > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0
> >
> > The maven artifacts are available via repository.apache.org.
> >
> > Please try the release bits and vote; the vote will run for the usual 7
> > days.
> >
> > Thanks for your voting
> >   Cos
> >
> >
> 
> 
> -- 
> Alejandro




[jira] [Created] (HDFS-5104) Support dotdot name in NFS LOOKUP operation

2013-08-15 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5104:


 Summary: Support dotdot name in NFS LOOKUP operation
 Key: HDFS-5104
 URL: https://issues.apache.org/jira/browse/HDFS-5104
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li


Procedure LOOKUP searches a directory for a specific name and returns the file 
handle for the corresponding file system object. The NFS client sets the filename 
to ".." to get the parent directory information.

Currently ".." is considered an invalid name component. We only allow ".." when 
the path is an inodeID path such as "/.reserved/.inodes/..".
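
A rough sketch of the idea (all names below are hypothetical; this is not the
actual NFS gateway code): special-case ".." in LOOKUP and resolve it to the
parent of the directory the file handle refers to.

{code}
// Hypothetical sketch -- not the actual gateway code.
import org.apache.hadoop.fs.Path;

public class DotdotLookupSketch {
  /** Returns the path a LOOKUP for the given name should resolve to. */
  static Path resolveLookupTarget(Path dir, String fileName) {
    if ("..".equals(fileName)) {
      Path parent = dir.getParent();   // getParent() is null at the root
      return parent != null ? parent : dir;
    }
    return new Path(dir, fileName);
  }
}
{code}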

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [VOTE] Release Apache Hadoop 2.0.6-alpha

2013-08-15 Thread Konstantin Boudnik
Darn CHANGES - apparently they just hate me. I agree that needs to be
addressed. I will spin up rc1 tonight and restart the vote.

Thanks,
  Cos

On Wed, Aug 14, 2013 at 10:40AM, Alejandro Abdelnur wrote:
> OK:
> * verified MD5
> * verified signature
> * expanded source tar and did a build
> * configured pseudo cluster and ran a couple of example MR jobs
> * did a few HTTP calls to HttpFS
> 
> NOT OK:
> * CHANGES.txt files have 2.0.6 as UNRELEASED, they should have the date the
> RC vote ends
> * 'mvn apache-rat:check' fails, releasenotes HTML files don't have license
> headers,
> 
> I think we need to address the NOT OK points (especially the last one); they
> are trivial.
> 
> Thanks.
> 
> 
> 
> On Sat, Aug 10, 2013 at 5:46 PM, Konstantin Boudnik  wrote:
> 
> > All,
> >
> > I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I
> > would
> > like to release.
> >
> > This is a stabilization release that includes fixes for a couple of
> > issues
> > as outlined on the security list.
> >
> > The RC is available at:
> > http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/
> > The RC tag in svn is here:
> > http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0
> >
> > The maven artifacts are available via repository.apache.org.
> >
> > Please try the release bits and vote; the vote will run for the usual 7
> > days.
> >
> > Thanks for your voting
> >   Cos
> >
> >
> 
> 
> -- 
> Alejandro


[jira] [Created] (HDFS-5103) TestDirectoryScanner fails on Windows

2013-08-15 Thread Chuan Liu (JIRA)
Chuan Liu created HDFS-5103:
---

 Summary: TestDirectoryScanner fails on Windows
 Key: HDFS-5103
 URL: https://issues.apache.org/jira/browse/HDFS-5103
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor


TestDirectoryScanner fails on Windows due to mixed use of Windows-style and 
Unix-style paths.

{noformat}
org.junit.ComparisonFailure: 
expected:<[/base/current/BP-783049782-127.0.0.1-1370971773491/]finalized> but 
was:<[C:\base\current\BP-783049782-127.0.0.1-1370971773491\]finalized>
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testScanInfoObject(TestDirectoryScanner.java:419)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.TestScanInfo(TestDirectoryScanner.java:449)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5102) Snapshot names should not be allowed to contain slash characters

2013-08-15 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HDFS-5102:


 Summary: Snapshot names should not be allowed to contain slash 
characters
 Key: HDFS-5102
 URL: https://issues.apache.org/jira/browse/HDFS-5102
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 2.1.0-beta
Reporter: Aaron T. Myers


Snapshots of a snapshottable directory are allowed to have arbitrary names. 
Presently, if you create a snapshot with a snapshot name that begins with a "/" 
character, this will be allowed, but later attempts to access this snapshot 
will fail because of the way the {{Path}} class deals with consecutive "/" 
characters. I suggest we disallow "/" from appearing in snapshot names.

An example of this is in the first comment on this JIRA.
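
A minimal sketch of the suggested check (illustrative only, not a committed
change), rejecting names that contain the path separator at snapshot-creation
time:

{code}
// Hypothetical sketch: reject snapshot names containing "/", since such
// snapshots cannot be addressed later through the Path machinery.
import java.io.IOException;
import org.apache.hadoop.fs.Path;

public class SnapshotNameCheckSketch {
  static void validateSnapshotName(String name) throws IOException {
    if (name.contains(Path.SEPARATOR)) {
      throw new IOException("Snapshot name must not contain \"" +
          Path.SEPARATOR + "\": " + name);
    }
  }
}
{code}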

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Secure deletion of blocks

2013-08-15 Thread Todd Lipcon
Hi Matt,

I'd also recommend implementing this in a somewhat pluggable way -- e.g. a
configuration key for a Deleter class. The default Deleter can be the one we
use today, which just removes the file, and you could plug in a
SecureDeleter. I'd also see some use cases for a Deleter implementation
which doesn't actually delete the block, but instead moves it to a local
trash directory which is deleted a day or two later. This sort of policy
can help recover data as a last-ditch effort if there is some kind of
accidental deletion and there aren't snapshots in place.

-Todd
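
To make the suggestion concrete, here is a sketch of what such a pluggable
interface might look like -- all names are hypothetical, nothing like this
exists in the codebase today:

import java.io.File;
import java.io.IOException;

/** Hypothetical block-deletion policy, chosen via a configuration key. */
public interface Deleter {
  void delete(File blockFile) throws IOException;
}

/** Default: just remove the file, as the datanode does today. */
class DefaultDeleter implements Deleter {
  public void delete(File blockFile) throws IOException {
    if (!blockFile.delete()) {
      throw new IOException("Failed to delete " + blockFile);
    }
  }
}

// A SecureDeleter would overwrite the file before removing it; a TrashDeleter
// would move it to a local trash directory and remove it later.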

On Thu, Aug 15, 2013 at 11:50 AM, Andrew Wang wrote:

> Hi Matt,
>
> Here are some code pointers:
>
> - When doing a file deletion, the NameNode turns the file into a set of
> blocks that need to be deleted.
> - When datanodes heartbeat in to the NN (see BPServiceActor#offerService),
> the NN replies with blocks to be invalidated (see BlockCommand and
> DatanodeProtocol.DNA_INVALIDATE).
> - The DN processes these invalidates in
> BPServiceActor#processCommandFromActive (look for DNA_INVALIDATE again).
> - The magic lines you're looking for are probably in
> FsDatasetAsyncDiskService#run, since we delete blocks in the background
>
> Best,
> Andrew
>
>
> On Thu, Aug 15, 2013 at 5:31 AM, Matt Fellows <
> matt.fell...@bespokesoftware.com> wrote:
>
> > Hi,
> > I'm looking into writing a patch for HDFS which will provide a new method
> > within HDFS which can securely delete the contents of a block on all the
> > nodes upon which it exists. By securely delete I mean, overwrite with
> > 1's/0's/random data cyclically such that the data could not be recovered
> > forensically.
> >
> > I'm not currently aware of any existing code / methods which provide
> this,
> > so was going to implement this myself.
> >
> > I figured the DataNode.java was probably the place to start looking into
> > how this could be done, so I've read the source for this, but it's not
> > really enlightened me a massive amount.
> >
> > I'm assuming I need to tell the NameServer that all DataNodes with a
> > particular block id would be required to be deleted, then as each
> DataNode
> > calls home, the DataNode would be instructed to securely delete the
> > relevant block, and it would oblige.
> >
> > Unfortunately I have no idea where to begin and was looking for some
> > pointers?
> >
> > I guess specifically I'd like to know:
> >
> > 1. Where the hdfs CLI commands are implemented
> > 2. How a DataNode identifies a block / how a NameServer could inform a
> > DataNode to delete a block
> > 3. Where the existing "delete" is implemented so I can make sure my
> secure
> > delete makes use of it after successfully blanking the block contents
> > 4. If I've got the right idea about this at all?
> >
> > Kind regards,
> > Matt Fellows
> >
> > --
> >  First Option Software Ltd
> > Signal House
> > Jacklyns Lane
> > Alresford
> > SO24 9JJ
> > Tel: +44 (0)1962 738232
> > Mob: +44 (0)7710 160458
> > Fax: +44 (0)1962 600112
> > Web: www.bespokesoftware.com
> >
> > ____
> >
> > This is confidential, non-binding and not company endorsed - see full
> > terms at www.fosolutions.co.uk/emailpolicy.html
> >
> > First Option Software Ltd Registered No. 06340261
> > Signal House, Jacklyns Lane, Alresford, Hampshire, SO24 9JJ, U.K.
> > ____
> >
> >
>



-- 
Todd Lipcon
Software Engineer, Cloudera


[VOTE] Release Apache Hadoop 2.1.0-beta

2013-08-15 Thread Arun C Murthy
Folks,

I've created a release candidate (rc2) for hadoop-2.1.0-beta that I would like 
to get released - this fixes the bugs we saw since the last go-around (rc1).

The RC is available at: 
http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc2/
The RC tag in svn is here: 
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc2

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/





[jira] [Created] (HDFS-5101) ZCR should work with blocks bigger than 2 GB

2013-08-15 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5101:
--

 Summary: ZCR should work with blocks bigger than 2 GB
 Key: HDFS-5101
 URL: https://issues.apache.org/jira/browse/HDFS-5101
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Colin Patrick McCabe
Priority: Minor


Zero-copy reads should work with blocks bigger than 2 GB.  This will be a little 
tricky, because a single ByteBuffer can only span 2 GB in Java (due to the use 
of a signed 4-byte int for sizes, etc.).
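
For context, a sketch of the usual way around that limit: cover the block with
several mapped buffers, each at most Integer.MAX_VALUE bytes (illustrative only,
not the proposed design):

{code}
// Hypothetical sketch: map a >2 GB block file as a series of buffers, since a
// single MappedByteBuffer cannot exceed Integer.MAX_VALUE bytes.
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

public class ChunkedMmapSketch {
  static List<MappedByteBuffer> mapWholeFile(File blockFile) throws IOException {
    List<MappedByteBuffer> buffers = new ArrayList<MappedByteBuffer>();
    RandomAccessFile raf = new RandomAccessFile(blockFile, "r");
    try {
      FileChannel ch = raf.getChannel();
      long pos = 0, len = ch.size();
      while (pos < len) {
        long chunk = Math.min(len - pos, Integer.MAX_VALUE);
        buffers.add(ch.map(FileChannel.MapMode.READ_ONLY, pos, chunk));
        pos += chunk;
      }
    } finally {
      raf.close();
    }
    return buffers;
  }
}
{code}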

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-2984) S-live: Rate operation count for delete is worse than 0.20.204 by 28.8%

2013-08-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HDFS-2984.


Resolution: Cannot Reproduce

> S-live: Rate operation count for delete is worse than 0.20.204 by 28.8%
> ---
>
> Key: HDFS-2984
> URL: https://issues.apache.org/jira/browse/HDFS-2984
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks
>Affects Versions: 0.23.1
>Reporter: Vinay Kumar Thota
>Assignee: Ravi Prakash
>Priority: Critical
> Attachments: slive.tar.gz
>
>
> Rate operation count for delete is worse than 0.20.204.xx by 28.8%

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Secure deletion of blocks

2013-08-15 Thread Andrew Wang
Hi Matt,

Here are some code pointers:

- When doing a file deletion, the NameNode turns the file into a set of
blocks that need to be deleted.
- When datanodes heartbeat in to the NN (see BPServiceActor#offerService),
the NN replies with blocks to be invalidated (see BlockCommand and
DatanodeProtocol.DNA_INVALIDATE).
- The DN processes these invalidates in
BPServiceActor#processCommandFromActive (look for DNA_INVALIDATE again).
- The magic lines you're looking for are probably in
FsDatasetAsyncDiskService#run, since we delete blocks in the background

Best,
Andrew
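
For what it's worth, a bare-bones sketch of the overwrite step Matt describes,
purely for illustration -- a real change would hook into the invalidation path
above and also handle the block's meta file and multiple overwrite passes:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

/** Illustrative only: overwrite a block file with zeros, then delete it. */
public class SecureDeleteSketch {
  static void overwriteAndDelete(File blockFile) throws IOException {
    RandomAccessFile raf = new RandomAccessFile(blockFile, "rw");
    try {
      byte[] zeros = new byte[64 * 1024];
      long remaining = raf.length();
      while (remaining > 0) {
        int n = (int) Math.min(zeros.length, remaining);
        raf.write(zeros, 0, n);
        remaining -= n;
      }
      raf.getFD().sync();                 // push the overwrite to disk
    } finally {
      raf.close();
    }
    if (!blockFile.delete()) {
      throw new IOException("Failed to delete " + blockFile);
    }
  }
}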


On Thu, Aug 15, 2013 at 5:31 AM, Matt Fellows <
matt.fell...@bespokesoftware.com> wrote:

> Hi,
> I'm looking into writing a patch for HDFS which will provide a new method
> within HDFS which can securely delete the contents of a block on all the
> nodes upon which it exists. By securely delete I mean, overwrite with
> 1's/0's/random data cyclically such that the data could not be recovered
> forensically.
>
> I'm not currently aware of any existing code / methods which provide this,
> so was going to implement this myself.
>
> I figured the DataNode.java was probably the place to start looking into
> how this could be done, so I've read the source for this, but it's not
> really enlightened me a massive amount.
>
> I'm assuming I need to tell the NameServer that all DataNodes with a
> particular block id would be required to be deleted, then as each DataNode
> calls home, the DataNode would be instructed to securely delete the
> relevant block, and it would oblige.
>
> Unfortunately I have no idea where to begin and was looking for some
> pointers?
>
> I guess specifically I'd like to know:
>
> 1. Where the hdfs CLI commands are implemented
> 2. How a DataNode identifies a block / how a NameServer could inform a
> DataNode to delete a block
> 3. Where the existing "delete" is implemented so I can make sure my secure
> delete makes use of it after successfully blanking the block contents
> 4. If I've got the right idea about this at all?
>
> Kind regards,
> Matt Fellows
>
> --
>  First Option Software Ltd
> Signal House
> Jacklyns Lane
> Alresford
> SO24 9JJ
> Tel: +44 (0)1962 738232
> Mob: +44 (0)7710 160458
> Fax: +44 (0)1962 600112
> Web: www.bespokesoftware.com
>
> ____
>
> This is confidential, non-binding and not company endorsed - see full
> terms at www.fosolutions.co.uk/emailpolicy.html
>
> First Option Software Ltd Registered No. 06340261
> Signal House, Jacklyns Lane, Alresford, Hampshire, SO24 9JJ, U.K.
> ____
>
>


mapred replication

2013-08-15 Thread Sirianni, Eric
In debugging some replication issues in our HDFS environment, I noticed that 
the MapReduce framework uses the following algorithm for setting the 
replication on submitted job files:

1. Create the file with *default* DFS replication factor (i.e. 
'dfs.replication')

2. Subsequently alter the replication of the file based on the 
'mapred.submit.replication' config value

  private static FSDataOutputStream createFile(FileSystem fs, Path splitFile,
  Configuration job)  throws IOException {
FSDataOutputStream out = FileSystem.create(fs, splitFile,
new FsPermission(JobSubmissionFiles.JOB_FILE_PERMISSION));
int replication = job.getInt("mapred.submit.replication", 10);
fs.setReplication(splitFile, (short)replication);
writeSplitHeader(out);
return out;
  }

If I understand correctly, the net functional effect of this approach is that:

-   The initial write pipeline is set up with 'dfs.replication' nodes (i.e. 3)

-   The namenode triggers additional inter-datanode replications in the 
background (as it detects the blocks as "under-replicated").

I'm assuming this is intentional?  Alternatively, if 
mapred.submit.replication were specified on the initial create, the write 
pipeline would be significantly larger.
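
For comparison, a sketch of what creating the file with the submit replication up
front would look like, using the standard FileSystem.create overload (illustrative
only, not a proposed change; the buffer size shown is just the usual default):

  // Illustrative: with the target replication passed at create time, the
  // initial write pipeline would be mapred.submit.replication nodes long,
  // instead of dfs.replication nodes plus background re-replication.
  short replication = (short) job.getInt("mapred.submit.replication", 10);
  FSDataOutputStream out = fs.create(splitFile,
      new FsPermission(JobSubmissionFiles.JOB_FILE_PERMISSION),
      true,                                            // overwrite
      fs.getConf().getInt("io.file.buffer.size", 4096),
      replication,
      fs.getDefaultBlockSize(),
      null);                                           // no Progressable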

The reason I noticed is that we had inadvertently specified 
mapred.submit.replication as *less than* dfs.replication in our configuration, 
which caused a bunch of excess replica pruning (and ultimately IOExceptions in 
our datanode logs).

Thanks,
Eric



Hadoop-Hdfs-trunk - Build # 1492 - Failure

2013-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1492/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11769 lines...]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[1:40:50.176s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [1.798s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:40:52.810s
[INFO] Finished at: Thu Aug 15 13:15:16 UTC 2013
[INFO] Final Memory: 36M/308M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4898
Updating HDFS-5068
Updating HADOOP-9652
Updating HDFS-5051
Updating YARN-1045
Updating YARN-337
Updating HDFS-5079
Updating YARN-1056
Updating HDFS-4816
Updating HADOOP-9872
Updating HADOOP-9381
Updating HDFS-4632
Updating HADOOP-9875
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1492

2013-08-15 Thread Apache Jenkins Server
See 

Changes:

[sseth] YARN-1045. Improve toString implementation for PBImpls. Contributed by 
Jian He.

[cnauroth] HDFS-4632. globStatus using backslash for escaping does not work on 
Windows. Contributed by Chuan Liu.

[szetszwo] HDFS-4898. BlockPlacementPolicyWithNodeGroup.chooseRemoteRack() 
fails to properly fallback to local rack.

[cmccabe] HADOOP-9875.  TestDoAsEffectiveUser can fail on JDK 7.  (Aaron T. 
Myers via Colin Patrick McCabe)

[acmurthy] YARN-1056. Remove dual use of string 'resourcemanager' in 
yarn.resourcemanager.connect.{max.wait.secs|retry_interval.secs}. Contributed 
by Karthik Kambatla.

[shv] HDFS-5079. Cleaning up NNHAStatusHeartbeat.State from 
DatanodeProtocolProtos. Contributed by Tao Luo.

[shv] HDFS-5068. Convert NNThroughputBenchmark to a Tool to allow generic 
options. Contributed by Konstantin Shvachko.

[suresh] HDFS-5051. nn fails to download checkpointed image from snn in some 
setups. Contributed by Vinay and Suresh Srinivas.

[wang] HDFS-4816. transitionToActive blocks if the SBN is doing checkpoint 
image transfer. (Andrew Wang)

[suresh] HADOOP-9381. Document dfs cp -f option. Contributed by Keegan Witt and 
Suresh Srinivas.

[cmccabe] HADOOP-9652.  RawLocalFs#getFileLinkStatus does not fill in the link 
owner and mode.  (Andrew Wang via Colin Patrick McCabe)

[tucu] HADOOP-9872. Improve protoc version handling and detection. (tucu)

[llu] HADOOP 9871. Fix intermittent findbugs warnings in DefaultMetricsSystem. 
(Junping Du via llu)

[jlowe] YARN-337. RM handles killed application tracking URL poorly. 
Contributed by Jason Lowe

--
[...truncated 11576 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.304 sec
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 374.086 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.194 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.825 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.249 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.291 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.52 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.128 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.934 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.426 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.195 sec
Running org.apache.hadoop.hdfs.TestFileInputStreamCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.205 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.042 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.398 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.894 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.087 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.449 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.923 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.586 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.019 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.569 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.446 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.247 sec
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.319 sec
Running org.apache.hadoop.hdfs

Secure deletion of blocks

2013-08-15 Thread Matt Fellows
Hi,
I'm looking into writing a patch for HDFS which will provide a new method
within HDFS which can securely delete the contents of a block on all the
nodes upon which it exists. By securely delete I mean, overwrite with
1's/0's/random data cyclically such that the data could not be recovered
forensically.

I'm not currently aware of any existing code / methods which provide this,
so was going to implement this myself.

I figured the DataNode.java was probably the place to start looking into
how this could be done, so I've read the source for this, but it hasn't
really enlightened me a massive amount.

I'm assuming I need to tell the NameServer that all DataNodes with a
particular block id would be required to be deleted, then as each DataNode
calls home, the DataNode would be instructed to securely delete the
relevant block, and it would oblige.

Unfortunately I have no idea where to begin and was looking for some
pointers?

I guess specifically I'd like to know:

1. Where the hdfs CLI commands are implemented
2. How a DataNode identifies a block / how a NameServer could inform a
DataNode to delete a block
3. Where the existing "delete" is implemented so I can make sure my secure
delete makes use of it after successfully blanking the block contents
4. If I've got the right idea about this at all?

Kind regards,
Matt Fellows

-- 
 First Option Software Ltd
Signal House
Jacklyns Lane
Alresford
SO24 9JJ
Tel: +44 (0)1962 738232
Mob: +44 (0)7710 160458
Fax: +44 (0)1962 600112
Web: www.bespokesoftware.com

-- 


This is confidential, non-binding and not company endorsed - see full terms 
at www.fosolutions.co.uk/emailpolicy.html 
First Option Software Ltd Registered No. 06340261
Signal House, Jacklyns Lane, Alresford, Hampshire, SO24 9JJ, U.K.




Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #700

2013-08-15 Thread Apache Jenkins Server
See 

Changes:

[jlowe] svn merge -c 1513888 FIXES: YARN-337. RM handles killed application 
tracking URL poorly. Contributed by Jason Lowe

--
[...truncated 7673 lines...]
[ERROR] location: package com.google.protobuf
[ERROR] 
:[270,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[281,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10533,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[10544,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8357,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[8368,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12641,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[12652,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9741,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[9752,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1781,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[1792,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5338,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[5349,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[6290,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 


Hadoop-Hdfs-0.23-Build - Build # 700 - Still Failing

2013-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/700/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7866 lines...]
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3313,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3319,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenk