Which Hadoop version is preferable to make changes in HDFS??

2013-04-10 Thread Mohammad Mustaqeem
I want to make changes in replica placement.
Which Hadoop version is preferable for making changes in HDFS?
Please help...

-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270


Which java file is responsible for replica placement??

2013-04-10 Thread Mohammad Mustaqeem
Which class is responsible for replica placement?
Which class chooses the random rack for placement of the 3rd replica,
and which class chooses the random datanode from the same rack for
placement of the 2nd replica?

-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
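
For reference: in 2.x-era trunk, the default placement logic lives in
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault, whose
chooseTarget() picks the writer's node first, then a node on a remote rack, then
a second node on that same remote rack (the package and method signatures differ
across versions). A custom policy can be wired in via configuration; a minimal
sketch, assuming the 2.x-era key name and a hypothetical policy class:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class PlacementPolicyWiring {
  public static void main(String[] args) {
    // "dfs.block.replicator.classname" is the key read by
    // BlockPlacementPolicy.getInstance(); the policy class named here is
    // hypothetical and would extend BlockPlacementPolicyDefault.
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.block.replicator.classname",
             "com.example.CustomPlacementPolicy");
    System.out.println(conf.get("dfs.block.replicator.classname"));
  }
}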


Re: testHDFSConf.xml

2013-04-10 Thread Konstantin Boudnik
I have split the CLI test infrastructure into hierarchical pieces that allow
different configurations for different components. E.g. you can have one for
YARN that exists independently of HDFS, etc. The change has been in since
around 0.22 and was committed to 0.203.x as well IIRC, hence it should be
usable across Hadoop versions. Maybe it's something you would benefit from.

Cos

On Wed, Apr 10, 2013 at 10:43AM, Colin McCabe wrote:
> On Wed, Apr 10, 2013 at 10:16 AM, Jay Vyas  wrote:
> 
> > Hello HDFS brethren !
> >
> > I've noticed that the testHDFSConf.xml has a lot of references to
> > supergroup.
> >
> >
> > https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
> >
> > 1) I wonder why this is hardcoded in the testHDFSConf.xml
> >
> >
> "supergroup" is the default supergroup in HDFS.  Check DFSConfigKeys.java:
> 
>   public static final String  DFS_PERMISSIONS_SUPERUSERGROUP_KEY =
> "dfs.permissions.superusergroup";
>   public static final String  DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT =
> "supergroup";
> 
> It seems fine to use "supergroup" in a test.  After all, we do control the
> configuration we pass into the test.
> 
> 
> > 2) Also, I'm wondering if there are any good ideas for extending/modifying
> > this file for an extension of the FileSystem implementation.
> >
> >
> It would be interesting to think about pulling the non-hdfs-specific
> components of TestHDFSCLI into another test; perhaps one in common.
>  Theoretically, what we print on the console should be really similar, no
> matter whether HDFS or some other filesystem is being used.  In practice,
> there may be some differences, however...
> 
> I find it a little bit challenging to modify TestHDFSCLI because the test
> is really long and executes as a single unit.  Breaking it down into
> multiple units would probably be another good improvement, at least in my
> opinion.
> 
> best,
> Colin
> 
> 
> Right now I'm doing some global find-and-replace edits - but was thinking
> > that maybe parameterizing the file would be a good JIRA - so that people
> > could use this as a base test for FileSystem implementations
> >
> > Depending on feedback I'm certainly willing to submit and put in a first
> > pass at a more modular version of this file.
> >
> > It's in many ways a very generalizable component of the hdfs trunk.
> >
> > Thanks!
> > --
> > Jay Vyas
> > http://jayunit100.blogspot.com
> >




Re: VOTE: HDFS-347 merge

2013-04-10 Thread Aaron T. Myers
I'm +1 as well. I've reviewed much of the code and have personally seen it
running in production at several different sites. I agree with Todd that it's a
substantial improvement in operability.

Best,
Aaron

On Apr 8, 2013, at 1:19 PM, Todd Lipcon  wrote:

> +1 for the branch merge. I've reviewed all of the code in the branch, and
> we have people now running this code in production scenarios. It is as
> functional as the old version and way easier to set up/configure.
> 
> -Todd
> 
> On Mon, Apr 1, 2013 at 4:32 PM, Colin McCabe  wrote:
> 
>> Hi all,
>> 
>> I think it's time to merge the HDFS-347 branch back to trunk.  It's been
>> under review and testing for several months, and provides both a performance
>> advantage and the ability to use short-circuit local reads without
>> compromising system security.
>> 
>> Previously, we tried to merge this and the objection was brought up that we
>> should keep the old, insecure short-circuit local reads around so that
>> platforms for which secure SCR had not yet been implemented could use it
>> (e.g. Windows).  This has been addressed-- see HDFS-4538 for details.
>> Suresh has also volunteered to maintain the insecure SCR code until secure
>> SCR can be implemented for Windows.
>> 
>> Please cast your vote by EOD Monday 4/8.
>> 
>> best,
>> Colin
>> 
> 
> 
> 
> -- 
> Todd Lipcon
> Software Engineer, Cloudera
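
For anyone who wants to try the feature being voted on, a hedged sketch of the
client-side settings (the key names come from the HDFS-347 work; the socket
path is only an example, and defaults may differ per release):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ShortCircuitReadSketch {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // Enable short-circuit local reads over a Unix domain socket shared
    // with the DataNode.
    conf.setBoolean("dfs.client.read.shortcircuit", true);
    conf.set("dfs.domain.socket.path", "/var/run/hadoop-hdfs/dn_socket");
  }
}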


Hurray for NN sanity! Even Pre 2.x!

2013-04-10 Thread Chris Embree
I work for a largish healthcare co.  We finally started using (exploiting?)
Hadoop this year.

The day before our big, C-suite-sponsored Hadoop launch celebration, we
realized that we could no longer ssh to our NN.  Fail of all fails!  Sorta.

Nothing was really wrong!  The NN was up, running, and servicing all requests.
Nagios and Ganglia were both satisfied with the replies from the NN.  And yet,
I still had no access.  Boo!  This was purely a Linux issue.

Being one to avoid "Game Day" failures, I decided to nuke the badly
behaving NN.  PLEASE NOTE! I had the right opportunity.  We had few
activities in Prod.  I knew what was up with NN and I had a high level of
confidence that things were working NORMALLY on NN regarding HDFS.

So here is the slightly intoxicated order of events (I celebrated after
success):
1. The NN always wrote to 2x local NN dirs plus 1 NFS dir.
2. I made sure all DNs were stopped.
3. Our still-running NN was nuked -- AKA power switch, no mercy, power-button
   shutdown!!!
4. Then I cp'd all the NFS NN metadata to my replacement NN (not the 2NN or
   SNN), local FS x2.
5. I killed the in_use.lock file as required.
6. I started NN services on the new NN.
7. I restarted DN services on all DNs.

Somewhere along the line I altered the IP addr to ensure that my new PNN
was the one everyone was looking for.  I specifically did NOT alter the name or
IP within any HDFS configs.

This all happened in < 30 mins.  It was easy.  It was testable.

Full credit to the devs who built the PNN and HDFS... even at 1.2.x levels. ;)

We all want federated NN services, but until then, at least there is
recovery if you have NFS... and abnormal sanity. ;)


[jira] [Resolved] (HDFS-4654) FileNotFoundException: ID mismatch

2013-04-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-4654.
--

Resolution: Not A Problem
  Assignee: Brandon Li

Given that HDFS-4339 has been committed, this JIRA is not a problem any more.

> FileNotFoundException: ID mismatch
> --
>
> Key: HDFS-4654
> URL: https://issues.apache.org/jira/browse/HDFS-4654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Fengdong Yu
>Assignee: Brandon Li
> Fix For: 3.0.0
>
>
> My cluster was built from source code trunk r1463074.
> I got the following exception when I put a file into HDFS.
> 13/04/01 09:33:45 WARN retry.RetryInvocationHandler: Exception while invoking 
> addBlock of class ClientNamenodeProtocolTranslatorPB. Trying to fail over 
> immediately.
> 13/04/01 09:33:45 WARN hdfs.DFSClient: DataStreamer Exception
> java.io.FileNotFoundException: ID mismatch. Request id and saved id: 1073 , 
> 1050
> at org.apache.hadoop.hdfs.server.namenode.INodeId.checkId(INodeId.java:51)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2501)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2298)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2212)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:498)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40979)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:526)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1018)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1818)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1814)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1812)
> To reproduce:
> hdfs dfs -put test.data  /user/data/test.data
> After this command starts to run, kill the active NameNode process.
> I have only three nodes (A, B, C) for testing:
> A and B are NameNodes.
> B and C are DataNodes.
> ZK is deployed on A, B and C.
> A, B and C are all JournalNodes.
> Thanks.



Re: [VOTE] Release Apache Hadoop 2.0.4-alpha

2013-04-10 Thread Azuryy Yu
Sorry, don't include HDFS-4339, because HDFS-4334 is fixed for 3.0.0.


On Thu, Apr 11, 2013 at 9:20 AM, Azuryy Yu  wrote:

> Hi,
>
> HDFS-4339 should be included; otherwise, HDFS cannot fail over
> automatically.
>
>
> On Thu, Apr 11, 2013 at 12:11 AM, Sangjin Lee  wrote:
>
>> Hi Arun,
>>
>> Would it be possible to include HADOOP-9407 in the release? It's been
>> resolved for a while, and it'd be good to have this dependency problem
>> squared away... Thanks for the consideration!
>>
>> https://issues.apache.org/jira/browse/HADOOP-9407
>>
>> Regards,
>> Sangjin
>>
>>
>> On Wed, Apr 10, 2013 at 2:19 AM, Arun Murthy  wrote:
>>
>> > Ok, I'll spin rc1 after. Thanks.
>> >
>> > Sent from my iPhone
>> >
>> > On Apr 10, 2013, at 11:44 AM, Siddharth Seth 
>> > wrote:
>> >
>> > > Arun, MAPREDUCE-5094 would be a useful jira to include in the
>> > > 2.0.4-alpha release. It's not an absolute blocker since the values can be
>> > > controlled explicitly by changing tests which use the cluster.
>> > >
>> > > Thanks
>> > > - Sid
>> > >
>> > >
>> > > On Tue, Apr 9, 2013 at 8:39 PM, Arun C Murthy 
>> > wrote:
>> > >
>> > >> Folks,
>> > >>
>> > >> I've created a release candidate (rc0) for hadoop-2.0.4-alpha that I
>> > >> would like to release.
>> > >>
>> > >> This is a bug-fix release which solves a number of issues discovered
>> > >> during integration testing of the full-stack.
>> > >>
>> > >> The RC is available at:
>> > >> http://people.apache.org/~acmurthy/hadoop-2.0.4-alpha-rc0/
>> > >> The RC tag in svn is here:
>> > >> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.4-alpha-rc0
>> > >>
>> > >> The maven artifacts are available via repository.apache.org.
>> > >>
>> > >> Please try the release and vote; the vote will run for the usual 7 days.
>> > >>
>> > >> thanks,
>> > >> Arun
>> > >>
>> > >> P.S. Many thanks are in order - Roman/Cos and the rest of the BigTop
>> > >> community for helping to find a number of integration issues, Ted Yu for
>> > >> co-ordinating on HBase, Alejandro for co-ordinating on Oozie,
>> > >> Vinod/Sid/Alejandro/Xuan/Daryn, and the rest of the devs for quickly
>> > >> jumping in and fixing these.
>> > >>
>> > >>
>> > >> --
>> > >> Arun C. Murthy
>> > >> Hortonworks Inc.
>> > >> http://hortonworks.com/
>> > >>
>> > >>
>> > >>
>> >
>>
>
>


Re: [VOTE] Release Apache Hadoop 2.0.4-alpha

2013-04-10 Thread Azuryy Yu
Hi,

HDFS-4339 should be included; otherwise, HDFS cannot fail over automatically.


On Thu, Apr 11, 2013 at 12:11 AM, Sangjin Lee  wrote:

> Hi Arun,
>
> Would it be possible to include HADOOP-9407 in the release? It's been
> resolved for a while, and it'd be good to have this dependency problem
> squared away... Thanks for the consideration!
>
> https://issues.apache.org/jira/browse/HADOOP-9407
>
> Regards,
> Sangjin
>
>
> On Wed, Apr 10, 2013 at 2:19 AM, Arun Murthy  wrote:
>
> > Ok, I'll spin rc1 after. Thanks.
> >
> > Sent from my iPhone
> >
> > On Apr 10, 2013, at 11:44 AM, Siddharth Seth 
> > wrote:
> >
> > > Arun, MAPREDUCE-5094 would be a useful jira to include in the 2.0.4-alpha
> > > release. It's not an absolute blocker since the values can be controlled
> > > explicitly by changing tests which use the cluster.
> > >
> > > Thanks
> > > - Sid
> > >
> > >
> > > On Tue, Apr 9, 2013 at 8:39 PM, Arun C Murthy 
> > wrote:
> > >
> > >> Folks,
> > >>
> > >> I've created a release candidate (rc0) for hadoop-2.0.4-alpha that I
> > >> would like to release.
> > >>
> > >> This is a bug-fix release which solves a number of issues discovered
> > >> during integration testing of the full-stack.
> > >>
> > >> The RC is available at:
> > >> http://people.apache.org/~acmurthy/hadoop-2.0.4-alpha-rc0/
> > >> The RC tag in svn is here:
> > >> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.4-alpha-rc0
> > >>
> > >> The maven artifacts are available via repository.apache.org.
> > >>
> > >> Please try the release and vote; the vote will run for the usual 7 days.
> > >>
> > >> thanks,
> > >> Arun
> > >>
> > >> P.S. Many thanks are in order - Roman/Cos and the rest of the BigTop
> > >> community for helping to find a number of integration issues, Ted Yu for
> > >> co-ordinating on HBase, Alejandro for co-ordinating on Oozie,
> > >> Vinod/Sid/Alejandro/Xuan/Daryn, and the rest of the devs for quickly
> > >> jumping in and fixing these.
> > >>
> > >>
> > >> --
> > >> Arun C. Murthy
> > >> Hortonworks Inc.
> > >> http://hortonworks.com/
> > >>
> > >>
> > >>
> >
>


[jira] [Resolved] (HDFS-4684) Snapshot: Use INode id for image serialization

2013-04-10 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4684.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)
 Hadoop Flags: Reviewed

Thanks Jing for reviewing the patch.

I have committed this.

> Snapshot: Use INode id for image serialization
> --
>
> Key: HDFS-4684
> URL: https://issues.apache.org/jira/browse/HDFS-4684
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: h4684_20130410b.patch, h4684_20130410.patch
>
>
> Since HDFS-4339 was committed, let's use the INode id in ReferenceMap instead
> of generating reference IDs.  Generating reference IDs does not work well when
> writing the image with multiple threads.



[jira] [Created] (HDFS-4684) Snapshot: Use INode id for image serialization

2013-04-10 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-4684:


 Summary: Snapshot: Use INode id for image serialization
 Key: HDFS-4684
 URL: https://issues.apache.org/jira/browse/HDFS-4684
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


Since HDFS-4339 was committed, let's use the INode id in ReferenceMap instead of
generating reference IDs.  Generating reference IDs does not work well when
writing the image with multiple threads.



[jira] [Resolved] (HDFS-4673) Renaming file in subdirectory of a snapshotted directory does not work.

2013-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-4673.
-

Resolution: Duplicate

I am resolving as a duplicate of HDFS-4675. I just verified it is fixed by the 
patch for that issue.

> Renaming file in subdirectory of a snapshotted directory does not work.
> ---
>
> Key: HDFS-4673
> URL: https://issues.apache.org/jira/browse/HDFS-4673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: Snapshot (HDFS-2802)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4673.2.patch, HDFS-4673.patch
>
>
> Steps to repro:
> # mkdir /1
> # Allow snapshot on /1
> # mkdir /1/2
> # Put file /1/2/f1
> # Take snapshot snap1 of /1
> # Rename /1/2/f1 to /1/2/f2
> Fails with exception in INodeDirectory.replaceSelf
> {code}
>   Preconditions.checkArgument(parent != null, "parent is null, this=%s", 
> this);
> {code}



[jira] [Resolved] (HDFS-4683) Per directory trash settings / trash override

2013-04-10 Thread John Vines (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Vines resolved HDFS-4683.
--

Resolution: Invalid

I could have sworn that was the first thing I tested. Thank you for the quick 
reply!

> Per directory trash settings / trash override
> -
>
> Key: HDFS-4683
> URL: https://issues.apache.org/jira/browse/HDFS-4683
> Project: Hadoop HDFS
>  Issue Type: Improvement
> Environment: Hadoop2
>Reporter: John Vines
>
> With the migration of trash settings to the server side, it becomes more
> complicated for applications built on top of HDFS to properly deal with their
> trash. Applications like HBase and Accumulo already have a fair amount of
> trash management; adding the HDFS trash will simply put more stress on DFS.
> But fully disabling the trash is overkill, as there may still be uses for it
> elsewhere in Hadoop.
> I would like to request either:
> A. Per-directory or per-user trash settings, so that applications which work
> in a specific directory or as a specific user can continue to ignore the trash.
> B. An updated DistributedFileSystem delete() call that lets callers force-skip
> the trash. I'm not sure how feasible this is given the FileSystem API, but it
> may be possible.
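
For context on the resolution: the trash is applied by FsShell (via the Trash
class), not by the filesystem itself, so applications calling delete() directly
already bypass it. A minimal sketch, using a hypothetical path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteBypassesTrash {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // FileSystem.delete() does not consult fs.trash.interval; only the
    // shell's -rm (without -skipTrash) moves paths into .Trash.
    fs.delete(new Path("/apps/accumulo/tmp"), true /* recursive */);
  }
}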



Re: testHDFSConf.xml

2013-04-10 Thread Colin McCabe
On Wed, Apr 10, 2013 at 10:16 AM, Jay Vyas  wrote:

> Hello HDFS brethren !
>
> I've noticed that the testHDFSConf.xml has a lot of references to
> supergroup.
>
>
> https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
>
> 1) I wonder why this is hardcoded in the testHDFSConf.xml
>
>
"supergroup" is the default supergroup in HDFS.  Check DFSConfigKeys.java:

  public static final String  DFS_PERMISSIONS_SUPERUSERGROUP_KEY =
"dfs.permissions.superusergroup";
  public static final String  DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT =
"supergroup";

It seems fine to use "supergroup" in a test.  After all, we do control the
configuration we pass into the test.
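
A minimal sketch of overriding it, assuming the 2.x-era MiniDFSCluster builder
API (the group name here is arbitrary):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class SupergroupOverrideSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Override the default so the test no longer depends on "supergroup".
    conf.set(DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY, "testgroup");
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      // ... run the CLI test commands against cluster.getFileSystem() ...
    } finally {
      cluster.shutdown();
    }
  }
}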


> 2) Also, I'm wondering if there are any good ideas for extending/modifying
> this file for an extension of the FileSystem implementation.
>
>
It would be interesting to think about pulling the non-hdfs-specific
components of TestHDFSCLI into another test; perhaps one in common.
 Theoretically, what we print on the console should be really similar, no
matter whether HDFS or some other filesystem is being used.  In practice,
there may be some differences, however...

I find it a little bit challenging to modify TestHDFSCLI because the test
is really long and executes as a single unit.  Breaking it down into
multiple units would probably be another good improvement, at least in my
opinion.

best,
Colin


Right now I'm doing some global find-and-replace edits - but was thinking
> that maybe parameterizing the file would be a good JIRA - so that people
> could use this as a base test for FileSystem implementations
>
> Depending on feedback I'm certainly willing to submit and put in a first
> pass at a more modular version of this file.
>
> It's in many ways a very generalizable component of the hdfs trunk.
>
> Thanks!
> --
> Jay Vyas
> http://jayunit100.blogspot.com
>


testHDFSConf.xml

2013-04-10 Thread Jay Vyas
Hello HDFS brethren !

I've noticed that the testHDFSConf.xml has a lot of references to
supergroup.

https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml

1) I wonder why this is hardcoded in the testHDFSConf.xml

2) Also, I'm wondering if there are any good ideas for extending/modifying
this file for an extension of the FileSystem implementation.

Right now I'm doing some global find-and-replace edits - but was thinking
that maybe parameterizing the file would be a good JIRA - so that people
could use this as a base test for FileSystem implementations.
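
One possible shape for that, as a sketch only (the [SUPERGROUP] token, paths,
and system property are made up, not something the test driver supports today):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ParameterizeCliXml {
  public static void main(String[] args) throws Exception {
    // Expand a made-up [SUPERGROUP] token before handing the XML to the
    // CLI test driver, instead of hardcoding "supergroup" in the file.
    String xml = new String(
        Files.readAllBytes(Paths.get("src/test/resources/testHDFSConf.xml")),
        StandardCharsets.UTF_8);
    xml = xml.replace("[SUPERGROUP]",
        System.getProperty("test.supergroup", "supergroup"));
    Files.write(Paths.get("target/testHDFSConf.expanded.xml"),
        xml.getBytes(StandardCharsets.UTF_8));
  }
}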

Depending on feedback I'm certainly willing to submit and put in a first
pass at a more modular version of this file.

It's in many ways a very generalizable component of the hdfs trunk.

Thanks!
-- 
Jay Vyas
http://jayunit100.blogspot.com


[jira] [Created] (HDFS-4683) Per directory trash settings / trash override

2013-04-10 Thread John Vines (JIRA)
John Vines created HDFS-4683:


 Summary: Per directory trash settings / trash override
 Key: HDFS-4683
 URL: https://issues.apache.org/jira/browse/HDFS-4683
 Project: Hadoop HDFS
  Issue Type: Improvement
 Environment: Hadoop2
Reporter: John Vines


With the migration of trash settings to the server side, it becomes more
complicated for applications built on top of HDFS to properly deal with their
trash. Applications like HBase and Accumulo already have a fair amount of trash
management; adding the HDFS trash will simply put more stress on DFS. But fully
disabling the trash is overkill, as there may still be uses for it elsewhere in
Hadoop.

I would like to request either:
A. Per-directory or per-user trash settings, so that applications which work in
a specific directory or as a specific user can continue to ignore the trash.
B. An updated DistributedFileSystem delete() call that lets callers force-skip
the trash. I'm not sure how feasible this is given the FileSystem API, but it
may be possible.



Build failed in Jenkins: Hadoop-Hdfs-trunk #1368

2013-04-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1368/

Changes:

[llu] HADOOP-9467. Metrics2 record filter should check name as well as tags. 
(Ganeshan Iyler via llu)

[suresh] HADOOP-9437. TestNativeIO#testRenameTo fails on Windows due to 
assumption that POSIX errno is embedded in NativeIOException. Contributed by 
Chris Nauroth.

[todd] HDFS-4643. Fix flakiness in TestQuorumJournalManager. Contributed by 
Todd Lipcon.

[vinodkv] YARN-534. Change RM restart recovery to also account for AM 
max-attempts configuration after the restart. Contributed by Jian He.

[vinodkv] YARN-112. Fixed a race condition during localization that fails 
containers. Contributed by Omkar Vinit Joshi.
MAPREDUCE-5138. Fix LocalDistributedCacheManager after YARN-112. Contributed by 
Omkar Vinit Joshi.

[suresh] HDFS-4669. TestBlockPoolManager fails using IBM java. Contributed by 
Tian Hong Wang.

[suresh] HDFS-4674. TestBPOfferService fails on Windows due to failure parsing 
datanode data directory as URI. Contributed by Chris Nauroth.

[suresh] HDFS-4676. TestHDFSFileSystemContract should set MiniDFSCluster 
variable to null to free up memory. Contributed by Suresh Srinivas.

--
[...truncated 14019 lines...]

Hadoop-Hdfs-trunk - Build # 1368 - Still Failing

2013-04-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1368/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 14212 lines...]
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)

Running org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.674 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.071 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.672 sec

Results :

Failed tests:   
testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints):
 SBN should have still been checkpointing.

Tests run: 32, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS 
[1:28:40.476s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [2:25.964s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [57.400s]
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:32:04.616s
[INFO] Finished at: Wed Apr 10 13:05:54 UTC 2013
[INFO] Final Memory: 54M/813M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4674
Updating HDFS-4643
Updating MAPREDUCE-5138
Updating YARN-112
Updating HADOOP-9437
Updating HDFS-4676
Updating HDFS-4669
Updating YARN-534
Updating HADOOP-9467
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

for subscription

2013-04-10 Thread Mohammad Mustaqeem
-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad


[jira] [Resolved] (HDFS-4682) getting error while configuring the hadoop

2013-04-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-4682.
--

Resolution: Invalid

Closing as invalid; see http://wiki.apache.org/hadoop/InvalidJiraIssues

> getting error while configuring the hadoop
> --
>
> Key: HDFS-4682
> URL: https://issues.apache.org/jira/browse/HDFS-4682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.0.4
> Environment: windows..
>Reporter: murali
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I tried configuring Hadoop on Windows but I am getting an error... I followed
> the instructions at:
> http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html



[jira] [Created] (HDFS-4682) getting error while configuring the hadoop

2013-04-10 Thread murali (JIRA)
murali created HDFS-4682:


 Summary: getting error while configuring the hadoop
 Key: HDFS-4682
 URL: https://issues.apache.org/jira/browse/HDFS-4682
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4
 Environment: windows..
Reporter: murali


I tried configuring Hadoop on Windows but I am getting an error... I followed
the instructions at:
http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html



Re: [VOTE] Release Apache Hadoop 2.0.4-alpha

2013-04-10 Thread Arun Murthy
Ok, I'll spin rc1 after. Thanks.

Sent from my iPhone

On Apr 10, 2013, at 11:44 AM, Siddharth Seth  wrote:

> Arun, MAPREDUCE-5094 would be a useful jira to include in the 2.0.4-alpha
> release. It's not an absolute blocker since the values can be controlled
> explicitly by changing tests which use the cluster.
>
> Thanks
> - Sid
>
>
> On Tue, Apr 9, 2013 at 8:39 PM, Arun C Murthy  wrote:
>
>> Folks,
>>
>> I've created a release candidate (rc0) for hadoop-2.0.4-alpha that I would
>> like to release.
>>
>> This is a bug-fix release which solves a number of issues discovered
>> during integration testing of the full-stack.
>>
>> The RC is available at:
>> http://people.apache.org/~acmurthy/hadoop-2.0.4-alpha-rc0/
>> The RC tag in svn is here:
>> http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.4-alpha-rc0
>>
>> The maven artifacts are available via repository.apache.org.
>>
>> Please try the release and vote; the vote will run for the usual 7 days.
>>
>> thanks,
>> Arun
>>
>> P.S. Many thanks are in order - Roman/Cos and the rest of the BigTop community
>> for helping to find a number of integration issues, Ted Yu for co-ordinating on
>> HBase, Alejandro for co-ordinating on Oozie, Vinod/Sid/Alejandro/Xuan/Daryn,
>> and the rest of the devs for quickly jumping in and fixing these.
>>
>>
>> --
>> Arun C. Murthy
>> Hortonworks Inc.
>> http://hortonworks.com/
>>
>>
>>


[jira] [Created] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2013-04-10 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4681:


 Summary: 
TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
using IBM java
 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
 Fix For: 2.0.3-alpha


TestBlocksWithNotEnoughRacks unit test fails with the following error message:

testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
  Time elapsed: 8997 sec  <<< FAILURE!
org.junit.ComparisonFailure: Corrupt replica
expected:<...[binary block contents elided]...> but
was:<...[binary block contents elided; a few bytes differ]...>
at org.junit.Assert.assertEquals(Assert.java:123)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)


The root cause is that the unit test uses the in.read() method to read the
block content character by character, which drops the LF characters. The best
fix is to read the block content into a buffer.
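
A sketch of that suggested fix (blockFile stands in for the replica file the
test locates on disk):

import java.io.File;
import java.io.FileInputStream;
import org.apache.hadoop.io.IOUtils;

public class BufferedBlockRead {
  public static String readBlock(File blockFile) throws Exception {
    byte[] buf = new byte[(int) blockFile.length()];
    FileInputStream in = new FileInputStream(blockFile);
    try {
      // Read the whole replica into a buffer instead of char by char, so
      // no bytes (e.g. LF) are dropped or re-encoded under a different JRE.
      IOUtils.readFully(in, buf, 0, buf.length);
    } finally {
      in.close();
    }
    return new String(buf, "UTF-8");
  }
}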


