[jira] [Created] (HDFS-6522) DN will try to append to non-existent replica if the datanode has out-dated block

2014-06-11 Thread stanley shi (JIRA)
stanley shi created HDFS-6522:
-

 Summary: DN will try to append to non-existent replica if the 
datanode has out-dated block
 Key: HDFS-6522
 URL: https://issues.apache.org/jira/browse/HDFS-6522
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: stanley shi


My environment: an HA cluster with 4 datanodes.

Here are the steps to reproduce:
1. Put one file (one block) to HDFS with repl=3; assume dn1, dn2 and dn3 have the block for this file and dn4 doesn't;
2. Stop dn1;
3. Append content to the file 100 times;
4. Close dn1 and start dn4;
5. Append content to the file 100 times again.
Check the datanode log on dn1; many entries like the following will appear:
{quote}
2014-06-12 12:07:04,442 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
2014-06-12 12:07:04,442 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hdsh145.lss.emc.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.37.7.146:55594 dest: /10.37.7.145:50010
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1649188734-10.37.7.142-1398844098971:blk_1073742928_61304
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:392)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:527)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:174)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:454)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
    at java.lang.Thread.run(Thread.java:722)
{quote}
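The failure mode can be sketched in a few lines of Python (illustrative only, not Hadoop code; the class, method names, and the stale stamp value are made up): a datanode restarted with a stale replica still holds the old generation stamp, so an append referencing the newer stamp cannot find a matching replica.

```python
class ReplicaNotFoundError(Exception):
    """Stand-in for ReplicaNotFoundException (illustrative only)."""

class ReplicaMap:
    """Toy model of a datanode's replica map, keyed by block ID."""

    def __init__(self):
        self._stamps = {}  # block_id -> generation stamp of the stored replica

    def add(self, block_id, gen_stamp):
        self._stamps[block_id] = gen_stamp

    def get_for_append(self, block_id, gen_stamp):
        # An append must find a replica whose generation stamp matches the
        # stamp the client sends; a node restarted with a stale copy still
        # holds the old stamp, so the lookup fails.
        stored = self._stamps.get(block_id)
        if stored is None or stored != gen_stamp:
            raise ReplicaNotFoundError(
                "Cannot append to a non-existent replica blk_%d_%d"
                % (block_id, gen_stamp))
        return stored

# A restarted node holding the block with a stale stamp (stamp value made up):
stale_dn = ReplicaMap()
stale_dn.add(1073742928, 61204)
```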



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6471) Make moveFromLocal CLI testcases to be non-disruptive

2014-06-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HDFS-6471.
--

  Resolution: Fixed
Release Note: Committed to trunk and merged into branch-2. Thanks Dasha!

> Make moveFromLocal CLI testcases to be non-disruptive
> -
>
> Key: HDFS-6471
> URL: https://issues.apache.org/jira/browse/HDFS-6471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Dasha Boudnik
>Assignee: Dasha Boudnik
> Fix For: 2.5.0
>
> Attachments: HDFS-6471.patch, HDFS-6471.patch
>
>
> MoveFromLocal tests at the end of TestCLI are disruptive: the original files 
> data15bytes and data30bytes are moved from the local directory to HDFS. 
> Subsequent tests using these files crash.





[jira] [Created] (HDFS-6521) Improve the readability of 'hadoop fs -help'

2014-06-11 Thread Lei Xu (JIRA)
Lei Xu created HDFS-6521:


 Summary: Improve the readability of 'hadoop fs -help'
 Key: HDFS-6521
 URL: https://issues.apache.org/jira/browse/HDFS-6521
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.5.0
Reporter: Lei Xu
Assignee: Lei Xu
Priority: Minor
 Fix For: 2.5.0


`hadoop fs -help` displays help information in a number of different formats.

This patch borrows the format used in `hdfs cacheadmin -help`: all options are formatted using org.apache.hadoop.tools.TableListing.
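To illustrate the target format, here is a rough Python sketch of column-aligned help output, loosely in the spirit of TableListing (the real class is Java and richer; this is illustrative only, and the sample options are abbreviated):

```python
def table_listing(rows, indent=2):
    """Format (option, description) pairs into aligned columns, loosely in
    the spirit of org.apache.hadoop.tools.TableListing (illustrative
    Python sketch, not the actual Hadoop class)."""
    width = max(len(opt) for opt, _ in rows)
    pad = " " * indent
    return "\n".join(
        "%s%-*s  %s" % (pad, width, opt, desc) for opt, desc in rows)

print(table_listing([
    ("-ls [-d] [-h] [-R] <path>", "List the contents of a directory."),
    ("-cat <src>", "Print the file contents to stdout."),
]))
```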





[jira] [Created] (HDFS-6520) Failed to run fsck -move

2014-06-11 Thread Shengjun Xin (JIRA)
Shengjun Xin created HDFS-6520:
--

 Summary: Failed to run fsck -move
 Key: HDFS-6520
 URL: https://issues.apache.org/jira/browse/HDFS-6520
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Shengjun Xin


I encountered an error when running fsck -move.
Steps to reproduce:
1. Set up a pseudo cluster
2. Copy a file to hdfs
3. Corrupt a block of the file
4. Run fsck to check:
{code}
Connecting to namenode via http://localhost:50070
FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at 
Wed Jun 11 15:58:38 CST 2014
.
/user/hadoop/fsck-test: CORRUPT blockpool BP-654596295-10.37.7.84-1402466764642 
block blk_1073741825

/user/hadoop/fsck-test: MISSING 1 blocks of total size 1048576 B.
Status: CORRUPT
 Total size:4104304 B
 Total dirs:1
 Total files:   1
 Total symlinks:0
 Total blocks (validated):  4 (avg. block size 1026076 B)
  
  CORRUPT FILES:1
  MISSING BLOCKS:   1
  MISSING SIZE: 1048576 B
  CORRUPT BLOCKS:   1
  
 Minimally replicated blocks:   3 (75.0 %)
 Over-replicated blocks:0 (0.0 %)
 Under-replicated blocks:   0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:1
 Average block replication: 0.75
 Corrupt blocks:1
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  1
 Number of racks:   1
FSCK ended at Wed Jun 11 15:58:38 CST 2014 in 1 milliseconds


The filesystem under path '/user/hadoop' is CORRUPT
{code}
5. Run fsck -move to move the corrupted file to /lost+found; the error message in the namenode log is:
{code}
2014-06-11 15:48:16,686 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at Wed Jun 11 15:48:16 CST 2014
2014-06-11 15:48:16,894 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 35 Total time for transactions(ms): 9 Number of transactions batched in Syncs: 0 Number of syncs: 25 SyncTimes(ms): 73
2014-06-11 15:48:16,991 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Error reading block
java.io.IOException: Expected empty end-of-read packet! Header: PacketHeader with packetLen=66048 header data: offsetInBlock: 65536
seqno: 1
lastPacketInBlock: false
dataLen: 65536

    at org.apache.hadoop.hdfs.RemoteBlockReader2.readTrailingEmptyPacket(RemoteBlockReader2.java:259)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:220)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:138)
    at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlock(NamenodeFsck.java:649)
    at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(NamenodeFsck.java:543)
    at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:460)
    at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:324)
    at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.fsck(NamenodeFsck.java:233)
    at org.apache.hadoop.hdfs.server.namenode.FsckServlet$1.run(FsckServlet.java:67)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.hdfs.server.namenode.FsckServlet.doGet(FsckServlet.java:58)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1192)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppCont
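For context, the IOException comes from the reader's end-of-stream check: after the last data packet, the sender is expected to send one final packet with an empty payload. A rough Python sketch of that check (illustrative only, not the RemoteBlockReader2 code; field names mirror the log above):

```python
class PacketHeader:
    """Minimal model of a data-transfer packet header (illustrative only)."""

    def __init__(self, seqno, offset_in_block, data_len, last_packet_in_block):
        self.seqno = seqno
        self.offset_in_block = offset_in_block
        self.data_len = data_len
        self.last_packet_in_block = last_packet_in_block

def read_trailing_empty_packet(header):
    # After the final data packet, the sender must send one more packet
    # with an empty payload and lastPacketInBlock set; anything else is
    # rejected, which is the IOException seen in the log above.
    if header.data_len != 0 or not header.last_packet_in_block:
        raise IOError(
            "Expected empty end-of-read packet! Header: offsetInBlock=%d "
            "seqno=%d lastPacketInBlock=%s dataLen=%d"
            % (header.offset_in_block, header.seqno,
               header.last_packet_in_block, header.data_len))
```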

[jira] [Created] (HDFS-6519) Document oiv_legacy command

2014-06-11 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6519:
---

 Summary: Document oiv_legacy command
 Key: HDFS-6519
 URL: https://issues.apache.org/jira/browse/HDFS-6519
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.5.0
Reporter: Akira AJISAKA


HDFS-6293 introduced the oiv_legacy command.
The usage of the command should be documented in OfflineImageViewer.apt.vm.





RE: [Vote] Merge The HDFS XAttrs Feature Branch (HDFS-2006) to Trunk

2014-06-11 Thread Gangumalla, Uma
I have merged this feature to branch-2 now.

From now on, please merge any XAttrs-related fixes to branch-2 as well, if needed.

Tomorrow I will merge the remaining JIRAs which are related to the XAttrs feature but were handled as top-level JIRAs, e.g. DistCp support (MAPREDUCE-5898).

Regards,
Uma

-Original Message-
From: Gangumalla, Uma [mailto:uma.ganguma...@intel.com] 
Sent: Wednesday, May 21, 2014 8:06 PM
To: hdfs-dev@hadoop.apache.org
Subject: RE: [Vote] Merge The HDFS XAttrs Feature Branch (HDFS-2006) to Trunk

Thanks a lot for the great work on the branch and for the support.
I have just completed the merge of the HDFS extended attributes branch (HDFS-2006) to trunk.

Regards,
Uma

-Original Message-
From: Gangumalla, Uma [mailto:uma.ganguma...@intel.com] 
Sent: Wednesday, May 21, 2014 6:38 PM
To: hdfs-dev@hadoop.apache.org
Subject: RE: [Vote] Merge The HDFS XAttrs Feature Branch (HDFS-2006) to Trunk

Thanks a lot for participating in this vote.

With 4 +1's (from me, Andrew Wang, Chris and Colin) and no -1, the vote for the merge has passed.

I will do the merge shortly to trunk.

Regards,
Uma

-Original Message-
From: Gangumalla, Uma [mailto:uma.ganguma...@intel.com] 
Sent: Wednesday, May 14, 2014 6:17 PM
To: hdfs-dev@hadoop.apache.org
Subject: [Vote] Merge The HDFS XAttrs Feature Branch (HDFS-2006) to Trunk

Hello HDFS Devs,
  I would like to call for a vote to merge the HDFS extended attributes (XAttrs) feature from the HDFS-2006 branch to trunk.
  XAttrs are already widely supported by many operating systems, including Linux, Windows, and Mac OS. This feature allows storing attributes on HDFS files and directories.
  An XAttr consists of a name and a value and lives in one of four namespaces: user, trusted, security, and system. An XAttr name is prefixed with one of these namespaces, for example "user.myxattr".
  Consistent with the ongoing attention to Namenode memory usage, the maximum number and size of XAttrs on a file/directory are limited by configuration parameters.
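The naming and limit rules described above can be sketched as follows (illustrative Python only, not the HDFS implementation; the default limits shown are made-up placeholders for the real configuration parameters):

```python
XATTR_NAMESPACES = ("user", "trusted", "security", "system")

def set_xattr(xattrs, name, value, max_xattrs=32, max_size=16384):
    """Validate and store one extended attribute in an inode's dict.

    The namespace rule follows the description above; the limit defaults
    are placeholders, not HDFS's actual configured values.
    """
    namespace, _, local_name = name.partition(".")
    if namespace not in XATTR_NAMESPACES or not local_name:
        raise ValueError(
            "XAttr name must be prefixed with one of %s" % (XATTR_NAMESPACES,))
    if name not in xattrs and len(xattrs) >= max_xattrs:
        raise ValueError("Too many XAttrs on this file/directory")
    if len(name) + len(value) > max_size:
        raise ValueError("XAttr is too large")
    xattrs[name] = value

attrs = {}
set_xattr(attrs, "user.myxattr", b"some value")
```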
  The design document contains more details and can be found here: 
https://issues.apache.org/jira/secure/attachment/12644341/HDFS-XAttrs-Design-3.pdf
  Development of this feature has been tracked in JIRA HDFS-2006: 
https://issues.apache.org/jira/browse/HDFS-2006
  All of the development work for the feature is contained in the "HDFS-2006" 
branch: https://svn.apache.org/repos/asf/hadoop/common/branches/HDFS-2006
  As the last tasks, we are working to support XAttrs via libhdfs and webhdfs, as well as other minor improvements.
  We intend to finish those enhancements before the vote completes; otherwise we will move them to top-level JIRAs, as they can be tracked independently. The user documentation for this feature is also ready.
  It is attached to the JIRA:
https://issues.apache.org/jira/secure/attachment/12644787/ExtendedAttributes.html
 The XAttrs feature is backwards-compatible and enabled by default. A cluster 
administrator can disable it.
Testing:
 We've developed more than 70 new tests covering the XAttr get, set and remove APIs through DistributedFileSystem and WebHdfsFileSystem, the new XAttr CLI commands, HA, and XAttr persistence in the fsimage.
  Additional  testing plans are documented in: 
https://issues.apache.org/jira/secure/attachment/12644342/Test-Plan-for-Extended-Attributes-1.pdf
  Thanks a lot to the contributors who have helped and participated in the 
branch development.
  Code contributors are Yi Liu, Charles Lamb, Andrew Wang and Uma Maheswara Rao 
G.
 The design document incorporates feedback from many community members: Chris 
Nauroth, Andrew Purtell, Tianyou Li, Avik Dey, Charles Lamb, Alejandro, Andrew 
Wang, Tsz Wo Nicholas Sze and Uma Maheswara Rao G.
 Code reviewers on individual patches include Chris Nauroth, Alejandro, Andrew 
Wang, Charles Lamb, Tsz Wo Nicholas Sze and Uma Maheswara Rao G.

  Also, thanks to Dhruba for filing this JIRA, and thanks to everyone who participated in the discussions.
This vote will run for a week and close on 5/21/2014 at 06:16 pm IST.

Here is my +1 to start with.
Regards,
Uma
(umamah...@apache.org)





RE: hadoop-2.5 - June end?

2014-06-11 Thread Gangumalla, Uma
Yes, Suresh.

I have merged HDFS-2006 (extended attributes) to branch-2, so it will be included in the 2.5 release.

Regards,
Uma

-Original Message-
From: Suresh Srinivas [mailto:sur...@hortonworks.com] 
Sent: Tuesday, June 10, 2014 10:15 PM
To: mapreduce-...@hadoop.apache.org
Cc: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: Re: hadoop-2.5 - June end?

We should also include extended attributes feature for HDFS from HDFS-2006 for 
release 2.5.


On Mon, Jun 9, 2014 at 9:39 AM, Arun C Murthy  wrote:

> Folks,
>
>  As you can see from the Roadmap wiki, it looks like several items are 
> still a bit away from being ready.
>
>  I think rather than wait for them, it will be useful to create an 
> intermediate release (2.5) this month - I think ATS security is pretty 
> close, so we can ship that. I'm thinking of creating hadoop-2.5 by end 
> of the month, with a branch a couple of weeks prior.
>
>  Thoughts?
>
> thanks,
> Arun
>
>
> --
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or 
> entity to which it is addressed and may contain information that is 
> confidential, privileged and exempt from disclosure under applicable 
> law. If the reader of this message is not the intended recipient, you 
> are hereby notified that any printing, copying, dissemination, 
> distribution, disclosure or forwarding of this communication is 
> strictly prohibited. If you have received this communication in error, 
> please contact the sender immediately and delete it from your system. Thank 
> You.
>



--
http://hortonworks.com/download/



[jira] [Created] (HDFS-6518) TestCacheDirectives#testExceedsCapacity fails intermittently

2014-06-11 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-6518:
---

 Summary: TestCacheDirectives#testExceedsCapacity fails 
intermittently
 Key: HDFS-6518
 URL: https://issues.apache.org/jira/browse/HDFS-6518
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang


Observed from 

https://builds.apache.org/job/PreCommit-HDFS-Build/7080//testReport/

Test 
org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testExceedsCapacity
fails intermittently
{code}
Failing for the past 1 build (Since Failed#7080 )
Took 7.3 sec.
Stacktrace

java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.checkPendingCachedEmpty(TestCacheDirectives.java:1416)
at 
org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testExceedsCapacity(TestCacheDirectives.java:1437)
{code}

A second run with the same code was successful:

https://builds.apache.org/job/PreCommit-HDFS-Build/7082//testReport/

Running it locally is also successful.

HDFS-6257 mentioned a possible race; the issue may still be there.

Thanks.






[jira] [Created] (HDFS-6517) Update hadoop-metrics2.properties examples to Yarn

2014-06-11 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6517:
---

 Summary: Update hadoop-metrics2.properties examples to Yarn
 Key: HDFS-6517
 URL: https://issues.apache.org/jira/browse/HDFS-6517
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA


HDFS side of HADOOP-9919.
HADOOP-9919 updated the hadoop-metrics2.properties examples to YARN; however, the examples are still old, because the hadoop-metrics2.properties in the HDFS project is the one that is actually packaged.
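For reference, a post-HADOOP-9919 example in hadoop-metrics2.properties uses the YARN daemon names rather than the old JobTracker/TaskTracker ones. A sketch of what the updated example might look like (the exact lines are an assumption, not the committed file):

```properties
*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
# namenode.sink.file.filename=namenode-metrics.out
# datanode.sink.file.filename=datanode-metrics.out
# resourcemanager.sink.file.filename=resourcemanager-metrics.out
# nodemanager.sink.file.filename=nodemanager-metrics.out
```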





[jira] [Created] (HDFS-6516) Implement List Encryption Zones

2014-06-11 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-6516:
--

 Summary: Implement List Encryption Zones
 Key: HDFS-6516
 URL: https://issues.apache.org/jira/browse/HDFS-6516
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Charles Lamb
Assignee: Charles Lamb


The list encryption zones command (CLI) and its backend implementation (FSNamesystem) need to be implemented. As part of this, the tests in TestEncryptionZonesAPI should be updated to use it to validate the results of the various CreateEZ and DeleteEZ tests.





Jenkins build is back to normal : Hadoop-Hdfs-trunk #1771

2014-06-11 Thread Apache Jenkins Server
See 



[jira] [Created] (HDFS-6515) testPageRounder (org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache)

2014-06-11 Thread Tony Reix (JIRA)
Tony Reix created HDFS-6515:
---

 Summary: testPageRounder   
(org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache)
 Key: HDFS-6515
 URL: https://issues.apache.org/jira/browse/HDFS-6515
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
 Environment: Linux on PPC64
Reporter: Tony Reix
Priority: Blocker


I have an issue with the test
   testPageRounder
  (org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache)
on Linux/PowerPC.

On Linux/Intel, the test runs fine.

On Linux/PowerPC, I get:
testPageRounder(org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache)  Time elapsed: 64.037 sec  <<< ERROR!
java.lang.Exception: test timed out after 6 milliseconds

Looking at the details, I see that some "Failed to cache" messages appear in the traces: only 10 on Intel, but 186 on PPC64.

On PPC64, it looks like some thread is waiting for something that never happens, generating a timeout.

I'm normally using the IBM JVM, but I've just checked that the issue also appears with OpenJDK.

I'm running the latest Hadoop; the issue already appeared with Hadoop 2.4.0.

I need help understanding what the test is doing and what traces are expected, in order to find the root cause.
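One thing worth checking on PPC64: the test name suggests it exercises rounding of cached lengths up to the OS page size, and PPC64 kernels commonly use 64 KiB pages instead of the 4 KiB typical on x86. A minimal Python sketch of such a page rounder (an assumption about what the test exercises, not the actual test code):

```python
def round_up_to_page(length, page_size):
    """Round a length up to a multiple of the OS page size (a guess at
    the kind of arithmetic a 'page rounder' performs)."""
    if page_size <= 0 or page_size & (page_size - 1):
        raise ValueError("page size must be a positive power of two")
    return (length + page_size - 1) & ~(page_size - 1)

# The same length rounds up to a much larger cached size with 64 KiB
# pages (common on PPC64) than with 4 KiB pages (typical on x86):
small_x86 = round_up_to_page(100, 4096)     # 4096
small_ppc = round_up_to_page(100, 65536)    # 65536
```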






[jira] [Created] (HDFS-6514) Add MirrorJournal ( server side implementation) to handle the journals from Primary cluster

2014-06-11 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-6514:
---

 Summary: Add MirrorJournal ( server side implementation) to handle 
the journals from Primary cluster
 Key: HDFS-6514
 URL: https://issues.apache.org/jira/browse/HDFS-6514
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Vinayakumar B


This targets the server-side implementation of handling the journals sent by the primary cluster via the MirrorJournalManager.

This service will run in the mirror cluster's active namenode. It receives journal RPC requests from the MirrorJournalManager, processes the edits, writes them to local shared storage, and applies them to the in-memory namespace.





[jira] [Created] (HDFS-6513) Add MirrorJournalManager to transfer edits from primary cluster to Mirror cluster

2014-06-11 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-6513:
---

 Summary: Add MirrorJournalManager to transfer edits from primary 
cluster to Mirror cluster
 Key: HDFS-6513
 URL: https://issues.apache.org/jira/browse/HDFS-6513
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Vinayakumar B


A separate JournalManager implementation is needed to transfer edits to the mirror cluster when data replication to the mirror is synchronous.

This JIRA targets the implementation of the MirrorJournalManager, which will be used as another shared journal on the primary cluster's active namenode.
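The intended role can be sketched as follows (toy Python, illustrative only; the class name comes from this JIRA, but the method signature and the mirror proxy are assumptions):

```python
class FakeMirrorProxy:
    """Stands in for the RPC proxy to the mirror cluster's active namenode."""

    def __init__(self):
        self.applied = []

    def apply_edits(self, first_txid, records):
        self.applied.append((first_txid, list(records)))

class MirrorJournalManager:
    """Forwards each edit batch to the mirror before returning, modelling
    the synchronous replication mode described above."""

    def __init__(self, mirror):
        self.mirror = mirror

    def journal(self, first_txid, records):
        # Synchronous: the primary's logging path blocks until the
        # mirror acknowledges the batch.
        self.mirror.apply_edits(first_txid, records)

jm = MirrorJournalManager(FakeMirrorProxy())
jm.journal(101, ["OP_MKDIR", "OP_SET_PERMISSIONS"])
```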


