[ https://issues.apache.org/jira/browse/ACCUMULO-683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13410431#comment-13410431 ]

Billie Rinaldi edited comment on ACCUMULO-683 at 7/10/12 3:28 PM:
------------------------------------------------------------------

That looks like something we should figure out how to handle better.  The 
!METADATA table uses a higher replication (5 by default) because it is 
critical not to lose those files.  To make 1.4.1 run with an HDFS max 
replication of 3, you could manually set the table.file.replication 
parameter on the !METADATA table to the HDFS maximum.  However, I would 
strongly urge against making the !METADATA replication lower than 5.
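
For reference, that change can be made from the Accumulo shell with 
{{config -t !METADATA -s table.file.replication=3}}, or programmatically; 
here is a minimal sketch against the 1.4 client API (the instance name, 
ZooKeeper quorum, and credentials are placeholders):

{code:java}
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;

public class SetMetadataReplication {
  public static void main(String[] args) throws Exception {
    // Placeholder instance name, ZooKeeper quorum, and credentials.
    Connector conn = new ZooKeeperInstance("myInstance", "zkhost:2181")
        .getConnector("root", "secret".getBytes());
    // Match the !METADATA file replication to the HDFS maximum (3 in this report).
    conn.tableOperations().setProperty("!METADATA", "table.file.replication", "3");
  }
}
{code}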

We should decide whether we want Accumulo to automatically cap the !METADATA 
replication at the HDFS max when that max is less than five, or to throw an 
informative error telling the user to lower the !METADATA replication 
themselves, at a greater risk of losing critical data.
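
A sketch of the first option, to make the trade-off concrete: 
{{dfs.replication.max}} is the real HDFS setting (default 512), while the 
class and method names here are hypothetical:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.log4j.Logger;

// Hypothetical helper: clamp the requested file replication to the HDFS
// maximum and warn, instead of letting the NameNode reject file creates.
public class ReplicationClamp {
  private static final Logger log = Logger.getLogger(ReplicationClamp.class);

  public static short effectiveReplication(short requested, Configuration hadoopConf) {
    // dfs.replication.max defaults to 512; the reporter's cluster set it to 3.
    int hdfsMax = hadoopConf.getInt("dfs.replication.max", 512);
    if (requested > hdfsMax) {
      log.warn("table.file.replication " + requested + " exceeds dfs.replication.max "
          + hdfsMax + "; using " + hdfsMax);
      return (short) hdfsMax;
    }
    return requested;
  }
}
{code}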
                
> Accumulo ignores HDFS max replication configuration
> ---------------------------------------------------
>
>                 Key: ACCUMULO-683
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-683
>             Project: Accumulo
>          Issue Type: Bug
>          Components: tserver
>    Affects Versions: 1.4.1
>            Reporter: Jim Klucar
>            Assignee: Keith Turner
>            Priority: Minor
>
> I set up a new 1.4.1 instance running on top of a Hadoop installation 
> that had the maximum block replication set to 3, and the following error 
> showed up on the monitor page.
> java.io.IOException: failed to create file /accumulo/tables/!0/table_info/F0000001.rf_tmp on client 127.0.0.1. Requested replication 5 exceeds maximum 3
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1220)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1123)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:551)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
> Tablet server error is:
> 10 10:56:25,408 [tabletserver.MinorCompactor] WARN : MinC failed (java.io.IOException: failed to create file /accumulo/tables/!0/table_info/F0000001.rf_tmp on client 127.0.0.1. Requested replication 5 exceeds maximum 3
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1220)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1123)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:551)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
> ) to create /accumulo/tables/!0/table_info/F0000001.rf_tmp retrying ...
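
For context, the check in these traces is enforced by the NameNode on any 
file create; a minimal sketch that reproduces it from a plain HDFS client 
(the path is illustrative; the replication values mirror the report):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationRepro {
  public static void main(String[] args) throws Exception {
    // Point fs.default.name at a NameNode configured with dfs.replication.max=3.
    FileSystem fs = FileSystem.get(new Configuration());
    // Requesting 5 replicas then fails with
    // "Requested replication 5 exceeds maximum 3", as in the traces above.
    fs.create(new Path("/tmp/repl-test"), true, 4096, (short) 5,
        fs.getDefaultBlockSize()).close();
  }
}
{code}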
