Jim Klucar created ACCUMULO-683:
-----------------------------------

             Summary: Accumulo ignores HDFS max replication configuration
                 Key: ACCUMULO-683
                 URL: https://issues.apache.org/jira/browse/ACCUMULO-683
             Project: Accumulo
          Issue Type: Bug
          Components: tserver
    Affects Versions: 1.4.1
            Reporter: Jim Klucar
            Assignee: Keith Turner
            Priority: Minor


I set up a new 1.4.1 instance running on top of a Hadoop installation that had 
the HDFS maximum block replication (dfs.replication.max) set to 3, and the 
following error showed up on the monitor page.

java.io.IOException: failed to create file /accumulo/tables/!0/table_info/F0000001.rf_tmp on client 127.0.0.1. Requested replication 5 exceeds maximum 3
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1220)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1123)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:551)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)

The tablet server error is:

10 10:56:25,408 [tabletserver.MinorCompactor] WARN : MinC failed (java.io.IOException: failed to create file /accumulo/tables/!0/table_info/F0000001.rf_tmp on client 127.0.0.1.
Requested replication 5 exceeds maximum 3
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1220)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1123)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:551)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
) to create /accumulo/tables/!0/table_info/F0000001.rf_tmp retrying ...
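For context, the file in both traces lives under /accumulo/tables/!0/, i.e. the 
!METADATA table, so the requested replication of 5 presumably comes from that 
table's table.file.replication setting rather than from HDFS defaults. The 
namenode rejects any create request above dfs.replication.max at startFile() 
time. Below is a minimal standalone sketch (not Accumulo code) of that HDFS-side 
behavior, assuming the flush goes through the standard FileSystem.create overload 
that takes a replication factor; the class and path names are hypothetical.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical check that reproduces the namenode rejection by requesting
// more replicas than dfs.replication.max allows.
public class ReplicationMaxCheck {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();     // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        Path tmp = new Path("/tmp/replication-check.rf_tmp");  // hypothetical path
        short requested = 5;                          // what the tserver asks for
        int bufferSize = conf.getInt("io.file.buffer.size", 4096);
        try {
            FSDataOutputStream out = fs.create(tmp, true, bufferSize,
                    requested, fs.getDefaultBlockSize());
            out.close();
        } catch (IOException e) {
            // With dfs.replication.max=3, startFile() on the namenode fails here with
            // "Requested replication 5 exceeds maximum 3", as in the traces above.
            System.err.println("create failed: " + e.getMessage());
        } finally {
            fs.delete(tmp, false);                    // clean up if the create succeeded
        }
    }
}

Presumably the fix is for Accumulo to clamp the replication it requests to the 
namenode's configured maximum; in the meantime, lowering the metadata table's 
replication from the shell (config -t !METADATA -s table.file.replication=3) 
should work around the failing minor compactions.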


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

