milleruntime opened a new issue #2350:
URL: https://github.com/apache/accumulo/issues/2350
I saw a few errors that I did not expect after deleting a table that had
actively running external compactions. Here is what gets printed in the
compactor log:
<pre>
2021-11-08T12:20:31,706 [compaction.FileCompactor] ERROR: File does not exist: /accumulo/tables/2/t-000002w/C0000115.rf_tmp (inode 17654) Holder DFSClient_NONMAPREDUCE_554270523_18 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3050)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.completeFileInternal(FSDirWriteFileOp.java:704)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.completeFile(FSDirWriteFileOp.java:690)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:963)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:639)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
</pre>
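The server-side frames show the NameNode rejecting completeFile in its lease check: by the time the compactor closed its output stream, the _tmp file was gone, presumably removed out from under the compactor by the table delete. As a minimal sketch of the same failure mode against bare HDFS, independent of Accumulo (the namenode URI and path are placeholders):
<pre>
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteWhileWriting {
  public static void main(String[] args) throws Exception {
    URI nn = URI.create("hdfs://localhost:8020"); // placeholder namenode
    Configuration conf = new Configuration();
    Path tmp = new Path("/tmp/demo.rf_tmp");      // placeholder path

    // The writer's DFSClient holds an HDFS lease on the open file.
    FileSystem writerFs = FileSystem.newInstance(nn, conf);
    FSDataOutputStream out = writerFs.create(tmp);
    out.write(new byte[1024]);
    out.hflush();

    // A second client deletes the file while it is still open, the way the
    // table delete did to C0000115.rf_tmp.
    try (FileSystem deleterFs = FileSystem.newInstance(nn, conf)) {
      deleterFs.delete(tmp, false);
    }

    // close() sends completeFile to the NameNode; the lease check now fails
    // with "File does not exist ... does not have any open files."
    out.close();
  }
}
</pre>
The compactor's full client-side view of that rejection follows: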
<pre>
org.apache.hadoop.ipc.RemoteException: File does not exist: /accumulo/tables/2/t-000002w/C0000115.rf_tmp (inode 17654) Holder DFSClient_NONMAPREDUCE_554270523_18 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3050)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.completeFileInternal(FSDirWriteFileOp.java:704)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.completeFile(FSDirWriteFileOp.java:690)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:963)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:639)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1508) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1405) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:234) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:119) ~[hadoop-client-api-3.3.0.jar:?]
    at com.sun.proxy.$Proxy34.complete(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:570) ~[hadoop-client-api-3.3.0.jar:?]
    at jdk.internal.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) ~[hadoop-client-api-3.3.0.jar:?]
    at com.sun.proxy.$Proxy35.complete(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:957) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:914) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:897) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:852) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) ~[hadoop-client-api-3.3.0.jar:?]
    at org.apache.accumulo.core.file.streams.RateLimitedOutputStream.close(RateLimitedOutputStream.java:54) ~[accumulo-core-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
    at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer.close(BCFile.java:369) ~[accumulo-core-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
    at org.apache.accumulo.core.file.rfile.RFile$Writer.close(RFile.java:635) ~[accumulo-core-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
    at org.apache.accumulo.server.compaction.FileCompactor.call(FileCompactor.java:236) ~[accumulo-server-base-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
    at org.apache.accumulo.compactor.Compactor$6.run(Compactor.java:553) ~[accumulo-compactor-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
    at java.lang.Thread.run(Thread.java:829) [?:?]
</pre>
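For context, this is roughly the client-side sequence that triggered it. A minimal sketch, assuming an instance configured to dispatch compactions to external compactors; the table name, row data, and client.properties path are placeholders:
<pre>
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.admin.CompactionConfig;
import org.apache.accumulo.core.data.Mutation;

public class DeleteTableDuringExternalCompaction {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build()) {
      String table = "test";
      client.tableOperations().create(table);

      // Write enough data that the compaction runs for a while.
      try (BatchWriter bw = client.createBatchWriter(table)) {
        for (int i = 0; i < 250_000; i++) {
          Mutation m = new Mutation(String.format("row%08d", i));
          m.put("cf", "cq", "value" + i);
          bw.addMutation(m);
        }
      }
      client.tableOperations().flush(table, null, null, true);

      // Queue a compaction without waiting on it; with external compactions
      // configured it is handed off to a separate Compactor process.
      client.tableOperations().compact(table, new CompactionConfig().setWait(false));

      // Delete the table while the external compaction is still running.
      client.tableOperations().delete(table);
    }
  }
}
</pre>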
<pre>
2021-11-08T12:20:31,706 [threads.AccumuloUncaughtExceptionHandler] ERROR: Caught an Exception in Thread[Compaction job for tablet TKeyExtent(table:32, endRow:31 33 33 33 33 33 33 33 33 33 33 33 33 33 33 35, prevEndRow:30 63 63 63 63 63 63 63 63 63 63 63 63 63 63 65),5,main]. Thread is dead.
java.lang.RuntimeException: Compaction failed
    at org.apache.accumulo.compactor.Compactor$6.run(Compactor.java:568) ~[accumulo-compactor-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
    at java.lang.Thread.run(Thread.java:829) [?:?]
</pre>
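The "Thread is dead" line is the JVM's ordinary uncaught-exception path: the compaction job thread rethrows the failure as a RuntimeException, which reaches the thread's UncaughtExceptionHandler. A stripped-down sketch of that mechanism, with the handler message paraphrased from the log:
<pre>
public class UncaughtDemo {
  public static void main(String[] args) throws Exception {
    Thread job = new Thread(() -> {
      // Stands in for Compactor$6.run wrapping the compaction failure.
      throw new RuntimeException("Compaction failed");
    }, "Compaction job for tablet ...");

    // Plays the role of AccumuloUncaughtExceptionHandler.
    job.setUncaughtExceptionHandler((t, e) ->
        System.err.printf("Caught an Exception in %s. Thread is dead.%n", t));

    job.start();
    job.join();
  }
}
</pre>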
Right before the errors, this is what the same compactor was reporting in its
log:
<pre>
2021-11-08T12:20:30,189 [compactor.Compactor] INFO : Starting up compaction runnable for job: TExternalCompactionJob(externalCompactionId:ECID:cd145bea-6d06-45cc-b2a2-5a9575ab3e42, extent:TKeyExtent(table:32, endRow:31 33 33 33 33 33 33 33 33 33 33 33 33 33 33 35, prevEndRow:30 63 63 63 63 63 63 63 63 63 63 63 63 63 63 65), files:[InputFile(metadataFileEntry:hdfs://localhost:8020/accumulo/tables/2/t-000002w/F00000ym.rf, size:2306065, entries:61204, timestamp:-1), InputFile(metadataFileEntry:hdfs://localhost:8020/accumulo/tables/2/t-000002w/F00000z8.rf, size:2194819, entries:58318, timestamp:-1), InputFile(metadataFileEntry:hdfs://localhost:8020/accumulo/tables/2/t-000002w/F00000zu.rf, size:2325338, entries:61640, timestamp:-1), InputFile(metadataFileEntry:hdfs://localhost:8020/accumulo/tables/2/t-000002w/F000010l.rf, size:2322428, entries:61581, timestamp:-1)], iteratorSettings:IteratorConfig(iterators:[]), outputFile:hdfs://localhost:8020/accumulo/tables/2/t-000002w/C0000115.rf_tmp, propagateDeletes:true, kind:SYSTEM, userCompactionId:0, overrides:{})
2021-11-08T12:20:30,191 [compactor.Compactor] DEBUG: Progress checks will occur every 1 seconds
2021-11-08T12:20:31,191 [compactor.Compactor] DEBUG: Updating coordinator with compaction progress: Compaction in progress, read 172032 of 242743 input entries ( 70.87001 % ), written 172032 entries.
</pre>
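As a sanity check, the progress numbers are internally consistent: the four input files total 61204 + 58318 + 61640 + 61581 = 242743 entries, and 172032 / 242743 ≈ 70.87 %, matching the reported percentage. So the compaction was well underway when the delete pulled its output file away half a second later.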