I can't match those line numbers up exactly. What version are you running?

Regardless, a zero-length RFile is not a valid RFile. It looks like it
is trying to read the meta information from the end of the RFile to
initialize the file reader object; with a zero-length file, the seek
offset it computes for that metadata ends up negative, which is where
the EOFException comes from.

You will need to copy valid empty RFiles over the zero-length ones,
but there's nothing in the information you provided that explains how
the zero-length files appeared in the first place. Did you have an
HDFS failure or some other system failure before this? Is there
anything in your tserver logs mentioning that file name that would
show how it ended up with no contents? Perhaps a disk failure? It's
worth investigating just to understand the full situation, but the
fix itself should just be to copy a valid, empty Accumulo file over
the bad one.
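
If I remember right, Accumulo ships a small utility for exactly this
(org.apache.accumulo.core.file.rfile.CreateEmpty, runnable via the
accumulo script), but if that's not available in your version, a rough
sketch like the one below using the public RFile writer API
(org.apache.accumulo.core.client.rfile.RFile, available in 1.8 and
later) should produce a valid empty file. The path is just a
placeholder, and I haven't run this exact snippet, so treat it as a
sketch rather than a drop-in fix:

import org.apache.accumulo.core.client.rfile.RFile;
import org.apache.accumulo.core.client.rfile.RFileWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class MakeEmptyRFile {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Open a writer and close it without appending anything; closing
    // still writes out the RFile metadata, so the result is a valid
    // (but empty) file. "/tmp/empty.rf" is just a placeholder path.
    RFileWriter writer = RFile.newWriter()
        .to("/tmp/empty.rf")
        .withFileSystem(fs)
        .build();
    writer.close();
  }
}

Once you have that file, copy it over each zero-length RFile in HDFS
and the compactions should be able to proceed.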

On Wed, Sep 4, 2019 at 11:54 AM Bulldog20630405
<[email protected]> wrote:
>
>
> minor and major compaction hung with the following error (note the rfiles are 
> zero length).  has anyone seen this before? what is the root cause of this?
> (note: i can copy over empty rfiles to replace the zero length ones; however, 
> trying to know what went wrong):
>
> Some problem opening map file hdfs://namenode/accumulo/tables/9/xyz.rf Cannot 
> seek to negative offset
> java.io.EOFException: Cannot seek to negative offset
> at org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1459)
> ...
> at org.apache.accumulo.core.file.RFile$Reader.<init>(RFile.java:1149)
> ...
> at org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:2034)
> at org.apache.accumulo.tserver.tablet.Tablet._majorCompact(Tablet.java:2160)
> ...
> at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>
>
>
