[ https://issues.apache.org/jira/browse/ACCUMULO-3967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708281#comment-14708281 ]

Josh Elser commented on ACCUMULO-3967:
--------------------------------------

Also, just to confirm: the rfile does contain all of the entries. This is 
definitely an issue with bulk load.

{noformat}
% accumulo rfile-info /accumulo/tables/4/b-0000127/I0000128.rf
Reading file: hdfs://localhost:8020/accumulo/tables/4/b-0000127/I0000128.rf
Locality group         : <DEFAULT>
        Start block          : 0
        Num   blocks         : 360
        Index level 0        : 29,497 bytes  1 blocks
        First key            : 00:0002275267839497F4F86F7142D0EC2F 
Family:Qualifier [] 9223372036854775807 false
        Last key             : 23:FFFF7F3618B77EB07A51EE91381CE0BF 
Family:Qualifier [] 9223372036854775807 false
        Num entries          : 1,000,000
        Column families      : [Family]
{noformat}
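For anyone trying to reproduce this, the shell-side sequence from the report looks roughly like the following. This is only a sketch: the table name, split points, and HDFS paths are placeholders, not the reporter's actual values.

{noformat}
# Hypothetical table name, splits, and directories -- illustrative only.
# Commands as entered in the Accumulo 1.7 shell:
createtable test_bulk                               # step 3 (or deleterows -f on an existing table)
addsplits 02 03 17 18 -t test_bulk                  # step 4: pre-split just before importing
importdirectory /tmp/bulk /tmp/bulk-failures true   # step 5: bulk-load the rfiles
{noformat}

The key detail seems to be that the addsplits and importdirectory happen back-to-back, while tablets from the split are still being reassigned.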

> bulk import loses records when loading pre-split table
> ------------------------------------------------------
>
>                 Key: ACCUMULO-3967
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3967
>             Project: Accumulo
>          Issue Type: Bug
>          Components: client, tserver
>    Affects Versions: 1.7.0
>         Environment: generic hadoop 2.6.0, zookeeper 3.4.6 on redhat 6.7
> 7 node cluster
>            Reporter: Edward Seidl
>            Priority: Blocker
>             Fix For: 1.7.1, 1.8.0
>
>
> I just noticed that some records I'm loading via importDirectory go missing.  
> After a lot of digging around trying to reproduce the problem, I discovered 
> that it occurs most frequently when loading a table that I have just recently 
> added splits to.  In the tserver logs I'll see messages like 
> {noformat}
> 20 16:25:36,805 [client.BulkImporter] INFO : Could not assign 1 map files to 
> tablet 1xw;18;17 because : Not Serving Tablet .  Will retry ...
> {noformat}
>  
> or
> {noformat}
> 20 16:25:44,826 [tserver.TabletServer] INFO : files 
> [hdfs://xxxx:54310/accumulo/tables/1xw/b-00jnmxe/I00jnmxq.rf] not imported to 
> 1xw;03;02: tablet 1xw;03;02 is closed
> {noformat}
> These appear after messages about unloading tablets; it seems that tablets 
> are being redistributed at the same time as the bulk import is occurring.
> Steps to reproduce:
> 1) I run a mapreduce job that produces random data in rfiles
> 2) copy the rfiles to an import directory
> 3) create table (or deleterows -f on an existing table)
> 4) addsplits
> 5) importdirectory
> I have also performed the above entirely within the mapreduce job, with 
> similar results.  The difference with the mapreduce job is that the time 
> between adding splits and calling importDirectory is minutes rather than 
> seconds.
> My current test creates 1,000,000 records, and after importDirectory 
> returns, the row count will be anywhere from ~800,000 to 1,000,000.
> With my original workflow, I found that re-importing the same set of rfiles 
> three times would eventually get all rows loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
