nyl3532016 commented on pull request #3425:
URL: https://github.com/apache/hbase/pull/3425#issuecomment-872368228


   > Now I have another question. Seems we maintain the compacting files in 
compaction server. Keep compacting files is useful if you want to run multiple 
compactions on the same store. But how do we make sure that we always send the 
compaction request of a store to the same compaction server? If we can not 
guarantee, then keeping compacting files in compaction server is useless...
   
   We keep the region-to-CS mapping in Master (CompactionOffloadManager#selectCompactionServer). Currently we just use the hash code of the region start key modulo the compaction server count (_hashcode(region.startkey) % compactionServerList.size()_) as the index to pick the compaction server, and it works well. Maybe later we will need an assignment module for compaction offload.
   And this mapping is not strictly required. If the compaction server for a region changes from CS1 to CS2, file selection on CS2 may pick files already selected on CS1, so one of the two compaction jobs must fail when it reports to the RS (the RS checks whether the selected files still exist). We need to ensure that the cache (compacting files in CompactionServerStorage) is cleared once a compaction job fails or crashes. So keeping compacting files in the compaction server simplifies handling a compaction server crash.
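   A minimal sketch of that hash-based mapping (the method name and generic server type here are illustrative, not the actual CompactionOffloadManager code):

```java
import java.util.Arrays;
import java.util.List;

public class CompactionServerSelector {

    // Pick a compaction server for a region by hashing its start key and
    // taking it modulo the server count, as described above. Math.floorMod
    // is used so a negative hash code cannot produce a negative index.
    static <T> T selectCompactionServer(byte[] regionStartKey, List<T> servers) {
        int hash = Arrays.hashCode(regionStartKey);
        int index = Math.floorMod(hash, servers.size());
        return servers.get(index);
    }
}
```

   The key property is stability: as long as the server list is unchanged, the same region always maps to the same compaction server, so repeated compactions of one store land on one CS.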
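   The bookkeeping above could be sketched like this (class and method names are hypothetical, not the PR's actual CompactionServerStorage API):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CompactingFilesTracker {

    // Per-store set of files currently selected for an in-flight compaction.
    private final Map<String, Set<String>> compactingFiles = new ConcurrentHashMap<>();

    // Record the files selected for a compaction of the given store.
    public void markCompacting(String store, Set<String> files) {
        compactingFiles.computeIfAbsent(store, k -> ConcurrentHashMap.newKeySet())
                       .addAll(files);
    }

    // True if any candidate file is already part of an in-flight compaction;
    // selecting such a file again would make one of the jobs fail at report time.
    public boolean anyCompacting(String store, Set<String> candidates) {
        Set<String> inFlight = compactingFiles.getOrDefault(store, Set.of());
        return candidates.stream().anyMatch(inFlight::contains);
    }

    // Clear the cache when the compaction finishes, fails, or its report is
    // rejected by the RS, so the files become selectable again.
    public void clearCompacting(String store, Set<String> files) {
        Set<String> inFlight = compactingFiles.get(store);
        if (inFlight != null) {
            inFlight.removeAll(files);
        }
    }
}
```

   Clearing on failure (and dropping the whole cache on a CS crash, since it is kept in memory on the compaction server) is what keeps stale selections from blocking future compactions.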


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
