[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad
[ https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15171329#comment-15171329 ]

sunhaitao commented on HBASE-15291:
-----------------------------------

Hi Ted Yu, the patch I submitted has been verified; it removes the cached FileSystem object for the user who initiates the bulk load.

> FileSystem not closed in secure bulkLoad
> ----------------------------------------
>
>                 Key: HBASE-15291
>                 URL: https://issues.apache.org/jira/browse/HBASE-15291
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.0.2, 0.98.16.1
>            Reporter: Yong Zhang
>            Assignee: Yong Zhang
>             Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18
>
>         Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, HBASE-15291.addendum, patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulk load finishes; this causes memory usage to grow steadily if there are too many bulk loads.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HBASE-15291) FileSystem not closed in secure bulkLoad
[ https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sunhaitao updated HBASE-15291: -- Attachment: patch > FileSystem not closed in secure bulkLoad > > > Key: HBASE-15291 > URL: https://issues.apache.org/jira/browse/HBASE-15291 > Project: HBase > Issue Type: Bug >Reporter: Yong Zhang >Assignee: sunhaitao > Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18 > > Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, patch > > > FileSystem not closed in secure bulkLoad after bulkLoad finish, it will > cause memory used more and more if too many bulkLoad . -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-15291) FileSystem not closed in secure bulkLoad
[ https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sunhaitao reassigned HBASE-15291: - Assignee: sunhaitao (was: Yong Zhang) > FileSystem not closed in secure bulkLoad > > > Key: HBASE-15291 > URL: https://issues.apache.org/jira/browse/HBASE-15291 > Project: HBase > Issue Type: Bug >Reporter: Yong Zhang >Assignee: sunhaitao > Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18 > > Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch > > > FileSystem not closed in secure bulkLoad after bulkLoad finish, it will > cause memory used more and more if too many bulkLoad . -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad
[ https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170462#comment-15170462 ]

sunhaitao commented on HBASE-15291:
-----------------------------------

Hi Yong Zhang, this method is executed inside ugi.doAs, which means UserGroupInformation.getCurrentUser().equals(ugi) will always be true.

> FileSystem not closed in secure bulkLoad
> ----------------------------------------
>
>                 Key: HBASE-15291
>                 URL: https://issues.apache.org/jira/browse/HBASE-15291
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Yong Zhang
>            Assignee: Yong Zhang
>             Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18
>
>         Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulk load finishes; this causes memory usage to grow steadily if there are too many bulk loads.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
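[Editor's note] The leak discussed in HBASE-15291 comes from Hadoop's per-user FileSystem cache: each secure bulk load runs under its own proxy UGI, and the FileSystem instance created inside `doAs` stays cached unless it is explicitly closed for that user. The sketch below is a minimal, self-contained model of that pattern; `FileSystemCache` and `closeAllForUser` are simplified stand-ins for Hadoop's internal cache and `FileSystem.closeAllForUGI`, not the actual HBase patch.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of Hadoop's per-user FileSystem cache. Real Hadoop keys the
// cache by (scheme, authority, UserGroupInformation), so every new proxy user
// adds an entry that is never evicted unless explicitly closed.
class FileSystemCache {
    private final Map<String, Object> cache = new HashMap<>();

    // One cached instance per user; a freshly created bulk-load UGI always
    // misses and allocates a new entry.
    Object get(String user) {
        return cache.computeIfAbsent(user, u -> new Object());
    }

    // Analogue of FileSystem.closeAllForUGI(ugi): drop the entry when the
    // bulk load for this user finishes, so the cache cannot grow unbounded.
    void closeAllForUser(String user) {
        cache.remove(user);
    }

    int size() {
        return cache.size();
    }
}

public class SecureBulkLoadLeakSketch {
    public static void main(String[] args) {
        // Without cleanup: 100 bulk loads by 100 distinct users leave 100 entries.
        FileSystemCache leaky = new FileSystemCache();
        for (int i = 0; i < 100; i++) {
            leaky.get("bulkload-user-" + i);
        }
        System.out.println("entries without cleanup = " + leaky.size());

        // With cleanup after each load, the cache stays empty.
        FileSystemCache cleaned = new FileSystemCache();
        for (int i = 0; i < 100; i++) {
            String user = "bulkload-user-" + i;
            cleaned.get(user);
            cleaned.closeAllForUser(user);
        }
        System.out.println("entries with cleanup = " + cleaned.size());
    }
}
```

This also illustrates why the `UserGroupInformation.getCurrentUser().equals(ugi)` check discussed above is moot inside `doAs`: the cache entry is keyed by the bulk-load user either way, so the cleanup must target that user explicitly.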
[jira] [Resolved] (HBASE-14396) audit log should record the operation object
[ https://issues.apache.org/jira/browse/HBASE-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sunhaitao resolved HBASE-14396. --- Resolution: Fixed > audit log should record the operation object > - > > Key: HBASE-14396 > URL: https://issues.apache.org/jira/browse/HBASE-14396 > Project: HBase > Issue Type: Bug >Reporter: sunhaitao > > Currently the hbase audit log only records the user and scope,we can't know > which table the user is operating on unless the scope is table. > It would be better to know what's going on if we record the exact table and > column family we are operating on besides the scope. > String logMessage = > "Access " + (result.isAllowed() ? "allowed" : "denied") + " for user " > + (result.getUser() != null ? result.getUser().getShortName() : > "UNKNOWN") > + "; reason: " + result.getReason() + "; remote address: " > + (remoteAddr != null ? remoteAddr : "") + "; request: " + > result.getRequest() > + "; context: " + result.toContextString(); -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14396) audit log should record the operation object
[ https://issues.apache.org/jira/browse/HBASE-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14740366#comment-14740366 ]

sunhaitao commented on HBASE-14396:
-----------------------------------

I see it, thanks.

> audit log should record the operation object
> ---------------------------------------------
>
>                 Key: HBASE-14396
>                 URL: https://issues.apache.org/jira/browse/HBASE-14396
>             Project: HBase
>          Issue Type: Bug
>            Reporter: sunhaitao
>
> Currently the HBase audit log only records the user and the scope; we can't know which table the user is operating on unless the scope is a table. It would be better to record the exact table and column family being operated on in addition to the scope.
> {code}
> String logMessage =
>     "Access " + (result.isAllowed() ? "allowed" : "denied") + " for user "
>     + (result.getUser() != null ? result.getUser().getShortName() : "UNKNOWN")
>     + "; reason: " + result.getReason() + "; remote address: "
>     + (remoteAddr != null ? remoteAddr : "") + "; request: " + result.getRequest()
>     + "; context: " + result.toContextString();
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Reopened] (HBASE-14396) audit log should record the operation object
[ https://issues.apache.org/jira/browse/HBASE-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sunhaitao reopened HBASE-14396: --- > audit log should record the operation object > - > > Key: HBASE-14396 > URL: https://issues.apache.org/jira/browse/HBASE-14396 > Project: HBase > Issue Type: Bug >Reporter: sunhaitao > > Currently the hbase audit log only records the user and scope,we can't know > which table the user is operating on unless the scope is table. > It would be better to know what's going on if we record the exact table and > column family we are operating on besides the scope. > String logMessage = > "Access " + (result.isAllowed() ? "allowed" : "denied") + " for user " > + (result.getUser() != null ? result.getUser().getShortName() : > "UNKNOWN") > + "; reason: " + result.getReason() + "; remote address: " > + (remoteAddr != null ? remoteAddr : "") + "; request: " + > result.getRequest() > + "; context: " + result.toContextString(); -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14396) audit log should record the operation object
[ https://issues.apache.org/jira/browse/HBASE-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739955#comment-14739955 ]

sunhaitao commented on HBASE-14396:
-----------------------------------

Yes, what it logs in the context is the scope, not the operation target. For example, when I create a table t1, the scope logged is the namespace it belongs to, not the table itself. From the audit log we can't see which table we are creating.

> audit log should record the operation object
> ---------------------------------------------
>
>                 Key: HBASE-14396
>                 URL: https://issues.apache.org/jira/browse/HBASE-14396
>             Project: HBase
>          Issue Type: Bug
>            Reporter: sunhaitao
>
> Currently the HBase audit log only records the user and the scope; we can't know which table the user is operating on unless the scope is a table. It would be better to record the exact table and column family being operated on in addition to the scope.
> {code}
> String logMessage =
>     "Access " + (result.isAllowed() ? "allowed" : "denied") + " for user "
>     + (result.getUser() != null ? result.getUser().getShortName() : "UNKNOWN")
>     + "; reason: " + result.getReason() + "; remote address: "
>     + (remoteAddr != null ? remoteAddr : "") + "; request: " + result.getRequest()
>     + "; context: " + result.toContextString();
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HBASE-14396) audit log should record the operation object
sunhaitao created HBASE-14396:
---------------------------------

             Summary: audit log should record the operation object
                 Key: HBASE-14396
                 URL: https://issues.apache.org/jira/browse/HBASE-14396
             Project: HBase
          Issue Type: Bug
            Reporter: sunhaitao


Currently the HBase audit log only records the user and the scope; we can't know which table the user is operating on unless the scope is a table. It would be better to record the exact table and column family being operated on in addition to the scope.

{code}
String logMessage =
    "Access " + (result.isAllowed() ? "allowed" : "denied") + " for user "
    + (result.getUser() != null ? result.getUser().getShortName() : "UNKNOWN")
    + "; reason: " + result.getReason() + "; remote address: "
    + (remoteAddr != null ? remoteAddr : "") + "; request: " + result.getRequest()
    + "; context: " + result.toContextString();
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
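[Editor's note] The concatenation in the description above can be exercised in isolation. In the sketch below, plain parameters stand in for the `AuthResult` getters (`result.isAllowed()`, `result.getUser()`, and so on), and the trailing `table`/`family` fields are the proposed addition from this issue, not current HBase code; both are assumptions for illustration.

```java
public class AuditLogSketch {
    // Mirrors the logMessage concatenation from the issue description, with the
    // proposed table and column-family fields appended at the end.
    static String logMessage(boolean allowed, String user, String reason,
                             String remoteAddr, String request, String context,
                             String table, String family) {
        return "Access " + (allowed ? "allowed" : "denied") + " for user "
            + (user != null ? user : "UNKNOWN")
            + "; reason: " + reason + "; remote address: "
            + (remoteAddr != null ? remoteAddr : "") + "; request: " + request
            + "; context: " + context
            + "; table: " + table + "; family: " + family;
    }

    public static void main(String[] args) {
        // With the extra fields, a "create table" denial names the table itself
        // instead of only the namespace-level scope.
        System.out.println(logMessage(false, "alice", "Insufficient permissions",
            "10.0.0.5", "createTable", "(ns=default)", "t1", "mobcf"));
    }
}
```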
[jira] [Commented] (HBASE-14170) [HBase Rest] RESTServer is not shutting down if "hbase.rest.port" Address already in use.
[ https://issues.apache.org/jira/browse/HBASE-14170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701141#comment-14701141 ]

sunhaitao commented on HBASE-14170:
-----------------------------------

The hang comes from the Jetty server: Jetty is not handling the port-already-bound exception.

> [HBase Rest] RESTServer is not shutting down if "hbase.rest.port" Address already in use.
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-14170
>                 URL: https://issues.apache.org/jira/browse/HBASE-14170
>             Project: HBase
>          Issue Type: Bug
>          Components: REST
>            Reporter: Y. SREENIVASULU REDDY
>             Fix For: 2.0.0, 1.2.0, 1.0.3
>
>
> [HBase Rest] RESTServer is not shutting down if the "hbase.rest.port" address is already in use.
> If the "hbase.rest.port" address is already in use, RESTServer should shut down. Without this port we can't perform any operations on the RESTServer, so there is no point in keeping the RESTServer process running.
> {code}
> 2015-07-30 11:49:48,273 WARN [main] mortbay.log: failed SelectChannelConnector@0.0.0.0:8080: java.net.BindException: Address already in use
> 2015-07-30 11:49:48,274 WARN [main] mortbay.log: failed Server@563f38c4: java.net.BindException: Address already in use
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
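[Editor's note] The failure mode above (Jetty logs the BindException but the half-initialized process stays up) can be avoided by treating a failed bind as fatal at startup. The sketch below uses a plain `ServerSocket` rather than Jetty to demonstrate the check; the method names and the shutdown policy are illustrative assumptions, not the actual RESTServer fix.

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindOrDieSketch {
    // Try to bind the port; return the bound socket, or null if the port is
    // already in use. A real server would turn null into a non-zero exit
    // instead of continuing half-initialized.
    static ServerSocket tryBind(int port) throws IOException {
        ServerSocket socket = new ServerSocket();
        try {
            socket.bind(new InetSocketAddress(port));
            return socket;
        } catch (BindException e) {
            socket.close();
            return null;
        }
    }

    public static void main(String[] args) throws IOException {
        ServerSocket first = tryBind(0); // port 0 = pick any free port
        int port = first.getLocalPort();
        ServerSocket second = tryBind(port); // same port: must fail
        System.out.println("second bind succeeded = " + (second != null));
        if (second == null) {
            // This is the branch where a REST server should shut down rather
            // than keep running without its service port.
            System.out.println("port " + port + " already in use; shutting down");
        }
        first.close();
    }
}
```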
[jira] [Updated] (HBASE-13858) RS/MasterDumpServlet dumps threads before its “Stacks” header
[ https://issues.apache.org/jira/browse/HBASE-13858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sunhaitao updated HBASE-13858: -- Assignee: (was: sunhaitao) > RS/MasterDumpServlet dumps threads before its “Stacks” header > - > > Key: HBASE-13858 > URL: https://issues.apache.org/jira/browse/HBASE-13858 > Project: HBase > Issue Type: Bug > Components: master, regionserver, UI >Affects Versions: 1.1.0 >Reporter: Lars George >Priority: Trivial > Labels: beginner > Fix For: 2.0.0, 1.3.0 > > > The stacktraces are captured using a Hadoop helper method, then its output is > merged with the current. I presume there is a simple flush after outputing > the "Stack" header missing, before then the caught output is dumped. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-13858) RS/MasterDumpServlet dumps threads before its “Stacks” header
[ https://issues.apache.org/jira/browse/HBASE-13858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sunhaitao reassigned HBASE-13858: - Assignee: sunhaitao > RS/MasterDumpServlet dumps threads before its “Stacks” header > - > > Key: HBASE-13858 > URL: https://issues.apache.org/jira/browse/HBASE-13858 > Project: HBase > Issue Type: Bug > Components: master, regionserver, UI >Affects Versions: 1.1.0 >Reporter: Lars George >Assignee: sunhaitao >Priority: Trivial > Labels: beginner > Fix For: 2.0.0, 1.3.0 > > > The stacktraces are captured using a Hadoop helper method, then its output is > merged with the current. I presume there is a simple flush after outputing > the "Stack" header missing, before then the caught output is dumped. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool is not deleting the expired mob data.
[ https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14543213#comment-14543213 ]

sunhaitao commented on HBASE-13670:
-----------------------------------

But I have some apprehension: a customer who uses the MOB feature may actually have a use case where the TTL is less than one day, and then we can only say sorry, we don't support this. For your point 2, setting it to a timestamp instead of a date will cut the name length by 18 bytes; can you illustrate the benefit? My understanding is that it only saves some memory and will not benefit performance much. To make MOB more adaptive to customers' requirements, I prefer keeping it as a timestamp.

> [HBase MOB] ExpiredMobFileCleaner tool is not deleting the expired mob data.
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-13670
>                 URL: https://issues.apache.org/jira/browse/HBASE-13670
>             Project: HBase
>          Issue Type: Bug
>          Components: mob
>    Affects Versions: hbase-11339
>            Reporter: Y. SREENIVASULU REDDY
>             Fix For: hbase-11339
>
>
> ExpiredMobFileCleaner tool is not deleting the expired mob data.
> Steps to reproduce:
> ===================
> 1. Create the table with one column family as MOB and set the TTL for the mob column family very low.
> {code}
> hbase(main):020:0> describe 'mobtab'
> Table mobtab is ENABLED
> mobtab
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'mobcf', IS_MOB => 'true', MOB_THRESHOLD => '102400', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => '60 SECONDS (1 MINUTE)', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
> {NAME => 'norcf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
> 2 row(s) in 0.0650 seconds
> {code}
> 2. Then insert the MOB data into the table (mobcf), and normal data into the other column family (norcf).
> 3. Flush the table.
> 4. Scan the table before the TTL expires (able to fetch the data).
> 5. Scan the table after the TTL has expired; as a result the MOB data should not be displayed, while the MOB file should still exist in HDFS.
> 6. Run the ExpiredMobFileCleaner tool manually to clean the expired MOB data:
> {code}
> ./hbase org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner mobtab mobcf
> {code}
> {code}
> client log message:
> 2015-05-09 18:03:37,731 INFO [main] mob.ExpiredMobFileCleaner: Cleaning the expired MOB files of mobcf in mobtab
> 2015-05-09 18:03:37,734 INFO [main] hfile.CacheConfig: CacheConfig:disabled
> 2015-05-09 18:03:37,738 INFO [main] mob.MobUtils: MOB HFiles older than 8 May 2015 18:30:00 GMT will be deleted!
> 2015-05-09 18:03:37,971 DEBUG [main] mob.MobUtils: Checking file d41d8cd98f00b204e9800998ecf8427e20150509c9108e1a9252418abbfd54323922c518
> 2015-05-09 18:03:37,971 INFO [main] mob.MobUtils: 0 expired mob files are deleted
> 2015-05-09 18:03:37,971 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
> {code}
> *problem:*
> If we run the ExpiredMobFileCleaner tool manually, it is not deleting the expired MOB data. For deletion it is considering the default time period "hbase.master.mob.ttl.cleaner.period", but only the ExpiredMobFileCleanerChore should consider that period.
> {code}
> conf:
> <property>
>   <name>hbase.master.mob.ttl.cleaner.period</name>
>   <value>8640</value>
>   <source>hbase-default.xml</source>
> </property>
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool is not deleting the expired mob data.
[ https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14543165#comment-14543165 ]

sunhaitao commented on HBASE-13670:
-----------------------------------

I agree, +1 on documenting it.

> [HBase MOB] ExpiredMobFileCleaner tool is not deleting the expired mob data.
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-13670
>                 URL: https://issues.apache.org/jira/browse/HBASE-13670
>             Project: HBase
>          Issue Type: Bug
>          Components: mob
>    Affects Versions: hbase-11339
>            Reporter: Y. SREENIVASULU REDDY
>             Fix For: hbase-11339
>
>
> ExpiredMobFileCleaner tool is not deleting the expired mob data.
> Steps to reproduce:
> ===================
> 1. Create the table with one column family as MOB and set the TTL for the mob column family very low.
> {code}
> hbase(main):020:0> describe 'mobtab'
> Table mobtab is ENABLED
> mobtab
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'mobcf', IS_MOB => 'true', MOB_THRESHOLD => '102400', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => '60 SECONDS (1 MINUTE)', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
> {NAME => 'norcf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
> 2 row(s) in 0.0650 seconds
> {code}
> 2. Then insert the MOB data into the table (mobcf), and normal data into the other column family (norcf).
> 3. Flush the table.
> 4. Scan the table before the TTL expires (able to fetch the data).
> 5. Scan the table after the TTL has expired; as a result the MOB data should not be displayed, while the MOB file should still exist in HDFS.
> 6. Run the ExpiredMobFileCleaner tool manually to clean the expired MOB data for the TTL-expired data.
> {code} > ./hbase org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner mobtab mobcf > {code} > {code} > client log_message: > 2015-05-09 18:03:37,731 INFO [main] mob.ExpiredMobFileCleaner: Cleaning the > expired MOB files of mobcf in mobtab > 2015-05-09 18:03:37,734 INFO [main] hfile.CacheConfig: CacheConfig:disabled > 2015-05-09 18:03:37,738 INFO [main] mob.MobUtils: MOB HFiles older than 8 > May 2015 18:30:00 GMT will be deleted! > 2015-05-09 18:03:37,971 DEBUG [main] mob.MobUtils: Checking file > d41d8cd98f00b204e9800998ecf8427e20150509c9108e1a9252418abbfd54323922c518 > 2015-05-09 18:03:37,971 INFO [main] mob.MobUtils: 0 expired mob files are > deleted > 2015-05-09 18:03:37,971 INFO [main] > client.ConnectionManager$HConnectionImplementation: Closing master protocol: > MasterService > {code} > *problem:* > If we run ExpiredMobFileCleaner tool manually, it is not deleting the expired > mob data. For deletion it is considering default time period > "hbase.master.mob.ttl.cleaner.period". > With this Time period "hbase.master.mob.ttl.cleaner.period" only > ExpiredMobFileCleanerChore should consider. > {code} > conf: > > hbase.master.mob.ttl.cleaner.period > 8640 > hbase-default.xml > > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool is not deleting the expired mob data.
[ https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539375#comment-14539375 ]

sunhaitao commented on HBASE-13670:
-----------------------------------

Hi, that's not the case. The reason is that the date in the MOB file name is only accurate to the day. So if you set hbase.master.mob.ttl.cleaner.period to 2 days, delete a MOB column family today, and execute the tool tomorrow, it can be executed successfully. The code below may give you some inspiration.

In MobFileName:
{code}
public static MobFileName create(String fileName) {
  // The format of a file name is md5HexString(0-31 bytes) + date(32-39 bytes) + UUID
  // The date format is yyyyMMdd
  String startKey = fileName.substring(0, 32);
  String date = fileName.substring(32, 40);
  String uuid = fileName.substring(40);
  return new MobFileName(startKey, date, uuid);
}
{code}

This is a sample MOB file name:
{code}
/hbase1/mobdir/data/default/mobcleantest/8986aed7a2762c273c875dfcaa29cfaa/mobcf/d41d8cd98f00b204e9800998ecf8427e2015051198809216cbbb41bd83432d9142f32ed2
{code}
Characters 32 through 39 are 20150511: it is only accurate to the day.

In MobUtils.java:
{code}
if (fileDate.getTime() < expireDate.getTime()) {
  if (LOG.isDebugEnabled()) {
    LOG.debug(fileName + " is an expired file");
  }
  filesToClean.add(new StoreFile(fs, file.getPath(), conf, cacheConfig, BloomType.NONE));
}
{code}
Because the time is only accurate to the day, it is impossible to delete a file within the same day it was written.

> [HBase MOB] ExpiredMobFileCleaner tool is not deleting the expired mob data.
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-13670
>                 URL: https://issues.apache.org/jira/browse/HBASE-13670
>             Project: HBase
>          Issue Type: Bug
>          Components: mob
>    Affects Versions: hbase-11339
>            Reporter: Y. SREENIVASULU REDDY
>             Fix For: hbase-11339
>
>
> ExpiredMobFileCleaner tool is not deleting the expired mob data.
> Steps to reproduce:
> ===================
> 1. Create the table with one column family as MOB and set the TTL for the mob column family very low.
> {code} > hbase(main):020:0> describe 'mobtab' > Table mobtab is ENABLED > mobtab > COLUMN FAMILIES DESCRIPTION > {NAME => 'mobcf', IS_MOB => 'true',MOB_THRESHOLD => '102400', VERSIONS => > '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => '60 > SECONDS (1 MINUTE)', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BL > OOMFILTER => 'ROW', IN_MEMORY => 'false', COMPRESSION => 'NONE', BLOCKCACHE > => 'true', BLOCKSIZE => '65536'} > {NAME => 'norcf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => > 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', > COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BL > OCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} > 2 row(s) in 0.0650 seconds > {code} > 2. then insert the mob data into the table(mobcf), and normal data into the > another columnFamily(norcf). > 3. flush the table. > 4. scan the table before TTL expire. (able to fetch the data) > 5. scan the table after TTL got expired, as a result mob data should not > display, and mob file should exist in hdfs. > 5. run ExpiredMobFileCleaner tool manually to clean the expired mob data for > TTL expired data. > {code} > ./hbase org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner mobtab mobcf > {code} > {code} > client log_message: > 2015-05-09 18:03:37,731 INFO [main] mob.ExpiredMobFileCleaner: Cleaning the > expired MOB files of mobcf in mobtab > 2015-05-09 18:03:37,734 INFO [main] hfile.CacheConfig: CacheConfig:disabled > 2015-05-09 18:03:37,738 INFO [main] mob.MobUtils: MOB HFiles older than 8 > May 2015 18:30:00 GMT will be deleted! 
> 2015-05-09 18:03:37,971 DEBUG [main] mob.MobUtils: Checking file > d41d8cd98f00b204e9800998ecf8427e20150509c9108e1a9252418abbfd54323922c518 > 2015-05-09 18:03:37,971 INFO [main] mob.MobUtils: 0 expired mob files are > deleted > 2015-05-09 18:03:37,971 INFO [main] > client.ConnectionManager$HConnectionImplementation: Closing master protocol: > MasterService > {code} > *problem:* > If we run ExpiredMobFileCleaner tool manually, it is not deleting the expired > mob data. For deletion it is considering default time period > "hbase.master.mob.ttl.cleaner.period". > With this Time period "hbase.master.mob.ttl.cleaner.period" only > ExpiredMobFileCleanerChore should consider. > {code} > conf: > > hbase.master.mob.ttl.cleaner.period > 8640 > hbase-default.xml > > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
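[Editor's note] The date-granularity argument in the comments above is easy to verify by parsing the sample file name. The sketch below reimplements the substring split from MobFileName.create as a simplified stand-in (it is not the HBase class itself) and shows that the embedded date carries no time-of-day component, which is why a sub-day TTL can never be observed by the cleaner on the day the file is written.

```java
public class MobFileNameSketch {
    // Layout from MobFileName.create: md5 hex of the start key (chars 0-31),
    // a yyyyMMdd date (chars 32-39), then a UUID for the remainder.
    static String[] parse(String fileName) {
        String startKeyMd5 = fileName.substring(0, 32);
        String date = fileName.substring(32, 40);
        String uuid = fileName.substring(40);
        return new String[] { startKeyMd5, date, uuid };
    }

    public static void main(String[] args) {
        // The sample file name from the comment, split at the field boundaries.
        String name = "d41d8cd98f00b204e9800998ecf8427e"   // md5 of start key
                    + "20150511"                           // day-granular date
                    + "98809216cbbb41bd83432d9142f32ed2";  // uuid
        String[] parts = parse(name);
        System.out.println("date = " + parts[1]);
        // The date field has no hours/minutes/seconds: every file written on
        // 2015-05-11 looks equally old, so a TTL shorter than one day cannot
        // take effect until the following day.
    }
}
```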
[jira] [Created] (HBASE-13153) enable bulkload to support replication
sunhaitao created HBASE-13153:
---------------------------------

             Summary: enable bulkload to support replication
                 Key: HBASE-13153
                 URL: https://issues.apache.org/jira/browse/HBASE-13153
             Project: HBase
          Issue Type: Bug
          Components: API
            Reporter: sunhaitao


Currently we plan to use the HBase replication feature for a disaster-tolerance scenario, but we have encountered an issue: we use bulk load very frequently, and because bulk load bypasses the write path and generates no WAL, the data is not replicated to the backup cluster. It is inappropriate to bulk load twice, once on the active cluster and once on the backup cluster. So I suggest modifying the bulk load feature so that a bulk load applies to both the active cluster and the backup cluster.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)