[jira] [Commented] (HBASE-10700) IntegrationTestWithCellVisibilityLoadAndVerify should allow current user to be the admin
[ https://issues.apache.org/jira/browse/HBASE-10700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924772#comment-13924772 ] Hudson commented on HBASE-10700: SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #198 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/198/]) HBASE-10700 IntegrationTestWithCellVisibilityLoadAndVerify should allow current user to be the admin (tedyu: rev 1575485) * /hbase/branches/0.98/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestWithCellVisibilityLoadAndVerify.java IntegrationTestWithCellVisibilityLoadAndVerify should allow current user to be the admin Key: HBASE-10700 URL: https://issues.apache.org/jira/browse/HBASE-10700 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.1, 0.99.0 Attachments: 10700-v1.txt, 10700-v2.txt When we ran IntegrationTestWithCellVisibilityLoadAndVerify on secure cluster, we observed: {code} 2014-03-06 05:00:29,210|beaver.machine|INFO|2014-03-06 05:00:29,209 WARN [main] security.UserGroupInformation: PriviledgedActionException as:admin (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] 2014-03-06 05:00:29,211|beaver.machine|INFO|2014-03-06 05:00:29,211 WARN [main] ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] 2014-03-06 05:00:29,211|beaver.machine|INFO|2014-03-06 05:00:29,211 WARN [main] security.UserGroupInformation: PriviledgedActionException as:admin (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] 2014-03-06 05:00:29,214|beaver.machine|INFO|2014-03-06 05:00:29,214 WARN [main] security.UserGroupInformation: PriviledgedActionException as:admin (auth:SIMPLE) cause:java.io.IOException: Failed on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: 17n27.net/206.190.52.46; destination host is: 17n26.net:8020; {code} This was due to the test using admin as the super user, but this user wasn't set up on the secure cluster. The user hbase had been set up and was used to run the test. The test should use the current user as the admin, if it is different from admin. -- This message was sent by Atlassian JIRA (v6.2#6252)
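The fix direction described above (prefer the user the test actually runs as over a hard-coded "admin") can be sketched roughly as below. This is an illustrative helper, not the committed patch; in the real test the current user would come from Hadoop's UserGroupInformation rather than a system property.

```java
// Illustrative sketch: pick the admin user for the integration test.
// On a secure cluster only the kerberized current user is guaranteed to be
// set up, so fall back to it whenever it differs from the configured name.
public class AdminUserChooser {
    /** Returns the configured admin if it matches the current user, else the current user. */
    public static String chooseAdminUser(String configuredAdmin, String currentUser) {
        if (currentUser != null && !currentUser.equals(configuredAdmin)) {
            return currentUser;
        }
        return configuredAdmin;
    }

    public static void main(String[] args) {
        // In the real test this would be UserGroupInformation.getCurrentUser().getShortUserName().
        String current = System.getProperty("user.name");
        System.out.println("admin user: " + chooseAdminUser("admin", current));
    }
}
```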
[jira] [Commented] (HBASE-10690) Drop Hadoop-1 support
[ https://issues.apache.org/jira/browse/HBASE-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924777#comment-13924777 ] Andrew Purtell commented on HBASE-10690: It would be great to release just one artifact again. I'd like to see us build each release against the latest Hadoop release available during the commit train up to the release point. We should not hesitate to use new HDFS performance features as they come in, and use reflection/compat modules/Maven profiles that give users or vendors backwards compatible build options against a few earlier versions. Drop Hadoop-1 support - Key: HBASE-10690 URL: https://issues.apache.org/jira/browse/HBASE-10690 Project: HBase Issue Type: Improvement Reporter: Enis Soztutar Priority: Critical Fix For: 0.99.0 As per thread: http://mail-archives.apache.org/mod_mbox/hbase-dev/201403.mbox/%3ccamuu0w93mgp7zbbxgccov+be3etmkvn5atzowvzqd_gegdk...@mail.gmail.com%3E It seems that the consensus is that supporting Hadoop-1 in HBase-1.x will be costly, so we should drop the support. In this issue: - We'll document that Hadoop-1 support is deprecated in HBase-0.98. And users should switch to hadoop-2.2+ anyway. - Document that upcoming HBase-0.99 and HBase-1.0 releases will not have Hadoop-1 support. - Document that there is no rolling upgrade support for going between Hadoop-1 and Hadoop-2 (using HBase-0.96 or 0.98). - Release artifacts won't contain HBase build with Hadoop-1. - We may keep the profile, jenkins job etc if we want. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10651) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication
[ https://issues.apache.org/jira/browse/HBASE-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924782#comment-13924782 ] Feng Honghua commented on HBASE-10651: -- Ping for review, thanks :-) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication -- Key: HBASE-10651 URL: https://issues.apache.org/jira/browse/HBASE-10651 Project: HBase Issue Type: Sub-task Components: regionserver, Replication Reporter: Feng Honghua Assignee: Feng Honghua Attachments: HBASE-10651-trunk_v1.patch, HBASE-10651-trunk_v2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10595) HBaseAdmin.getTableDescriptor can wrongly get the previous table's TableDescriptor even after the table dir in hdfs is removed
[ https://issues.apache.org/jira/browse/HBASE-10595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924783#comment-13924783 ] Feng Honghua commented on HBASE-10595: -- Ping for review or further comment, [~enis] / [~v.himanshu] ? Thanks. :-) HBaseAdmin.getTableDescriptor can wrongly get the previous table's TableDescriptor even after the table dir in hdfs is removed -- Key: HBASE-10595 URL: https://issues.apache.org/jira/browse/HBASE-10595 Project: HBase Issue Type: Sub-task Components: master, util Reporter: Feng Honghua Assignee: Feng Honghua Attachments: HBASE-10595-trunk_v1.patch, HBASE-10595-trunk_v2.patch, HBASE-10595-trunk_v3.patch, HBASE-10595-trunk_v4.patch When a table dir (in hdfs) is removed from outside, HMaster will still return the cached TableDescriptor to the client for a getTableDescriptor request. By contrast, HBaseAdmin.listTables() is handled correctly in the current implementation: for a table whose table dir in hdfs has been removed from outside, getTableDescriptor can still retrieve a valid (old) table descriptor while listTables says the table doesn't exist, which is inconsistent. The reason for this bug is that HMaster (via FSTableDescriptors) doesn't check whether the table dir exists for a getTableDescriptor() request (while for a listTables() request it lists all existing table dirs, without first consulting the cache, and returns accordingly). When a table is deleted via deleteTable, the cache is cleared after the table dir and tableInfo file are removed, so the listTables/getTableDescriptor inconsistency should be transient (though it still exists in the window where the table dir is removed but the cache is not yet cleared) and harder to expose. -- This message was sent by Atlassian JIRA (v6.2#6252)
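The inconsistency described above comes down to a cache that is consulted without verifying its backing store. A toy model of the fix (not FSTableDescriptors itself; the existence check is abstracted into a predicate standing in for a FileSystem.exists() call) might look like:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Toy model: before serving a cached descriptor, verify the table's dir still
// exists, so getTableDescriptor agrees with listTables after out-of-band removal.
public class CheckedDescriptorCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Predicate<String> tableDirExists; // stands in for FileSystem.exists()

    public CheckedDescriptorCache(Predicate<String> tableDirExists) {
        this.tableDirExists = tableDirExists;
    }

    public void put(String table, String descriptor) {
        cache.put(table, descriptor);
    }

    /** Returns null if the table dir was removed out-of-band, dropping the stale entry. */
    public String get(String table) {
        if (!tableDirExists.test(table)) {
            cache.remove(table);
            return null;
        }
        return cache.get(table);
    }
}
```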
[jira] [Updated] (HBASE-10662) RegionScanner is never closed if the region has been moved-out or re-opened when performing scan request
[ https://issues.apache.org/jira/browse/HBASE-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Feng Honghua updated HBASE-10662: - Attachment: HBASE-10662-0.96_v1.patch HBASE-10662-0.94_v1.patch [~lhofhansl] : Patches for 0.94 and 0.96 attached. Tests passed in my local run. RegionScanner is never closed if the region has been moved-out or re-opened when performing scan request Key: HBASE-10662 URL: https://issues.apache.org/jira/browse/HBASE-10662 Project: HBase Issue Type: Bug Components: regionserver Reporter: Feng Honghua Assignee: Feng Honghua Fix For: 0.98.1, 0.99.0 Attachments: HBASE-10662-0.94_v1.patch, HBASE-10662-0.96_v1.patch, HBASE-10662-trunk_v1.patch While a regionserver processes a scan request from a client, it fails the request by throwing a wrapped NotServingRegionException to the client if it finds that the region related to the passed-in scanner-id has been re-opened, and it also removes the RegionScannerHolder from the scanners map. In fact, in this case the old and invalid RegionScanner related to the passed-in scanner-id should be closed, and the related lease cancelled, at the same time. Currently a region's scanners aren't closed when the region is closed; a region scanner is closed only when requested explicitly by the client or on expiration of the related lease, so in this sense the closing of region scanners is quite passive and lagging. When a regionserver processes a scan request from a client and can't find an online region corresponding to the passed-in scanner-id (because the region was moved out), or finds the region has been re-opened, it throws NotServingRegionException and removes the corresponding RegionScannerHolder from the scanners map without closing the related region scanner (or cancelling the related lease); but when the lease expires, the related region scanner still isn't closed, since it is no longer present in the scanners map. -- This message was sent by Atlassian JIRA (v6.2#6252)
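The leak described above is the classic "remove without close" pattern. A minimal sketch of the fix (a toy registry, not the actual RegionScannerHolder code; the Runnable stands in for RegionScanner.close() plus lease cancellation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the fix: on the NotServingRegionException path, do not just
// remove the holder from the scanners map -- also run its close action, instead
// of leaving cleanup to a lease expiry that can no longer find the entry.
public class ScannerRegistry {
    // scannerId -> close action (stands in for scanner.close() + cancel lease)
    private final Map<Long, Runnable> scanners = new ConcurrentHashMap<>();

    public void register(long scannerId, Runnable closeAction) {
        scanners.put(scannerId, closeAction);
    }

    /** Removes AND closes; returns true if a scanner was found under this id. */
    public boolean removeAndClose(long scannerId) {
        Runnable close = scanners.remove(scannerId);
        if (close == null) {
            return false; // already gone; nothing left to leak
        }
        close.run(); // previously skipped, leaving the scanner open forever
        return true;
    }
}
```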
[jira] [Commented] (HBASE-10702) HBase fails to respect Deletes
[ https://issues.apache.org/jira/browse/HBASE-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924826#comment-13924826 ] Jean-Marc Spaggiari commented on HBASE-10702: - Again, makes sense ;) HBase fails to respect Deletes -- Key: HBASE-10702 URL: https://issues.apache.org/jira/browse/HBASE-10702 Project: HBase Issue Type: Bug Affects Versions: 0.94.2, 0.94.15, 0.94.17 Reporter: Jean-Marc Spaggiari Priority: Critical One of our users contacted me about an issue with Deletes. Some of the deletes they do are not fully processed. Therefore, after the Delete, if they do a Get, from time to time the Get returns the row when it should have been deleted and should have returned nothing. After multiple Deletes, the row is finally deleted. If we don't retry after the 1st attempt, the row stays there, even after a flush, a major_compact, etc. I have been able to reproduce the issue in 0.94.2 (CDH4.2.0 EC2), 0.94.15 (CDH4.6.0 EC2) and 0.94.17 (Apache version, bare metal). Here is a simple output from my test app. 1736509 Doing a delete for 099676 failed. Start to count puts=311 deletes=64 retries=2 2281712 Doing a delete for 027606 failed. Start to count puts=3679 deletes=247 retries=2 2388305 Doing a delete for 018306 failed. Start to count puts=4744 deletes=290 retries=2 2532943 Doing a delete for 030446 failed. Start to count puts=5678 deletes=337 retries=2 2551421 Doing a delete for 046304 failed. Start to count puts=5845 deletes=345 retries=2 2561099 Doing a delete for 019619 failed. Start to count puts=5869 deletes=347 retries=3 The first field is the time in ms since the test started, so the first error occurs after about 30 minutes. Next are the number of puts and deletes done, and the number of retries required to get the value deleted. The key is a random number between 00 and 10. Very simple test, just doing more puts than deletes.
Tests have been running on 0.96.1.1 for almost 1h now, so it seems to be fine, but it's not on the same cluster, so I will keep that running for hours/days first. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8604) improve reporting of incorrect peer address in replication
[ https://issues.apache.org/jira/browse/HBASE-8604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924836#comment-13924836 ] Hudson commented on HBASE-8604: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #229 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/229/]) HBASE-8604 improve reporting of incorrect peer address in replication (stack: rev 1575461) * /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java improve reporting of incorrect peer address in replication -- Key: HBASE-8604 URL: https://issues.apache.org/jira/browse/HBASE-8604 Project: HBase Issue Type: Improvement Components: Replication Affects Versions: 0.94.6 Reporter: Roman Shaposhnik Assignee: Rekha Joshi Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.19 Attachments: HBASE-8604.1.patch I was running some replication code that incorrectly advertised the peer address for replication. HBase complained that the format of the record was NOT what it was expecting but it didn't include what it saw in the exception message. Including that string would help cutting down the time it takes to debug issues like that. -- This message was sent by Atlassian JIRA (v6.2#6252)
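The improvement above boils down to quoting the offending input in the exception message so operators can see what was actually found. A sketch of that pattern (names and the host:port:zkPath shape are illustrative, not the actual ZKUtil code):

```java
// Illustrative sketch of HBASE-8604's point: when a peer address fails to
// parse, include the bad string in the exception instead of only saying the
// format was wrong.
public class PeerAddressParser {
    /** Expects "host:port:zkPath"; on failure the message quotes the bad input. */
    public static String[] parsePeerAddress(String ensemble) {
        String[] parts = ensemble == null ? new String[0] : ensemble.split(":");
        if (parts.length != 3) {
            throw new IllegalArgumentException(
                "Invalid peer address, expected host:port:zkPath but got '" + ensemble + "'");
        }
        return parts;
    }
}
```

With the offending string in the message, a misconfigured peer shows up immediately in the logs instead of requiring a debugger to find out what was actually advertised.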
[jira] [Updated] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-8304: -- Attachment: 8304-v4.patch Patch v4 addresses Andrew's comment. Also corrected some spelling mistakes. Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port. --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924925#comment-13924925 ] haosdent commented on HBASE-8304: - Thank you very much for all the useful advice. [~jerryhe] {quote} source uri: webhdfs://myhost1:14000/ {quote} Because of getCanonicalServiceName, I think a DistributedFileSystem would return hdfs://*** or ha-hdfs://***. {quote} //For example, srcFs is ha-hdfs://nameservices and desFs is hdfs://activeNamenode:port If the desFs is HA enabled, then you will get the ''ha-hdfs:// format, right? If it returns hdfs://, does it already tell you they are different FS? {quote} Because sometimes a user may run hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://activeNamenode:port/family to complete a bulkload, we would get a desFs whose canonicalServiceName is hdfs://activeNamenode:port here. Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port. --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
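The "check for default port" idea floated in the description can be sketched as below. This is an illustrative comparison on plain java.net.URI, not the actual patch (which works on Hadoop FileSystem objects); the 8020 default-port constant is an assumption for the HDFS scheme.

```java
import java.net.URI;

// Sketch: treat hdfs://host and hdfs://host:defaultPort as the same filesystem
// by filling in the scheme's default port before comparing.
public class FsUriComparator {
    static final int HDFS_DEFAULT_PORT = 8020; // assumed default for hdfs://

    static int portOf(URI uri) {
        // URI.getPort() returns -1 when no port was given
        return uri.getPort() == -1 ? HDFS_DEFAULT_PORT : uri.getPort();
    }

    /** True if scheme, host and (defaulted) port all match. */
    public static boolean sameFileSystem(URI src, URI dest) {
        return src.getScheme().equalsIgnoreCase(dest.getScheme())
            && src.getHost().equalsIgnoreCase(dest.getHost())
            && portOf(src) == portOf(dest);
    }
}
```

Under this comparison, hdfs://ip and hdfs://ip:8020 are the same filesystem and the rename path is taken instead of copy.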
[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924926#comment-13924926 ] haosdent commented on HBASE-8304: - [~andrew.purt...@gmail.com] {quote} use different IP addresses in the test here {quote} 127.0.1.1 and 127.0.0.1 are different here; maybe I used confusing IPs here. Thanks for the update from [~te...@apache.org]. Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port. --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-8076) add better doc for HBaseAdmin#offline API.
[ https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rajeshbabu updated HBASE-8076: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to 0.96, 0.98 and trunk. Thanks for the review, Stack. add better doc for HBaseAdmin#offline API. -- Key: HBASE-8076 URL: https://issues.apache.org/jira/browse/HBASE-8076 Project: HBase Issue Type: Improvement Components: Admin Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-8076.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8304) Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924953#comment-13924953 ] Hadoop QA commented on HBASE-8304: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12633552/8304-v4.patch against trunk revision . ATTACHMENT ID: 12633552 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8931//console This message is automatically generated. Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port. 
--- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-8304) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-8304: -- Summary: Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port (was: Bulkload fail to remove files if fs.default.name / fs.defaultFS is configured without default port.) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-8304) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-8304: -- Fix Version/s: 0.99.0 0.98.1 Hadoop Flags: Reviewed [~haosd...@gmail.com]: Mind attaching patches for 0.96 and 0.94? Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Fix For: 0.98.1, 0.99.0 Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10651) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication
[ https://issues.apache.org/jira/browse/HBASE-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-10651: -- Resolution: Fixed Fix Version/s: 0.99.0 0.98.1 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk and to 0.98 (operability/liveness). Thanks for the patch, Honghua. It doesn't apply to 0.96, which seems to do this stuff a little differently, so I just let that pass. Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication -- Key: HBASE-10651 URL: https://issues.apache.org/jira/browse/HBASE-10651 Project: HBase Issue Type: Sub-task Components: regionserver, Replication Reporter: Feng Honghua Assignee: Feng Honghua Fix For: 0.98.1, 0.99.0 Attachments: HBASE-10651-trunk_v1.patch, HBASE-10651-trunk_v2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8076) add better doc for HBaseAdmin#offline API.
[ https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924999#comment-13924999 ] Hudson commented on HBASE-8076: --- FAILURE: Integrated in HBase-TRUNK #4992 (See [https://builds.apache.org/job/HBase-TRUNK/4992/]) HBASE-8076 add better doc for HBaseAdmin#offline API.(Rajesh) (rajeshbabu: rev 1575580) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java add better doc for HBaseAdmin#offline API. -- Key: HBASE-8076 URL: https://issues.apache.org/jira/browse/HBASE-8076 Project: HBase Issue Type: Improvement Components: Admin Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-8076.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10648) Pluggable Memstore
[ https://issues.apache.org/jira/browse/HBASE-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925003#comment-13925003 ] stack commented on HBASE-10648: --- All of the above is good by me Anoop, including the bit about more than one snapshot. Yeah, the current implementation only allows one. I was thinking of an implementation that might carry N ... but this fact can be contained inside the actual implementation. It does not need to escape. In fact, it could be argued that this 'snapshotting' is an implementation detail of the default memstore and it should really be more generic... prepareForFlush instead of snapshot but we can do this in later issues. The important thing is to make what we have pluggable. When you start on the second implementation, you will have a better idea of what changes are needed in the Interface. Pluggable Memstore -- Key: HBASE-10648 URL: https://issues.apache.org/jira/browse/HBASE-10648 Project: HBase Issue Type: Sub-task Reporter: Anoop Sam John Assignee: Anoop Sam John Attachments: HBASE-10648.patch, HBASE-10648_V2.patch, HBASE-10648_V3.patch Make Memstore into an interface, and make it pluggable by configuring the FQCN of the impl. This will allow us to have different impls and optimizations in the Memstore data structure while leaving the upper layers untouched. -- This message was sent by Atlassian JIRA (v6.2#6252)
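The "pluggable by configuring the FQCN" mechanism described in this issue can be sketched as below. The interface and method names are illustrative, not the patch's actual API; in HBase the class would come from a Configuration key rather than a raw string.

```java
// Illustrative sketch of FQCN-based pluggability: load whatever implementation
// the configured class name points at, as HBase does with conf.getClass().
public class MemstorePlugin {
    /** Hypothetical minimal memstore interface (not the patch's actual one). */
    public interface Memstore {
        void add(String cell);
        int size();
    }

    /** Default impl; alternatives can be swapped in purely via configuration. */
    public static class DefaultMemstore implements Memstore {
        private final java.util.List<String> cells = new java.util.ArrayList<>();
        public void add(String cell) { cells.add(cell); }
        public int size() { return cells.size(); }
    }

    /** Instantiates the impl named by the fully-qualified class name. */
    public static Memstore create(String fqcn) {
        try {
            return (Memstore) Class.forName(fqcn).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot load memstore class: " + fqcn, e);
        }
    }
}
```

The upper layers only see the interface, so a different data structure (skip list, flat segments, etc.) drops in without touching them.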
[jira] [Commented] (HBASE-10679) Both clients get wrong scan results if the first scanner expires and the second scanner is created with the same scannerId on the same region
[ https://issues.apache.org/jira/browse/HBASE-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925006#comment-13925006 ] stack commented on HBASE-10679: --- Point taken on item #2 [~fenghh] Both clients get wrong scan results if the first scanner expires and the second scanner is created with the same scannerId on the same region - Key: HBASE-10679 URL: https://issues.apache.org/jira/browse/HBASE-10679 Project: HBase Issue Type: Bug Components: regionserver Reporter: Feng Honghua Assignee: Feng Honghua Priority: Critical Attachments: HBASE-10679-trunk_v1.patch, HBASE-10679-trunk_v2.patch, HBASE-10679-trunk_v2.patch The scenario is as below (both Client A and Client B scan against Region R): # A opens a scanner SA on R; the scannerId is N. It successfully gets its first row a. # SA's lease expires and it's removed from scanners. # B opens a scanner SB on R; the scannerId is N too. It successfully gets its first row m. # A issues its second scan request with scannerId N. The regionserver finds N is a valid scannerId and the region matches too (since the region is always online on this regionserver and both scanners are against it), so it executes the scan request on SB and returns n to A -- wrong! (A gets data from the other client's scanner; it expects a row such as b that follows a.) # B issues its second scan request with scannerId N. The regionserver also thinks it's valid and executes the scan on SB, returning o to B -- wrong! (It should return n, but n has just been scanned out by A.) The consequence is that both clients get wrong scan results: # A gets data from a scanner created by another client; its own scanner has expired and been removed. # B misses data it should have gotten, which was wrongly scanned out by A. The root cause is that a scannerId generated by a regionserver can't be guaranteed unique within the regionserver's whole lifecycle; *there is only a guarantee that the scannerIds of scanners that are currently still valid (not expired) are unique*, so the same scannerId can appear in scanners again after a former scanner with that scannerId expires and has been removed from scanners. And if the second scanner is against the same region, the bug arises. Theoretically, the possibility of the above scenario should be very rare (two consecutive scans on the same region from two different clients get the same scannerId, and the first expires before the second is created), but it can indeed happen, and once it happens the consequence is severe (all clients involved get wrong data) and extremely hard to diagnose/debug. -- This message was sent by Atlassian JIRA (v6.2#6252)
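One way to get scanner ids that are unique for a regionserver's whole lifetime is to mix a per-process epoch into the id alongside a monotonically increasing counter. The sketch below is illustrative of that idea only, not the committed fix; the bit split is an arbitrary choice.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: scanner ids that never repeat within one server process, so an
// expired scanner's id cannot collide with a freshly created scanner's id.
public class ScannerIdGenerator {
    private final long epochBits;           // fixed for the life of this process
    private final AtomicLong counter = new AtomicLong();

    public ScannerIdGenerator(long startMillis) {
        // keep 24 bits of the process start time in the high bits,
        // leaving 40 low bits for the counter
        this.epochBits = (startMillis & 0xFFFFFFL) << 40;
    }

    /** Unique within this generator until 2^40 ids have been issued. */
    public long next() {
        return epochBits | (counter.incrementAndGet() & 0xFFFFFFFFFFL);
    }
}
```

Because the counter only moves forward, two scanners created at different times in the same process can never share an id, which removes the collision window the scenario above depends on.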
[jira] [Commented] (HBASE-8076) add better doc for HBaseAdmin#offline API.
[ https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925012#comment-13925012 ] Hudson commented on HBASE-8076: --- FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #199 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/199/]) HBASE-8076 add better doc for HBaseAdmin#offline API.(Rajesh) (rajeshbabu: rev 1575581) * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java add better doc for HBaseAdmin#offline API. -- Key: HBASE-8076 URL: https://issues.apache.org/jira/browse/HBASE-8076 Project: HBase Issue Type: Improvement Components: Admin Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-8076.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10651) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication
[ https://issues.apache.org/jira/browse/HBASE-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925014#comment-13925014 ] Hudson commented on HBASE-10651: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #200 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/200/]) HBASE-10651 Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication (Honghua Feng) (stack: rev 1575606) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication -- Key: HBASE-10651 URL: https://issues.apache.org/jira/browse/HBASE-10651 Project: HBase Issue Type: Sub-task Components: regionserver, Replication Reporter: Feng Honghua Assignee: Feng Honghua Fix For: 0.98.1, 0.99.0 Attachments: HBASE-10651-trunk_v1.patch, HBASE-10651-trunk_v2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8076) add better doc for HBaseAdmin#offline API.
[ https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925015#comment-13925015 ] Hudson commented on HBASE-8076: --- SUCCESS: Integrated in hbase-0.96 #334 (See [https://builds.apache.org/job/hbase-0.96/334/]) HBASE-8076 add better doc for HBaseAdmin#offline API.(Rajesh) (rajeshbabu: rev 1575582) * /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java add better doc for HBaseAdmin#offline API. -- Key: HBASE-8076 URL: https://issues.apache.org/jira/browse/HBASE-8076 Project: HBase Issue Type: Improvement Components: Admin Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-8076.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8304) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925013#comment-13925013 ] Hudson commented on HBASE-8304: --- FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #200 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/200/]) HBASE-8304 Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port (Haosdent) (tedyu: rev 1575588) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSHDFSUtils.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Fix For: 0.98.1, 0.99.0 Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the file in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
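The default-port idea suggested at the end of the description can be sketched in plain Java. This is a hypothetical illustration with invented names, not the actual FSHDFSUtils change: when one URI omits the port, substitute the scheme's default port before comparing, so hdfs://ip and hdfs://ip:8020 compare as the same filesystem and bulkload can rename instead of copy.

```java
import java.net.URI;

public class FsUriMatch {
    static final int DEFAULT_NN_PORT = 8020; // assumed HDFS NameNode default port

    static boolean sameFs(String srcUri, String destUri) {
        URI src = URI.create(srcUri);
        URI dest = URI.create(destUri);
        if (!src.getScheme().equalsIgnoreCase(dest.getScheme())) return false;
        if (!src.getHost().equalsIgnoreCase(dest.getHost())) return false;
        // URI.getPort() returns -1 when the port is unspecified; fill in the
        // scheme's default before comparing.
        int sp = src.getPort() == -1 ? DEFAULT_NN_PORT : src.getPort();
        int dp = dest.getPort() == -1 ? DEFAULT_NN_PORT : dest.getPort();
        return sp == dp;
    }

    public static void main(String[] args) {
        // Without the substitution these two would be treated as different
        // filesystems, and the loaded HFile would be copied instead of moved.
        System.out.println(sameFs("hdfs://10.0.0.1", "hdfs://10.0.0.1:8020"));
    }
}
```

In a real fix the default port would come from the filesystem's configuration rather than a hard-coded constant, since the NameNode port is configurable.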
[jira] [Commented] (HBASE-10514) Forward port HBASE-10466, possible data loss when failed flushes
[ https://issues.apache.org/jira/browse/HBASE-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925041#comment-13925041 ] stack commented on HBASE-10514: --- My added tests are not closing the WAL (and making zombies). Even fixing that, there are some stragglers still. Looking. Forward port HBASE-10466, possible data loss when failed flushes Key: HBASE-10514 URL: https://issues.apache.org/jira/browse/HBASE-10514 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18 Attachments: 10514.txt, 10514v2.txt, 10514v3.txt, 10514v3.txt Critical data loss issues that we need to ensure are not in branches beyond 0.89fb. Assigning myself. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10642) Add M/R over snapshots sample code to 0.94
[ https://issues.apache.org/jira/browse/HBASE-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925045#comment-13925045 ] Lars Hofhansl commented on HBASE-10642: --- [~stack], do you want me to backport the large (and complete) patch from HBASE-8369 into 0.96, or rather forward port the more minimal patch I have here for 0.94? The HBASE-8369 patch is pretty invasive. For 0.94 I definitely prefer the smaller patch. 0.96, I can see going either way. I'll make sure the M/R APIs are the same either way. [~ndimiduk], even 0.98/trunk do not have a mapred version. We can file a separate issue to add mapred support to all branches (or not, depending on discussion) for Hive support. Add M/R over snapshots sample code to 0.94 -- Key: HBASE-10642 URL: https://issues.apache.org/jira/browse/HBASE-10642 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Fix For: 0.94.18 Attachments: 10642-0.94-v2.txt, 10642-0.94.txt, SnapshotInputFormat.java I think we want to drive towards all (or most) M/R over HBase being against snapshots and HDFS directly. Adopting a simple input format (even if just as a sample) as part of HBase will allow us to direct users this way. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-8304) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-8304: -- Status: Open (was: Patch Available) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Fix For: 0.98.1, 0.99.0 Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the file in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10651) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication
[ https://issues.apache.org/jira/browse/HBASE-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925048#comment-13925048 ] Hudson commented on HBASE-10651: SUCCESS: Integrated in HBase-TRUNK #4993 (See [https://builds.apache.org/job/HBase-TRUNK/4993/]) HBASE-10651 Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication (Honghua Feng) (stack: rev 1575605) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication -- Key: HBASE-10651 URL: https://issues.apache.org/jira/browse/HBASE-10651 Project: HBase Issue Type: Sub-task Components: regionserver, Replication Reporter: Feng Honghua Assignee: Feng Honghua Fix For: 0.98.1, 0.99.0 Attachments: HBASE-10651-trunk_v1.patch, HBASE-10651-trunk_v2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8304) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925047#comment-13925047 ] Hudson commented on HBASE-8304: --- SUCCESS: Integrated in HBase-TRUNK #4993 (See [https://builds.apache.org/job/HBase-TRUNK/4993/]) HBASE-8304 Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port (Haosdent) (tedyu: rev 1575590) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSHDFSUtils.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Fix For: 0.98.1, 0.99.0 Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the file in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10706) Disable writeToWal in tests where possible
[ https://issues.apache.org/jira/browse/HBASE-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-10706: -- Status: Patch Available (was: Open) Disable writeToWal in tests where possible -- Key: HBASE-10706 URL: https://issues.apache.org/jira/browse/HBASE-10706 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Attachments: 10706-trunk-v1.txt See discussion in HBASE-10665. We should disable writeToWal in all tests except for those that test WAL-specific stuff, in order to speed up the test suite. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10706) Disable writeToWal in tests where possible
[ https://issues.apache.org/jira/browse/HBASE-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-10706: -- Attachment: 10706-trunk-v1.txt Here's a trunk version. Affected methods: * HBaseTestCase.addContent... Looks like none of the callers need the WAL * HBaseTestingUtility.loadRegion... None of the callers need the WAL * HBaseTestingUtility.loadTable... Some callers need the WAL here. Looked through each case. I'll wait for a test run; if passing, I'll remove further unused methods. Disable writeToWal in tests where possible -- Key: HBASE-10706 URL: https://issues.apache.org/jira/browse/HBASE-10706 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Attachments: 10706-trunk-v1.txt See discussion in HBASE-10665. We should disable writeToWal in all tests except for those that test WAL-specific stuff, in order to speed up the test suite. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (HBASE-10642) Add M/R over snapshots sample code to 0.94
[ https://issues.apache.org/jira/browse/HBASE-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925045#comment-13925045 ] Lars Hofhansl edited comment on HBASE-10642 at 3/8/14 11:45 PM: [~stack], do you want me to backport the large (and complete) patch from HBASE-8369 into 0.96, or rather forward port the more minimal patch I have here for 0.94? The HBASE-8369 patch is pretty invasive. For 0.94 I definitely prefer the smaller patch. 0.96, I can see going either way. I'll make sure the M/R APIs are the same either way. [~ndimiduk], even 0.98/trunk do not have a mapred version. We can file a separate issue to add mapred support to all branches (or not, depending on discussion) for HIve support. was (Author: lhofhansl): [~stack], do you want me to backport the large (and complete) patch from HBASE-8369 in 0.96, rather forward port the more minimal patch I have here for 0.94? The HBASE-8369 patch is pretty invasive. For 0.94 I definitely prefer the smaller patch. 0.96, I can see going either way. I'll make sure the M/R APIs are the same either way. [~ndimiduk], even 0.98/trunk do not have a mapred version. We can file a separate issue to add mapred support to all branches (or not, depending on discussion) for HIve support. Add M/R over snapshots sample code to 0.94 -- Key: HBASE-10642 URL: https://issues.apache.org/jira/browse/HBASE-10642 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Fix For: 0.94.18 Attachments: 10642-0.94-v2.txt, 10642-0.94.txt, SnapshotInputFormat.java I think we want drive towards all (or most) M/R over HBase to be against snapshots and HDFS directly. Adopting a simple input format (even if just as a sample) as part of HBase will allow us to direct users this way. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-10707) Backport parent issue to 0.96
stack created HBASE-10707: - Summary: Backport parent issue to 0.96 Key: HBASE-10707 URL: https://issues.apache.org/jira/browse/HBASE-10707 Project: HBase Issue Type: Sub-task Reporter: stack Assignee: stack -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10707) Backport parent issue to 0.96
[ https://issues.apache.org/jira/browse/HBASE-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-10707: -- Attachment: 10707.txt Minor diffs from parent project patch. Backport parent issue to 0.96 - Key: HBASE-10707 URL: https://issues.apache.org/jira/browse/HBASE-10707 Project: HBase Issue Type: Sub-task Components: mapreduce, snapshots Reporter: stack Assignee: stack Fix For: 0.98.0 Attachments: 10707.txt -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10707) Backport parent issue to 0.96
[ https://issues.apache.org/jira/browse/HBASE-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925063#comment-13925063 ] Lars Hofhansl commented on HBASE-10707: --- Hey I was gonna do that in HBASE-10642 :) +1 on patch. Assuming it compiles and the various tests pass. Backport parent issue to 0.96 - Key: HBASE-10707 URL: https://issues.apache.org/jira/browse/HBASE-10707 Project: HBase Issue Type: Sub-task Components: mapreduce, snapshots Reporter: stack Assignee: stack Fix For: 0.98.0 Attachments: 10707.txt -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8076) add better doc for HBaseAdmin#offline API.
[ https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925066#comment-13925066 ] Hudson commented on HBASE-8076: --- FAILURE: Integrated in hbase-0.96-hadoop2 #230 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/230/]) HBASE-8076 add better doc for HBaseAdmin#offline API.(Rajesh) (rajeshbabu: rev 1575582) * /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java add better doc for HBaseAdmin#offline API. -- Key: HBASE-8076 URL: https://issues.apache.org/jira/browse/HBASE-8076 Project: HBase Issue Type: Improvement Components: Admin Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-8076.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10660) MR over snapshots can OOM when alternative blockcache is enabled
[ https://issues.apache.org/jira/browse/HBASE-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925072#comment-13925072 ] Hudson commented on HBASE-10660: FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #111 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/111/]) HBASE-10660 MR over snapshots can OOM when alternative blockcache is enabled (ndimiduk: rev 1575454) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceUtil.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java MR over snapshots can OOM when alternative blockcache is enabled Key: HBASE-10660 URL: https://issues.apache.org/jira/browse/HBASE-10660 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.98.0, 0.99.0 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.1, 0.99.0 Attachments: HBASE-10660.00.patch, HBASE-10660.01.patch, HBASE-10660.02.patch, HBASE-10660.03.patch Running {{IntegrationTestTableSnapshotInputFormat}} with the {{BucketCache}} enabled results in OOM. The map task is running a sequential scan over the region it opened, so it's probably not benefiting much from a blockcache. Just disable the blockcache entirely for these scans, because the cache config detected is likely designed for a RS running on a different class of hardware than that running the map task. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8076) add better doc for HBaseAdmin#offline API.
[ https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925069#comment-13925069 ] Hudson commented on HBASE-8076: --- FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #111 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/111/]) HBASE-8076 add better doc for HBaseAdmin#offline API.(Rajesh) (rajeshbabu: rev 1575580) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java add better doc for HBaseAdmin#offline API. -- Key: HBASE-8076 URL: https://issues.apache.org/jira/browse/HBASE-8076 Project: HBase Issue Type: Improvement Components: Admin Reporter: rajeshbabu Assignee: rajeshbabu Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-8076.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8304) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925068#comment-13925068 ] Hudson commented on HBASE-8304: --- FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #111 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/111/]) HBASE-8304 Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port (Haosdent) (tedyu: rev 1575590) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSHDFSUtils.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port --- Key: HBASE-8304 URL: https://issues.apache.org/jira/browse/HBASE-8304 Project: HBase Issue Type: Bug Components: HFile, regionserver Affects Versions: 0.94.5 Reporter: Raymond Liu Assignee: haosdent Labels: bulkloader Fix For: 0.98.1, 0.99.0 Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch When fs.default.name or fs.defaultFS in hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the file in the bulk output dir. Store::bulkLoadHfile will treat hdfs://ip and hdfs://ip:port as different filesystems and go with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion what is the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8604) improve reporting of incorrect peer address in replication
[ https://issues.apache.org/jira/browse/HBASE-8604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925074#comment-13925074 ] Hudson commented on HBASE-8604: --- FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #111 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/111/]) HBASE-8604 improve reporting of incorrect peer address in replication (stack: rev 1575459) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java improve reporting of incorrect peer address in replication -- Key: HBASE-8604 URL: https://issues.apache.org/jira/browse/HBASE-8604 Project: HBase Issue Type: Improvement Components: Replication Affects Versions: 0.94.6 Reporter: Roman Shaposhnik Assignee: Rekha Joshi Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.19 Attachments: HBASE-8604.1.patch I was running some replication code that incorrectly advertised the peer address for replication. HBase complained that the format of the record was NOT what it was expecting, but it didn't include what it saw in the exception message. Including that string would help cut down the time it takes to debug issues like that. -- This message was sent by Atlassian JIRA (v6.2#6252)
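The kind of change this improvement calls for can be shown with a self-contained sketch. The names below are invented for illustration (the actual fix lives in ZKUtil.java): the point is simply to echo the offending string in the exception message instead of only saying the format was wrong.

```java
public class PeerAddress {
    // Expect a peer address of the form "host:port:zkPath",
    // e.g. "zk1.example.com:2181:/hbase" (hypothetical example value).
    static String[] parse(String key) {
        String[] parts = key.split(":", 3);
        if (parts.length != 3) {
            // Before: a bare "unable to parse" message. After: the malformed
            // input appears in the message, making misconfigurations quick to
            // spot in the logs.
            throw new IllegalArgumentException(
                "Expected a peer address of the form host:port:zkPath, but was: '"
                    + key + "'");
        }
        return parts;
    }

    public static void main(String[] args) {
        try {
            parse("zk1.example.com-2181"); // malformed on purpose
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```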
[jira] [Commented] (HBASE-10651) Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication
[ https://issues.apache.org/jira/browse/HBASE-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925073#comment-13925073 ] Hudson commented on HBASE-10651: FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #111 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/111/]) HBASE-10651 Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication (Honghua Feng) (stack: rev 1575605) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication -- Key: HBASE-10651 URL: https://issues.apache.org/jira/browse/HBASE-10651 Project: HBase Issue Type: Sub-task Components: regionserver, Replication Reporter: Feng Honghua Assignee: Feng Honghua Fix For: 0.98.1, 0.99.0 Attachments: HBASE-10651-trunk_v1.patch, HBASE-10651-trunk_v2.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10700) IntegrationTestWithCellVisibilityLoadAndVerify should allow current user to be the admin
[ https://issues.apache.org/jira/browse/HBASE-10700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925071#comment-13925071 ] Hudson commented on HBASE-10700: FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #111 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/111/]) HBASE-10700 IntegrationTestWithCellVisibilityLoadAndVerify should allow current user to be the admin (tedyu: rev 1575486) * /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestWithCellVisibilityLoadAndVerify.java IntegrationTestWithCellVisibilityLoadAndVerify should allow current user to be the admin Key: HBASE-10700 URL: https://issues.apache.org/jira/browse/HBASE-10700 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.1, 0.99.0 Attachments: 10700-v1.txt, 10700-v2.txt When we ran IntegrationTestWithCellVisibilityLoadAndVerify on secure cluster, we observed: {code} 2014-03-06 05:00:29,210|beaver.machine|INFO|2014-03-06 05:00:29,209 WARN [main] security.UserGroupInformation: PriviledgedActionException as:admin (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] 2014-03-06 05:00:29,211|beaver.machine|INFO|2014-03-06 05:00:29,211 WARN [main] ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] 2014-03-06 05:00:29,211|beaver.machine|INFO|2014-03-06 05:00:29,211 WARN [main] security.UserGroupInformation: PriviledgedActionException as:admin (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] 2014-03-06 05:00:29,214|beaver.machine|INFO|2014-03-06 05:00:29,214 WARN [main] security.UserGroupInformation: PriviledgedActionException as:admin (auth:SIMPLE) cause:java.io.IOException: Failed on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: 17n27.net/206.190.52.46; destination host is: 17n26.net:8020; {code} This was due to the test using admin as the super user, but this user wasn't set up on the secure cluster. User hbase had been set up and was used to run the test. The test should use the current user as the admin, if it is different from admin. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10663) Some code cleanup of class Leases and ScannerListener.leaseExpired
[ https://issues.apache.org/jira/browse/HBASE-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925070#comment-13925070 ] Hudson commented on HBASE-10663: FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #111 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/111/]) HBASE-10663 Some code cleanup of class Leases and ScannerListener.leaseExpired (stack: rev 1575451) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Leases.java Some code cleanup of class Leases and ScannerListener.leaseExpired -- Key: HBASE-10663 URL: https://issues.apache.org/jira/browse/HBASE-10663 Project: HBase Issue Type: Improvement Components: regionserver Reporter: Feng Honghua Assignee: Feng Honghua Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10663-trunk_v1.patch Some cleanup of Leases and ScannerListener.leaseExpired: # Reject renewLease if stopRequested (same as addLease; stopRequested means Leases has been asked to stop and is waiting for all remaining leases to expire) # Raise log level from info to warn for the case where no related region scanner is found when a lease expires (should it be an error?) # Replace System.currentTimeMillis() with EnvironmentEdgeManager.currentTimeMillis() # Correct some wrong comments and remove some irrelevant comments (was a Queue rather than a Map used for leases before?) -- This message was sent by Atlassian JIRA (v6.2#6252)
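Item 3 above (preferring EnvironmentEdgeManager.currentTimeMillis() over System.currentTimeMillis()) is about making the clock injectable. A minimal sketch of the pattern, with invented names (not the actual Leases code): a test can swap in a fake time source and expire a lease instantly rather than sleeping for the lease period.

```java
public class LeaseClock {
    // Analogous in spirit to HBase's EnvironmentEdge: a pluggable time source.
    interface Edge { long currentTimeMillis(); }

    static Edge edge = System::currentTimeMillis; // production default

    static boolean expired(long leaseStartMs, long leasePeriodMs) {
        // All time reads go through the edge, never System directly.
        return edge.currentTimeMillis() - leaseStartMs > leasePeriodMs;
    }

    public static void main(String[] args) {
        final long start = edge.currentTimeMillis();
        final long period = 60_000;

        // A test swaps in a fake edge that jumps past the lease period --
        // no 60-second sleep required.
        edge = () -> start + period + 1;
        System.out.println(expired(start, period)); // prints true
    }
}
```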
[jira] [Commented] (HBASE-10706) Disable writeToWal in tests where possible
[ https://issues.apache.org/jira/browse/HBASE-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925082#comment-13925082 ] Hadoop QA commented on HBASE-10706: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12633568/10706-trunk-v1.txt against trunk revision . ATTACHMENT ID: 12633568 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 36 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8932//console This message is automatically generated. Disable writeToWal in tests where possible -- Key: HBASE-10706 URL: https://issues.apache.org/jira/browse/HBASE-10706 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Attachments: 10706-trunk-v1.txt See discussion in HBASE-10665. We should disable writeToWal in all tests except for those that test WAL-specific stuff, in order to speed up the test suite. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HBASE-10707) Backport parent issue to 0.96
[ https://issues.apache.org/jira/browse/HBASE-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
stack resolved HBASE-10707.
---------------------------
Resolution: Fixed
Fix Version/s: (was: 0.98.0)
               0.96.2
Release Note: See parent issue for release note on how to use this new feature.
Committed to 0.96

Backport parent issue to 0.96
-----------------------------
Key: HBASE-10707
URL: https://issues.apache.org/jira/browse/HBASE-10707
Project: HBase
Issue Type: Sub-task
Components: mapreduce, snapshots
Reporter: stack
Assignee: stack
Fix For: 0.96.2
Attachments: 10707.txt

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10642) Add M/R over snapshots sample code to 0.94
[ https://issues.apache.org/jira/browse/HBASE-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925087#comment-13925087 ]
stack commented on HBASE-10642:
-------------------------------
[~lhofhansl] I backported HBASE-8389 in HBASE-10707. This issue is about 0.94 only now.

Add M/R over snapshots sample code to 0.94
------------------------------------------
Key: HBASE-10642
URL: https://issues.apache.org/jira/browse/HBASE-10642
Project: HBase
Issue Type: Bug
Reporter: Lars Hofhansl
Fix For: 0.94.18
Attachments: 10642-0.94-v2.txt, 10642-0.94.txt, SnapshotInputFormat.java

I think we want to drive towards all (or most) M/R over HBase running against snapshots and HDFS directly. Adopting a simple input format (even if just as a sample) as part of HBase will allow us to direct users this way.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8304) Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
[ https://issues.apache.org/jira/browse/HBASE-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925108#comment-13925108 ]
haosdent commented on HBASE-8304:
---------------------------------
{quote}
attaching patch for 0.96 and 0.94
{quote}
[~te...@apache.org] No problem, I will attach the patches later.

Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
---------------------------------------------------------------------------------------------------
Key: HBASE-8304
URL: https://issues.apache.org/jira/browse/HBASE-8304
Project: HBase
Issue Type: Bug
Components: HFile, regionserver
Affects Versions: 0.94.5
Reporter: Raymond Liu
Assignee: haosdent
Labels: bulkloader
Fix For: 0.98.1, 0.99.0
Attachments: 8304-v4.patch, HBASE-8304-v2.patch, HBASE-8304-v3.patch, HBASE-8304.patch

When fs.default.name or fs.defaultFS in the hadoop core-site.xml is configured as hdfs://ip, and hbase.rootdir is configured as hdfs://ip:port/hbaserootdir where port is the hdfs namenode's default port, the bulkload operation will not remove the files in the bulk output dir. Store::bulkLoadHfile treats hdfs://ip and hdfs://ip:port as different filesystems and goes with the copy approach instead of rename. The root cause is that the hbase master rewrites fs.default.name/fs.defaultFS according to hbase.rootdir when the regionserver starts; thus, the dest fs uri from the hregion will not match the src fs uri passed from the client. Any suggestion on the best approach to fix this issue? I kind of think that we could check for the default port if the src uri comes without port info.

--
This message was sent by Atlassian JIRA (v6.2#6252)
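The idea floated in the description — treat a URI that carries no port as equivalent to one carrying the scheme's default port, so hdfs://ip and hdfs://ip:port compare as the same filesystem — can be sketched in plain Java. This is an illustrative sketch only, not the attached patch: the class and method names are invented for the example, and the hard-coded 8020 stands in for the NameNode default port that a real fix would obtain from the Hadoop Configuration/FileSystem.

```java
import java.net.URI;

public class FsUriCompare {
    // Assumed default NameNode RPC port for this sketch; a real fix would
    // ask Hadoop for the scheme's default port instead of hard-coding it.
    private static final int HDFS_DEFAULT_PORT = 8020;

    // Compare two filesystem URIs, treating a missing port as the
    // scheme's default so hdfs://ip matches hdfs://ip:8020.
    public static boolean sameFs(URI src, URI dest) {
        if (!eq(src.getScheme(), dest.getScheme())) return false;
        if (!eq(src.getHost(), dest.getHost())) return false;
        return normalizePort(src) == normalizePort(dest);
    }

    private static int normalizePort(URI uri) {
        int port = uri.getPort(); // -1 when the URI carries no explicit port
        if (port == -1 && "hdfs".equalsIgnoreCase(uri.getScheme())) {
            return HDFS_DEFAULT_PORT;
        }
        return port;
    }

    private static boolean eq(String a, String b) {
        return a == null ? b == null : a.equalsIgnoreCase(b);
    }

    public static void main(String[] args) {
        // Same filesystem once the default port is filled in.
        System.out.println(sameFs(URI.create("hdfs://10.0.0.1"),
                                  URI.create("hdfs://10.0.0.1:8020"))); // true
        // Different explicit port: still a different filesystem.
        System.out.println(sameFs(URI.create("hdfs://10.0.0.1"),
                                  URI.create("hdfs://10.0.0.1:9000"))); // false
    }
}
```

With this kind of normalization, the rename fast path would be taken even when only one side of the comparison spells out the port.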
[jira] [Commented] (HBASE-10642) Add M/R over snapshots sample code to 0.94
[ https://issues.apache.org/jira/browse/HBASE-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925109#comment-13925109 ]
Lars Hofhansl commented on HBASE-10642:
---------------------------------------
Thanks [~stack]

Add M/R over snapshots sample code to 0.94
------------------------------------------
Key: HBASE-10642
URL: https://issues.apache.org/jira/browse/HBASE-10642
Project: HBase
Issue Type: Bug
Reporter: Lars Hofhansl
Fix For: 0.94.18
Attachments: 10642-0.94-v2.txt, 10642-0.94.txt, SnapshotInputFormat.java

I think we want to drive towards all (or most) M/R over HBase running against snapshots and HDFS directly. Adopting a simple input format (even if just as a sample) as part of HBase will allow us to direct users this way.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10707) Backport parent issue to 0.96
[ https://issues.apache.org/jira/browse/HBASE-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925115#comment-13925115 ]
Hudson commented on HBASE-10707:
--------------------------------
FAILURE: Integrated in hbase-0.96 #335 (See [https://builds.apache.org/job/hbase-0.96/335/])
HBASE-10707 Backport parent issue to 0.96 (stack: rev 1575645)
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractClientScanner.java
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* /hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* /hbase/branches/0.96/hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java
* /hbase/branches/0.96/hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestTableSnapshotInputFormat.java
* /hbase/branches/0.96/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MapReduceProtos.java
* /hbase/branches/0.96/hbase-protocol/src/main/protobuf/MapReduce.proto
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/HDFSBlocksDistribution.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/client/ClientSideRegionScanner.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/AbstractHBaseTool.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java

Backport parent issue to 0.96
-----------------------------
Key: HBASE-10707
URL: https://issues.apache.org/jira/browse/HBASE-10707
Project: HBase
Issue Type: Sub-task
Components: mapreduce, snapshots
Reporter: stack
Assignee: stack
Fix For: 0.96.2
Attachments: 10707.txt

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10642) Add M/R over snapshots to 0.94
[ https://issues.apache.org/jira/browse/HBASE-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lars Hofhansl updated HBASE-10642:
----------------------------------
Summary: Add M/R over snapshots to 0.94 (was: Add M/R over snapshots sample code to 0.94)

Add M/R over snapshots to 0.94
------------------------------
Key: HBASE-10642
URL: https://issues.apache.org/jira/browse/HBASE-10642
Project: HBase
Issue Type: Bug
Reporter: Lars Hofhansl
Fix For: 0.94.18
Attachments: 10642-0.94-v2.txt, 10642-0.94.txt, SnapshotInputFormat.java

I think we want to drive towards all (or most) M/R over HBase running against snapshots and HDFS directly. Adopting a simple input format (even if just as a sample) as part of HBase will allow us to direct users this way.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10642) Add M/R over snapshots to 0.94
[ https://issues.apache.org/jira/browse/HBASE-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lars Hofhansl updated HBASE-10642:
----------------------------------
Attachment: 10642-0.94-v3.txt

v3 should be close to the final version.

Add M/R over snapshots to 0.94
------------------------------
Key: HBASE-10642
URL: https://issues.apache.org/jira/browse/HBASE-10642
Project: HBase
Issue Type: Bug
Reporter: Lars Hofhansl
Fix For: 0.94.18
Attachments: 10642-0.94-v2.txt, 10642-0.94-v3.txt, 10642-0.94.txt, SnapshotInputFormat.java

I think we want to drive towards all (or most) M/R over HBase running against snapshots and HDFS directly. Adopting a simple input format (even if just as a sample) as part of HBase will allow us to direct users this way.

--
This message was sent by Atlassian JIRA (v6.2#6252)