[jira] [Updated] (HBASE-8039) Make HDFS replication number configurable for a column family
[ https://issues.apache.org/jira/browse/HBASE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8039: - Fix Version/s: (was: 0.99.0) 2.0.0 Make HDFS replication number configurable for a column family - Key: HBASE-8039 URL: https://issues.apache.org/jira/browse/HBASE-8039 Project: HBase Issue Type: Improvement Components: HFile Reporter: Maryann Xue Priority: Minor Fix For: 2.0.0 To allow users to decide which column family's data is more important and which is less important by specifying a replica number instead of using the default replica number. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-6581) Build with hadoop.profile=3.0
[ https://issues.apache.org/jira/browse/HBASE-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-6581: - Fix Version/s: (was: 0.99.0) 0.99.1 2.0.0 Build with hadoop.profile=3.0 - Key: HBASE-6581 URL: https://issues.apache.org/jira/browse/HBASE-6581 Project: HBase Issue Type: Bug Reporter: Eric Charles Assignee: Eric Charles Fix For: 2.0.0, 0.99.1 Attachments: HBASE-6581-1.patch, HBASE-6581-2.patch, HBASE-6581-20130821.patch, HBASE-6581-3.patch, HBASE-6581-4.patch, HBASE-6581-5.patch, HBASE-6581-6.patch, HBASE-6581-7.patch, HBASE-6581-8-pom.patch, HBASE-6581.diff, HBASE-6581.diff Building trunk with hadoop.profile=3.0 gives exceptions (see [1]) due to change in the hadoop maven modules naming (and also usage of 3.0-SNAPSHOT instead of 3.0.0-SNAPSHOT in hbase-common). I can provide a patch that would move most of hadoop dependencies in their respective profiles and will define the correct hadoop deps in the 3.0 profile. Please tell me if that's ok to go this way. Thx, Eric [1] $ mvn clean install -Dhadoop.profile=3.0 [INFO] Scanning for projects... [ERROR] The build could not read 3 projects - [Help 1] [ERROR] [ERROR] The project org.apache.hbase:hbase-server:0.95-SNAPSHOT (/d/hbase.svn/hbase-server/pom.xml) has 3 errors [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-common:jar is missing. @ line 655, column 21 [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-annotations:jar is missing. @ line 659, column 21 [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 663, column 21 [ERROR] [ERROR] The project org.apache.hbase:hbase-common:0.95-SNAPSHOT (/d/hbase.svn/hbase-common/pom.xml) has 3 errors [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-common:jar is missing. @ line 170, column 21 [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-annotations:jar is missing. 
@ line 174, column 21 [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 178, column 21 [ERROR] [ERROR] The project org.apache.hbase:hbase-it:0.95-SNAPSHOT (/d/hbase.svn/hbase-it/pom.xml) has 3 errors [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-common:jar is missing. @ line 220, column 18 [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-annotations:jar is missing. @ line 224, column 21 [ERROR] 'dependencies.dependency.version' for org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 228, column 21 [ERROR] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
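The dependency-version errors above come from Hadoop artifacts being referenced outside any profile. A sketch of the per-profile layout the patch proposes (the profile id, activation property, and version value here are illustrative, not the exact patch):

```xml
<!-- Illustrative sketch (not the actual HBASE-6581 patch): keep Hadoop
     dependencies out of the default <dependencies> and declare them per
     profile, so each profile pins its own artifact names and versions. -->
<profile>
  <id>hadoop-3.0</id>
  <activation>
    <property><name>hadoop.profile</name><value>3.0</value></property>
  </activation>
  <properties>
    <hadoop.version>3.0.0-SNAPSHOT</hadoop.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
  </dependencies>
</profile>
```

With this layout, `mvn clean install -Dhadoop.profile=3.0` resolves versions from the active profile instead of failing with missing `dependencies.dependency.version` entries.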
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126653#comment-14126653 ] Hadoop QA commented on HBASE-8642: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12585796/8642-trunk-0.95-v2.patch against trunk revision . ATTACHMENT ID: 12585796 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10772//console This message is automatically generated. [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.99.1 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 8642-trunk-0.95-v2.patch Support listing and deleting snapshots by table name. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-6562) Fake KVs are sometimes passed to filters
[ https://issues.apache.org/jira/browse/HBASE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-6562: - Fix Version/s: (was: 0.99.0) 0.99.1 Fake KVs are sometimes passed to filters Key: HBASE-6562 URL: https://issues.apache.org/jira/browse/HBASE-6562 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.99.1 Attachments: 6562-0.94-v1.txt, 6562-0.96-v1.txt, 6562-v2.txt, 6562-v3.txt, 6562-v4.txt, 6562-v5.txt, 6562.txt, minimalTest.java In internal tests at Salesforce we found that fake row keys are sometimes passed to filters (Filter.filterRowKey(...) specifically). The KVs are eventually filtered by the StoreScanner/ScanQueryMatcher, but the row key is passed to filterRowKey in RegionScannerImpl *before* that happens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-6416) hbck dies on NPE when a region folder exists but the table does not
[ https://issues.apache.org/jira/browse/HBASE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-6416: - Fix Version/s: (was: 0.99.0) 0.99.1 hbck dies on NPE when a region folder exists but the table does not --- Key: HBASE-6416 URL: https://issues.apache.org/jira/browse/HBASE-6416 Project: HBase Issue Type: Bug Reporter: Jean-Daniel Cryans Assignee: Jonathan Hsieh Fix For: 0.99.1 Attachments: hbase-6416-v1.patch, hbase-6416.patch This is what I'm getting for leftover data that has no .regioninfo First: {quote} 12/07/17 23:13:37 WARN util.HBaseFsck: Failed to read .regioninfo file for region null java.io.FileNotFoundException: File does not exist: /hbase/stumble_info_urlid_user/bd5f6cfed674389b4d7b8c1be227cb46/.regioninfo at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1822) at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.init(DFSClient.java:1813) at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:544) at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:187) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:456) at org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegioninfo(HBaseFsck.java:611) at org.apache.hadoop.hbase.util.HBaseFsck.access$2200(HBaseFsck.java:140) at org.apache.hadoop.hbase.util.HBaseFsck$WorkItemHdfsRegionInfo.call(HBaseFsck.java:2882) at org.apache.hadoop.hbase.util.HBaseFsck$WorkItemHdfsRegionInfo.call(HBaseFsck.java:2866) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) {quote} Then it hangs on: {quote} 12/07/17 23:13:39 INFO util.HBaseFsck: Attempting to handle orphan hdfs dir: hdfs://sfor3s24:10101/hbase/stumble_info_urlid_user/bd5f6cfed674389b4d7b8c1be227cb46 12/07/17 23:13:39 INFO util.HBaseFsck: checking orphan for table null Exception in thread "main" java.lang.NullPointerException at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$100(HBaseFsck.java:1634) at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:435) at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:408) at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:529) at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:313) at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:386) at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3227) {quote} The NPE is thrown by: {code} Preconditions.checkNotNull("Table " + tableName + " not present!", tableInfo); {code} I wonder why the condition checking was added if we don't handle it... In any case hbck dies, but then it hangs because there are some non-daemon threads hanging around. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
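For reference, Guava's two-argument Preconditions.checkNotNull takes the reference first and the error message second; with the arguments in the order quoted above, the null check applies to the message string (which is never null) rather than to tableInfo, so the precondition never fires and the NPE surfaces later. A self-contained sketch mirroring that signature (not HBase or Guava code):

```java
// Self-contained mirror of Guava's two-argument Preconditions.checkNotNull
// (signature: checkNotNull(T reference, Object errorMessage)); not HBase code.
public class CheckNotNullDemo {
    static <T> T checkNotNull(T reference, Object errorMessage) {
        if (reference == null) {
            throw new NullPointerException(String.valueOf(errorMessage));
        }
        return reference;
    }

    public static void main(String[] args) {
        Object tableInfo = null;
        String tableName = "t1";
        // Arguments in the order quoted in the JIRA: the non-null check is
        // applied to the message string, which is never null, so no NPE here.
        checkNotNull("Table " + tableName + " not present!", tableInfo);
        System.out.println("no exception with message-first argument order");
        // Reference-first order actually guards tableInfo and fails fast:
        try {
            checkNotNull(tableInfo, "Table " + tableName + " not present!");
        } catch (NullPointerException e) {
            System.out.println("NPE: " + e.getMessage());
        }
    }
}
```

This suggests the quoted check silently passes and the real NPE only appears once tableInfo is dereferenced downstream.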
[jira] [Updated] (HBASE-5839) Backup master not starting up due to Bind Exception while starting HttpServer
[ https://issues.apache.org/jira/browse/HBASE-5839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-5839: - Fix Version/s: (was: 0.99.0) 2.0.0 Backup master not starting up due to Bind Exception while starting HttpServer - Key: HBASE-5839 URL: https://issues.apache.org/jira/browse/HBASE-5839 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Fix For: 2.0.0 The backup master tries to bind to the info port 60010. This is done once the backup master becomes active. Even before that, the DataXceiver threads (IPC handlers) are started, and they bind to random ports. If port 60010 is already in use, the standby master fails with a bind exception when it comes up. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5617) Provide coprocessor hooks in put flow while rollbackMemstore.
[ https://issues.apache.org/jira/browse/HBASE-5617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-5617: - Fix Version/s: (was: 0.99.0) 0.99.1 2.0.0 Provide coprocessor hooks in put flow while rollbackMemstore. - Key: HBASE-5617 URL: https://issues.apache.org/jira/browse/HBASE-5617 Project: HBase Issue Type: Improvement Components: Coprocessors Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 2.0.0, 0.99.1 Attachments: HBASE-5617_1.patch, HBASE-5617_2.patch With coprocessor hooks in the put path we have the provision to create new puts to other tables or regions. These puts can be done with writeToWal set to false. In 0.94 and above the puts are first written to the memstore and then to the WAL. If the WAL append or sync fails, the memstore is rolled back. Now the problem is that if the put in the main flow fails there is no way to roll back the puts that happened in prePut. We can add coprocessor hooks like pre/postRollbackMemstore. Is one hook enough here? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-5583) Master restart on create table with splitkeys does not recreate table with all the splitkey regions
[ https://issues.apache.org/jira/browse/HBASE-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-5583: - Fix Version/s: (was: 0.99.0) 0.99.1 Master restart on create table with splitkeys does not recreate table with all the splitkey regions --- Key: HBASE-5583 URL: https://issues.apache.org/jira/browse/HBASE-5583 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.99.1 Attachments: HBASE-5583_new_1.patch, HBASE-5583_new_1_review.patch, HBASE-5583_new_2.patch, HBASE-5583_new_4_WIP.patch, HBASE-5583_new_5_WIP_using_tableznode.patch - Create table using splitkeys - Master goes down before all regions are added to meta - On master restart the table is again enabled but with fewer regions than specified in splitkeys Anyway the client will get an exception if it called synchronous create table, but a table-exists check will say the table exists. Is this scenario to be handled by the client only, or can we have some mechanism on the master side for this? Please suggest. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-8035) Add site target check to precommit tests
[ https://issues.apache.org/jira/browse/HBASE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-8035. -- Resolution: Fixed This has been working for some time. Resolving. Add site target check to precommit tests Key: HBASE-8035 URL: https://issues.apache.org/jira/browse/HBASE-8035 Project: HBase Issue Type: Task Reporter: Andrew Purtell Assignee: stack Fix For: 0.99.0 Attachments: 0001-HBASE-8035-Add-site-generation-to-patch-validation.patch, 8035-addendum-remove-clean.txt, 8035-addendum.txt, 8035-addendum.txt, addendum.out, addendum_adds_-N.txt We should check that the Maven 'site' target passes as part of precommit testing. See HBASE-8022. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-8329) Limit compaction speed
[ https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8329: - Fix Version/s: (was: 0.99.0) 0.99.1 2.0.0 Limit compaction speed -- Key: HBASE-8329 URL: https://issues.apache.org/jira/browse/HBASE-8329 Project: HBase Issue Type: Improvement Components: Compaction Reporter: binlijin Assignee: Sergey Shelukhin Fix For: 2.0.0, 0.99.1 Attachments: HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, HBASE-8329-trunk.patch There is no speed or resource limit for compaction. I think we should add this feature, especially for request bursts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
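The kind of throughput limit this request describes can be sketched as a token-bucket byte limiter; this is a minimal standalone illustration assuming a fixed bytes-per-second budget, not the implementation in the attached patches:

```java
// Minimal token-bucket sketch of a compaction byte-rate limiter; assumes a
// fixed bytes/sec budget and is NOT the implementation in the attached patches.
public class CompactionThrottle {
    private final long bytesPerSecond;
    private long availableBytes;
    private long lastRefillNanos;

    public CompactionThrottle(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.availableBytes = bytesPerSecond;   // start with one second's budget
        this.lastRefillNanos = System.nanoTime();
    }

    /** Blocks until `bytes` may be written, refilling the budget over time. */
    public synchronized void acquire(long bytes) throws InterruptedException {
        while (true) {
            long now = System.nanoTime();
            long refill = (now - lastRefillNanos) * bytesPerSecond / 1_000_000_000L;
            if (refill > 0) {
                availableBytes = Math.min(bytesPerSecond, availableBytes + refill);
                lastRefillNanos = now;
            }
            if (availableBytes >= bytes) {
                availableBytes -= bytes;
                return;
            }
            Thread.sleep(10);  // wait for the bucket to refill
        }
    }
}
```

A compaction loop would call `acquire(blockSize)` before each write, which smooths compaction I/O to the configured rate during request bursts.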
[jira] [Updated] (HBASE-7386) Investigate providing some supervisor support for znode deletion
[ https://issues.apache.org/jira/browse/HBASE-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7386: - Fix Version/s: (was: 0.99.0) 2.0.0 Investigate providing some supervisor support for znode deletion Key: HBASE-7386 URL: https://issues.apache.org/jira/browse/HBASE-7386 Project: HBase Issue Type: Task Components: master, regionserver, scripts Reporter: Gregory Chanan Assignee: stack Priority: Blocker Fix For: 2.0.0 Attachments: HBASE-7386-bin-v2.patch, HBASE-7386-bin-v3.patch, HBASE-7386-bin.patch, HBASE-7386-conf-v2.patch, HBASE-7386-conf-v3.patch, HBASE-7386-conf.patch, HBASE-7386-src.patch, HBASE-7386-v0.patch, supervisordconfigs-v0.patch There are a couple of JIRAs for deleting the znode on a process failure: HBASE-5844 (RS) HBASE-5926 (Master) which are pretty neat; on process failure, they delete the znode of the underlying process so HBase can recover faster. These JIRAs were implemented via the startup scripts; i.e. the script hangs around and waits for the process to exit, then deletes the znode.
There are a few problems associated with this approach, as listed in the below JIRAs: 1) Hides startup output in script https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463401&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463401 2) Two hbase processes listed per launched daemon https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463409&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463409 3) Not run by a real supervisor https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463409&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463409 4) Weird output after kill -9 of the actual process in standalone mode https://issues.apache.org/jira/browse/HBASE-5926?focusedCommentId=13506801&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13506801 5) Can kill existing RS if called again https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463401&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463401 6) Hides stdout/stderr https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13506832&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13506832 I suspect running via something like supervisord can solve these issues if we provide the right support. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-8329) Limit compaction speed
[ https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126659#comment-14126659 ] Enis Soztutar commented on HBASE-8329: -- It seems that this has fallen through the cracks. Any takers? Limit compaction speed -- Key: HBASE-8329 URL: https://issues.apache.org/jira/browse/HBASE-8329 Project: HBase Issue Type: Improvement Components: Compaction Reporter: binlijin Assignee: Sergey Shelukhin Fix For: 2.0.0, 0.99.1 Attachments: HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, HBASE-8329-trunk.patch There is no speed or resource limit for compaction. I think we should add this feature, especially for request bursts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-7245) Recovery on failed snapshot restore
[ https://issues.apache.org/jira/browse/HBASE-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7245: - Fix Version/s: (was: 0.99.0) 0.99.1 Recovery on failed snapshot restore --- Key: HBASE-7245 URL: https://issues.apache.org/jira/browse/HBASE-7245 Project: HBase Issue Type: Bug Components: Client, master, regionserver, snapshots, Zookeeper Reporter: Jonathan Hsieh Assignee: Matteo Bertozzi Fix For: 0.99.1 Restore will do updates to the file system and to meta. It seems that an inopportune failure before meta is completely updated could result in an inconsistent state that would require hbck to fix. We should define what the semantics are for recovering from this. Some suggestions: 1) Fail forward (see some log saying restore's meta edits are not completed, then gather the information necessary to rebuild them from the fs, and complete the meta edits). 2) Fail backwards (see some log saying restore's meta edits are not completed, delete incomplete snapshot region entries from meta). I think I prefer 1 -- if two processes somehow end up updating (say the original master didn't die, and a new one started up), the updates would be idempotent. If we used 2, we could still have a race and still be in a bad place. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11692) Document how and why to do a manual region split
[ https://issues.apache.org/jira/browse/HBASE-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-11692: Summary: Document how and why to do a manual region split (was: Document how to do a manual region split) Document how and why to do a manual region split Key: HBASE-11692 URL: https://issues.apache.org/jira/browse/HBASE-11692 Project: HBase Issue Type: Task Components: documentation Reporter: Misty Stanley-Jones Assignee: Misty Stanley-Jones {quote} -- Forwarded message -- From: Liu, Ming (HPIT-GADSC) ming.l...@hp.com Date: Tue, Aug 5, 2014 at 11:28 PM Subject: Why hbase need manual split? To: u...@hbase.apache.org u...@hbase.apache.org Hi, all, As I understand, HBase will automatically split a region when the region is too big. So in what scenario, user needs to do a manual split? Could someone kindly give me some examples that user need to do the region split explicitly via HBase Shell or Java API? Thanks very much. Regards, Ming {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
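For reference, a manual split can be requested from the HBase shell; the table name and split key below are placeholders:

```
hbase> split 'my_table'               # ask the server to split each region of the table
hbase> split 'my_table', 'row0500'    # split at an explicit row key
```

Typical reasons to do this explicitly include pre-splitting a hot table or breaking up a region whose key distribution auto-splitting handles poorly.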
[jira] [Updated] (HBASE-11692) Document how and why to do a manual region split
[ https://issues.apache.org/jira/browse/HBASE-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-11692: Status: Patch Available (was: Open) Document how and why to do a manual region split Key: HBASE-11692 URL: https://issues.apache.org/jira/browse/HBASE-11692 Project: HBase Issue Type: Task Components: documentation Reporter: Misty Stanley-Jones Assignee: Misty Stanley-Jones Attachments: HBASE-11692.patch {quote} -- Forwarded message -- From: Liu, Ming (HPIT-GADSC) ming.l...@hp.com Date: Tue, Aug 5, 2014 at 11:28 PM Subject: Why hbase need manual split? To: u...@hbase.apache.org u...@hbase.apache.org Hi, all, As I understand, HBase will automatically split a region when the region is too big. So in what scenario, user needs to do a manual split? Could someone kindly give me some examples that user need to do the region split explicitly via HBase Shell or Java API? Thanks very much. Regards, Ming {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-5583) Master restart on create table with splitkeys does not recreate table with all the splitkey regions
[ https://issues.apache.org/jira/browse/HBASE-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126664#comment-14126664 ] Hadoop QA commented on HBASE-5583: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12576723/HBASE-5583_new_1_review.patch against trunk revision . ATTACHMENT ID: 12576723 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10778//console This message is automatically generated. Master restart on create table with splitkeys does not recreate table with all the splitkey regions --- Key: HBASE-5583 URL: https://issues.apache.org/jira/browse/HBASE-5583 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.99.1 Attachments: HBASE-5583_new_1.patch, HBASE-5583_new_1_review.patch, HBASE-5583_new_2.patch, HBASE-5583_new_4_WIP.patch, HBASE-5583_new_5_WIP_using_tableznode.patch - Create table using splitkeys - Master goes down before all regions are added to meta - On master restart the table is again enabled but with fewer regions than specified in splitkeys Anyway the client will get an exception if it called synchronous create table, but a table-exists check will say the table exists. Is this scenario to be handled by the client only, or can we have some mechanism on the master side for this? Please suggest. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-7108) Don't use legal family name for system folder at region level
[ https://issues.apache.org/jira/browse/HBASE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7108: - Fix Version/s: (was: 0.99.0) 2.0.0 Don't use legal family name for system folder at region level - Key: HBASE-7108 URL: https://issues.apache.org/jira/browse/HBASE-7108 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.92.2, 0.94.2, 0.95.2 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Fix For: 2.0.0 Attachments: HBASE-7108-v0.patch CHANGED, was: Don't allow recovered.edits as legal family name Region directories can contain folders called recovered.edits, related to log splitting. But there's nothing that prevents a user from creating a family with that name... HLog.RECOVERED_EDITS_DIR = "recovered.edits"; HRegion.MERGEDIR = "merges"; // fixed with HBASE-6158 SplitTransaction.SPLITDIR = "splits"; // fixed with HBASE-6158 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11692) Document how and why to do a manual region split
[ https://issues.apache.org/jira/browse/HBASE-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misty Stanley-Jones updated HBASE-11692: Attachment: HBASE-11692.patch Made a first attempt. Document how and why to do a manual region split Key: HBASE-11692 URL: https://issues.apache.org/jira/browse/HBASE-11692 Project: HBase Issue Type: Task Components: documentation Reporter: Misty Stanley-Jones Assignee: Misty Stanley-Jones Attachments: HBASE-11692.patch {quote} -- Forwarded message -- From: Liu, Ming (HPIT-GADSC) ming.l...@hp.com Date: Tue, Aug 5, 2014 at 11:28 PM Subject: Why hbase need manual split? To: u...@hbase.apache.org u...@hbase.apache.org Hi, all, As I understand, HBase will automatically split a region when the region is too big. So in what scenario, user needs to do a manual split? Could someone kindly give me some examples that user need to do the region split explicitly via HBase Shell or Java API? Thanks very much. Regards, Ming {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-6562) Fake KVs are sometimes passed to filters
[ https://issues.apache.org/jira/browse/HBASE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126663#comment-14126663 ] Hadoop QA commented on HBASE-6562: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12563194/6562-0.96-v1.txt against trunk revision . ATTACHMENT ID: 12563194 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10777//console This message is automatically generated. Fake KVs are sometimes passed to filters Key: HBASE-6562 URL: https://issues.apache.org/jira/browse/HBASE-6562 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.99.1 Attachments: 6562-0.94-v1.txt, 6562-0.96-v1.txt, 6562-v2.txt, 6562-v3.txt, 6562-v4.txt, 6562-v5.txt, 6562.txt, minimalTest.java In internal tests at Salesforce we found that fake row keys are sometimes passed to filters (Filter.filterRowKey(...) specifically). The KVs are eventually filtered by the StoreScanner/ScanQueryMatcher, but the row key is passed to filterRowKey in RegionScannerImpl *before* that happens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-5617) Provide coprocessor hooks in put flow while rollbackMemstore.
[ https://issues.apache.org/jira/browse/HBASE-5617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126665#comment-14126665 ] Hadoop QA commented on HBASE-5617: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12520451/HBASE-5617_2.patch against trunk revision . ATTACHMENT ID: 12520451 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10775//console This message is automatically generated. Provide coprocessor hooks in put flow while rollbackMemstore. - Key: HBASE-5617 URL: https://issues.apache.org/jira/browse/HBASE-5617 Project: HBase Issue Type: Improvement Components: Coprocessors Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 2.0.0, 0.99.1 Attachments: HBASE-5617_1.patch, HBASE-5617_2.patch With coprocessor hooks in the put path we have the provision to create new puts to other tables or regions. These puts can be done with writeToWal set to false. In 0.94 and above the puts are first written to the memstore and then to the WAL. If the WAL append or sync fails, the memstore is rolled back. Now the problem is that if the put in the main flow fails there is no way to roll back the puts that happened in prePut. We can add coprocessor hooks like pre/postRollbackMemstore. Is one hook enough here? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-6970) hbase-daemon.sh creates/updates pid file even when the start failed.
[ https://issues.apache.org/jira/browse/HBASE-6970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-6970: - Fix Version/s: (was: 0.99.0) 0.99.1 hbase-daemon.sh creates/updates pid file even when the start failed. - Key: HBASE-6970 URL: https://issues.apache.org/jira/browse/HBASE-6970 Project: HBase Issue Type: Bug Components: Usability Reporter: Lars Hofhansl Fix For: 0.99.1 We just ran into a strange issue where we could neither start nor stop services with hbase-daemon.sh. The problem is this: {code}
nohup nice -n $HBASE_NICENESS $HBASE_HOME/bin/hbase \
    --config ${HBASE_CONF_DIR} \
    $command $@ $startStop > $logout 2>&1 < /dev/null &
echo $! > $pid
{code} So the pid file is created or updated even when the start of the service failed. The next stop command will then fail, because the pid file has the wrong pid in it. Edit: Spelling and more spelling errors. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
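One way to avoid the stale-pid problem is to write the pid file only after confirming the child survived startup. A generic sketch of that pattern (using `sleep` as a stand-in daemon and illustrative /tmp paths; this is not the actual hbase-daemon.sh fix):

```shell
# Sketch: record the pid only if the child is still alive after startup.
logout=/tmp/demo-daemon.out
pidfile=/tmp/demo-daemon.pid

nohup sleep 30 > "$logout" 2>&1 < /dev/null &
child=$!
sleep 1                                   # let a fast startup failure surface
if kill -0 "$child" 2>/dev/null; then
  echo "$child" > "$pidfile"              # startup looks good: record the pid
else
  echo "startup failed; leaving $pidfile untouched" >&2
fi
```

A subsequent stop command then reads a pid that belonged to a process that actually started, instead of a pid recorded unconditionally.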
[jira] [Updated] (HBASE-6618) Implement FuzzyRowFilter with ranges support
[ https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-6618: - Fix Version/s: (was: 0.99.0) 0.99.1 Implement FuzzyRowFilter with ranges support Key: HBASE-6618 URL: https://issues.apache.org/jira/browse/HBASE-6618 Project: HBase Issue Type: New Feature Components: Filters Reporter: Alex Baranau Assignee: Alex Baranau Priority: Minor Fix For: 0.99.1 Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch, HBASE-6618_5.patch Apart from the current ability to specify a fuzzy row filter, e.g. for the userId_actionId format as "????_0004" (where 0004 is the actionId), it would be great to also have the ability to specify a fuzzy range, e.g. "????_0004", ..., "????_0099". See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65 Note: currently it is possible to provide multiple fuzzy row rules to the existing FuzzyRowFilter, but in case the range is big (contains thousands of values) it is not efficient. The filter should perform efficient fast-forwarding during the scan (this is what distinguishes it from a regex row filter). While such functionality may seem like a proper fit for a custom filter (i.e. not included in the standard filter set) it looks like the filter may be very reusable. We may judge based on the implementation that will hopefully be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
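The fixed-versus-wildcard matching at the heart of this kind of filter can be sketched standalone (a simplified illustration, not the HBase FuzzyRowFilter class; mask convention here: 0 = byte must match, 1 = byte may be anything):

```java
// Simplified standalone sketch of FuzzyRowFilter-style matching (not the
// HBase class). Mask convention: 0 = byte is fixed, 1 = byte may be anything.
public class FuzzyMatch {
    static boolean matches(byte[] row, byte[] pattern, byte[] mask) {
        if (row.length < pattern.length) return false;
        for (int i = 0; i < pattern.length; i++) {
            // Only fixed positions constrain the row; fuzzy positions are free.
            if (mask[i] == 0 && row[i] != pattern[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] pattern = "????_0004".getBytes();
        byte[] mask    = {1, 1, 1, 1, 0, 0, 0, 0, 0};  // userId part is fuzzy
        System.out.println(matches("ab12_0004".getBytes(), pattern, mask)); // true
        System.out.println(matches("ab12_0005".getBytes(), pattern, mask)); // false
    }
}
```

The ranges feature requested above would generalize the equality check at fixed positions to a lower/upper bound comparison, while keeping the fast-forward behavior that distinguishes this filter from a regex row filter.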
[jira] [Updated] (HBASE-8309) TableLockManager: allow lock timeout to be set at the individual lock level
[ https://issues.apache.org/jira/browse/HBASE-8309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8309: - Fix Version/s: (was: 0.99.0) 2.0.0 TableLockManager: allow lock timeout to be set at the individual lock level --- Key: HBASE-8309 URL: https://issues.apache.org/jira/browse/HBASE-8309 Project: HBase Issue Type: Improvement Components: master Reporter: Jerry He Assignee: Jerry He Priority: Minor Fix For: 2.0.0 Attachments: HBASE-8309-v2.patch, HBASE-8309-v3.patch, HBASE-8309.patch Currently the lock timeout values and defaults are set and shared at the TableLockManager level. One TableLockManager is shared on the master. We should allow the lock timeout to be set at individual lock level when we instantiate the lock. Components using the locks may have different timeout preferences. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-7579) HTableDescriptor equals method fails if results are returned in a different order
[ https://issues.apache.org/jira/browse/HBASE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7579: - Status: Open (was: Patch Available) HTableDescriptor equals method fails if results are returned in a different order - Key: HBASE-7579 URL: https://issues.apache.org/jira/browse/HBASE-7579 Project: HBase Issue Type: Bug Components: Admin Affects Versions: 0.95.0, 0.94.6 Reporter: Aleksandr Shulman Assignee: Aleksandr Shulman Priority: Minor Fix For: 2.0.0 Attachments: HBASE-7579-0.94.patch, HBASE-7579-v1.patch, HBASE-7579-v2.patch, HBASE-7579-v3.patch, HBASE-7579-v4.patch, HBASE-7579-v5.patch HTableDescriptor's compareTo function compares a set of HColumnDescriptors against another set of HColumnDescriptors. It iterates through both, relying on the fact that they will be in the same order. In my testing, I may have seen this issue come up, so I decided to fix it. It's a straightforward fix. I convert the sets into a hashset for O(1) lookups (at least in theory), then I check that all items in the first set are found in the second. Since the sizes are the same, we know that if all elements showed up in the second set, then they must be equal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-7579) HTableDescriptor equals method fails if results are returned in a different order
[ https://issues.apache.org/jira/browse/HBASE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7579: - Fix Version/s: (was: 0.99.0) 2.0.0 HTableDescriptor equals method fails if results are returned in a different order - Key: HBASE-7579 URL: https://issues.apache.org/jira/browse/HBASE-7579 Project: HBase Issue Type: Bug Components: Admin Affects Versions: 0.94.6, 0.95.0 Reporter: Aleksandr Shulman Assignee: Aleksandr Shulman Priority: Minor Fix For: 2.0.0 Attachments: HBASE-7579-0.94.patch, HBASE-7579-v1.patch, HBASE-7579-v2.patch, HBASE-7579-v3.patch, HBASE-7579-v4.patch, HBASE-7579-v5.patch HTableDescriptor's compareTo function compares a set of HColumnDescriptors against another set of HColumnDescriptors. It iterates through both, relying on the fact that they will be in the same order. In my testing, I may have seen this issue come up, so I decided to fix it. It's a straightforward fix. I convert the sets into a hashset for O(1) lookups (at least in theory), then I check that all items in the first set are found in the second. Since the sizes are the same, we know that if all elements showed up in the second set, then they must be equal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
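The order-independent comparison described in the report can be sketched in plain Java. Strings stand in for HColumnDescriptor here; this illustrates the approach (size check plus HashSet containment), not the actual HTableDescriptor patch:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Sketch: compare two sets of column-family descriptors independent of
// iteration order. Inputs are assumed duplicate-free (they model sets).
public class SetCompare {
    public static boolean sameFamilies(Collection<String> a, Collection<String> b) {
        if (a.size() != b.size()) return false;      // sizes must agree
        Set<String> lookup = new HashSet<>(b);       // O(1) membership checks
        for (String fam : a) {
            if (!lookup.contains(fam)) return false; // missing from the other set
        }
        // Equal sizes + full containment => equal as sets (no duplicates assumed).
        return true;
    }
}
```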
[jira] [Commented] (HBASE-11445) TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky
[ https://issues.apache.org/jira/browse/HBASE-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126670#comment-14126670 ] Hudson commented on HBASE-11445: FAILURE: Integrated in HBase-TRUNK #5479 (See [https://builds.apache.org/job/HBase-TRUNK/5479/]) HBASE-11445 TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky (Jeffrey Zhong) (enis: rev 62d1ae12c29a154e128625a4c82c1eef83b4b753) * hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky -- Key: HBASE-11445 URL: https://issues.apache.org/jira/browse/HBASE-11445 Project: HBase Issue Type: Bug Components: snapshots Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: failure.txt, hbase-11445.patch Recently there is a failure from Jenkins build:https://builds.apache.org/job/HBase-0.98/364/testReport/junit/org.apache.hadoop.hbase.procedure/TestZKProcedure/testMultiCohortWithMemberTimeoutDuringPrepare/. Below are related log message and Member: 'one' joining twice: {noformat} 2014-06-29 19:26:34,101 DEBUG [member: 'three' subprocedure-pool11-thread-1] procedure.ZKProcedureMemberRpcs(237): Member: 'one' joining acquired barrier for procedure (op) in zk 2014-06-29 19:26:34,101 DEBUG [member: 'one' subprocedure-pool9-thread-1] procedure.Subprocedure(162): Subprocedure 'op' locally acquired 2014-06-29 19:26:34,101 DEBUG [member: 'one' subprocedure-pool9-thread-1] procedure.ZKProcedureMemberRpcs(237): Member: 'one' joining acquired barrier for procedure (op) in zk {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser
[ https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7115: - Fix Version/s: (was: 0.99.0) 0.99.1 [shell] Provide a way to register custom filters with the Filter Language Parser Key: HBASE-7115 URL: https://issues.apache.org/jira/browse/HBASE-7115 Project: HBase Issue Type: Improvement Components: Filters, shell Affects Versions: 0.95.2 Reporter: Aditya Kishore Assignee: Aditya Kishore Fix For: 0.99.1 Attachments: HBASE-7115_trunk.patch, HBASE-7115_trunk.patch, HBASE-7115_trunk_v2.patch HBASE-5428 added this capability to the thrift interface, but the configuration parameter name is thrift specific. This patch introduces a more generic parameter, hbase.user.filters, with which user-defined custom filters can be specified in the configuration and loaded in any client that needs to use the filter language parser. The patch then uses this new parameter to register any user-specified filters while invoking the HBase shell. Example usage: Let's say I have written a couple of custom filters with class names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to use them from the HBase shell using the filter language. To do that, I would add the following configuration to {{hbase-site.xml}}:
{code}
<property>
  <name>hbase.user.filters</name>
  <value>SuperDuperFilter:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,SilverBulletFilter:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>
</property>
{code}
Once this is configured, I can launch the HBase shell and use these filters in my {{get}} or {{scan}} just the way I would use a built-in filter.
{code}
hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND SilverBulletFilter(42)}
ROW                  COLUMN+CELL
 status              column=cf:a, timestamp=30438552, value=world_peace
1 row(s) in 0. seconds
{code}
To use this feature in any client, the client needs to make the following function call as part of its initialization.
{code}
ParseFilter.registerUserFilters(configuration);
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
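Assuming the `Alias:fully.qualified.ClassName` pairs shown above, the hbase.user.filters value could be parsed roughly as follows. This is a hypothetical helper written for illustration, not the actual ParseFilter code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: turn "Alias:fq.Class,Alias2:fq.Class2" (the hbase.user.filters
// value format described above) into an alias -> class-name map.
public class UserFilterConfig {
    public static Map<String, String> parse(String value) {
        Map<String, String> filters = new LinkedHashMap<>();
        for (String entry : value.split(",")) {
            int colon = entry.indexOf(':');          // alias before ':', class after
            if (colon < 0) continue;                 // skip malformed entries
            filters.put(entry.substring(0, colon).trim(),
                        entry.substring(colon + 1).trim());
        }
        return filters;
    }
}
```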
[jira] [Updated] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
[ https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9003: - Fix Version/s: (was: 0.99.0) 0.99.1 TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar - Key: HBASE-9003 URL: https://issues.apache.org/jira/browse/HBASE-9003 Project: HBase Issue Type: Bug Components: mapreduce Reporter: Esteban Gutierrez Assignee: Esteban Gutierrez Fix For: 2.0.0, 0.99.1 Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, HBASE-9003.v2.patch This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. However, {{getJar()}} uses File.createTempFile() to create a temporary file under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar and its contents are not purged after the JVM is destroyed. Since most configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar files get purged by {{tmpwatch}} or a similar tool, but boxes that have {{hadoop.tmp.dir}} pointing to a different location not monitored by {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. Since {{JarFinder#getJar}} is not a public API of Hadoop (see [~tucu00] comment on HADOOP-9737), we shouldn't use it in {{TableMapReduceUtil}}, in order to avoid these issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
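As a small illustration of the leak described above: a file created with File.createTempFile() is never removed unless something sweeps its directory, and registering deleteOnExit() is one best-effort mitigation. This sketch is illustrative only and is not the HBASE-9003 patch:

```java
import java.io.File;
import java.io.IOException;

// Sketch of the described leak and one mitigation: temp jars created in a
// directory that no tmpwatch-like tool monitors will pile up, so register
// best-effort cleanup (or manage a dedicated directory that gets swept).
public class TempJarDemo {
    public static File createManagedTempJar(File dir) throws IOException {
        dir.mkdirs();                                             // ensure parent exists
        File jar = File.createTempFile("dependency-", ".jar", dir);
        jar.deleteOnExit();   // best-effort cleanup on clean JVM exit
        return jar;
    }
}
```

deleteOnExit() only covers a clean shutdown; a crashed JVM still leaves the file behind, which is why the issue argues for not relying on JarFinder's temp-file behavior at all.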
[jira] [Commented] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser
[ https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126676#comment-14126676 ] Hadoop QA commented on HBASE-7115: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12581929/HBASE-7115_trunk_v2.patch against trunk revision . ATTACHMENT ID: 12581929 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10782//console This message is automatically generated. [shell] Provide a way to register custom filters with the Filter Language Parser Key: HBASE-7115 URL: https://issues.apache.org/jira/browse/HBASE-7115 Project: HBase Issue Type: Improvement Components: Filters, shell Affects Versions: 0.95.2 Reporter: Aditya Kishore Assignee: Aditya Kishore Fix For: 0.99.1 Attachments: HBASE-7115_trunk.patch, HBASE-7115_trunk.patch, HBASE-7115_trunk_v2.patch HBASE-5428 added this capability to thrift interface but the configuration parameter name is thrift specific. This patch introduces a more generic parameter hbase.user.filters using which the user defined custom filters can be specified in the configuration and loaded in any client that needs to use the filter language parser. The patch then uses this new parameter to register any user specified filters while invoking the HBase shell. 
Example usage: Let's say I have written a couple of custom filters with class names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to use them from the HBase shell using the filter language. To do that, I would add the following configuration to {{hbase-site.xml}}:
{code}
<property>
  <name>hbase.user.filters</name>
  <value>SuperDuperFilter:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,SilverBulletFilter:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>
</property>
{code}
Once this is configured, I can launch the HBase shell and use these filters in my {{get}} or {{scan}} just the way I would use a built-in filter.
{code}
hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND SilverBulletFilter(42)}
ROW                  COLUMN+CELL
 status              column=cf:a, timestamp=30438552, value=world_peace
1 row(s) in 0. seconds
{code}
To use this feature in any client, the client needs to make the following function call as part of its initialization.
{code}
ParseFilter.registerUserFilters(configuration);
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
[ https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9003: - Fix Version/s: 2.0.0 TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar - Key: HBASE-9003 URL: https://issues.apache.org/jira/browse/HBASE-9003 Project: HBase Issue Type: Bug Components: mapreduce Reporter: Esteban Gutierrez Assignee: Esteban Gutierrez Fix For: 2.0.0, 0.99.1 Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, HBASE-9003.v2.patch This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. However, {{getJar()}} uses File.createTempFile() to create a temporary file under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar and its contents are not purged after the JVM is destroyed. Since most configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar files get purged by {{tmpwatch}} or a similar tool, but boxes that have {{hadoop.tmp.dir}} pointing to a different location not monitored by {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. Since {{JarFinder#getJar}} is not a public API of Hadoop (see [~tucu00] comment on HADOOP-9737), we shouldn't use it in {{TableMapReduceUtil}}, in order to avoid these issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
[ https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126675#comment-14126675 ] Enis Soztutar commented on HBASE-9003: -- [~ndimiduk], [~saint@gmail.com] do you guys want to commit this? TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar - Key: HBASE-9003 URL: https://issues.apache.org/jira/browse/HBASE-9003 Project: HBase Issue Type: Bug Components: mapreduce Reporter: Esteban Gutierrez Assignee: Esteban Gutierrez Fix For: 2.0.0, 0.99.1 Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, HBASE-9003.v2.patch This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. However, {{getJar()}} uses File.createTempFile() to create a temporary file under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar and its contents are not purged after the JVM is destroyed. Since most configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar files get purged by {{tmpwatch}} or a similar tool, but boxes that have {{hadoop.tmp.dir}} pointing to a different location not monitored by {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. Since {{JarFinder#getJar}} is not a public API of Hadoop (see [~tucu00] comment on HADOOP-9737), we shouldn't use it in {{TableMapReduceUtil}}, in order to avoid these issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-8942) DFS errors during a read operation (get/scan), may cause write outliers
[ https://issues.apache.org/jira/browse/HBASE-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8942: - Fix Version/s: (was: 0.99.0) 0.99.1 DFS errors during a read operation (get/scan), may cause write outliers --- Key: HBASE-8942 URL: https://issues.apache.org/jira/browse/HBASE-8942 Project: HBase Issue Type: Bug Affects Versions: 0.89-fb, 0.95.2 Reporter: Amitanand Aiyer Assignee: Amitanand Aiyer Priority: Minor Fix For: 0.89-fb, 0.99.1 Attachments: 8942.094.txt, 8942.096.txt, HBase-8942.txt This is a similar issue as discussed in HBASE-8228. 1) A scanner holds the Store.ReadLock() while opening the store files and encounters errors; thus, it takes a long time to finish. 2) A flush is completed in the meanwhile. It needs the write lock to commit() and update the scanners, and hence ends up waiting. 3+) All Puts (and also Gets) to the CF, which will need a read lock, will have to wait for 1) and 2) to complete, thus blocking updates to the system for the duration of the DFS timeout. Fix: open store files outside the read lock. getScanners() already tries to do this optimisation. However, Store.getScanner(), which calls this function through the StoreScanner constructor, redundantly tries to grab the readLock, causing the readLock to be held while the store files are being opened and seeked. We should get rid of the readLock() in Store.getScanner(); it is not required. The constructor for StoreScanner calls getScanners(xxx, xxx, xxx), which already has the required locking. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
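The locking fix described above amounts to keeping the slow I/O outside the read lock. A minimal sketch with java.util.concurrent.locks, where the class and method names are illustrative stand-ins rather than the actual Store/StoreScanner code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: do the slow (possibly failing) store-file open with no lock held,
// and take the read lock only around the brief shared-state work. A flush
// waiting on the write lock is then never blocked behind a DFS timeout.
public class OpenOutsideLock {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public String getScanner() {
        String files = openStoreFiles();       // slow I/O, no lock held
        lock.readLock().lock();
        try {
            return "scanner(" + files + ")";   // brief critical section only
        } finally {
            lock.readLock().unlock();
        }
    }

    private String openStoreFiles() { return "hfiles"; }  // stands in for DFS reads
}
```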
[jira] [Commented] (HBASE-11367) Pluggable replication endpoint
[ https://issues.apache.org/jira/browse/HBASE-11367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126679#comment-14126679 ] ramkrishna.s.vasudevan commented on HBASE-11367: There are a few changes to the ReplicationPeer and ReplicationPeers interfaces. Though they are marked private, is it OK to change the interfaces in the 0.98.7 release? In general, what would be the best policy to apply here? Maybe I can see if there is a different way to do it, but I am asking because after we backport it to 0.98.7, it should not end up with interface APIs different from 1.0. Pluggable replication endpoint -- Key: HBASE-11367 URL: https://issues.apache.org/jira/browse/HBASE-11367 Project: HBase Issue Type: Sub-task Reporter: Enis Soztutar Assignee: Enis Soztutar Priority: Blocker Fix For: 0.99.0, 2.0.0 Attachments: 0001-11367.patch, hbase-11367_v1.patch, hbase-11367_v2.patch, hbase-11367_v3.patch, hbase-11367_v4.patch, hbase-11367_v4.patch, hbase-11367_v5.patch We need a pluggable endpoint for replication for more flexibility. See the parent jira for more context. ReplicationSource tails the logs for each peer. This jira introduces ReplicationEndpoint, which is customizable per peer. ReplicationEndpoint is run in the same RS process and instantiated per replication peer per region server. Implementations of this interface handle the actual shipping of WAL edits to the remote cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-7108) Don't use legal family name for system folder at region level
[ https://issues.apache.org/jira/browse/HBASE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126678#comment-14126678 ] Hadoop QA commented on HBASE-7108: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12552359/HBASE-7108-v0.patch against trunk revision . ATTACHMENT ID: 12552359 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10780//console This message is automatically generated. Don't use legal family name for system folder at region level - Key: HBASE-7108 URL: https://issues.apache.org/jira/browse/HBASE-7108 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.92.2, 0.94.2, 0.95.2 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Fix For: 2.0.0 Attachments: HBASE-7108-v0.patch CHANGED, was: Don't allow recovered.edits as legal family name Region directories can contain folders called recovered.edits, related to log splitting. But there's nothing that prevents a user from creating a family with that name... HLog.RECOVERED_EDITS_DIR = recovered.edits; HRegion.MERGEDIR = merges; // fixed with HBASE-6158 SplitTransaction.SPLITDIR = splits; // fixed with HBASE-6158 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-10414) Distributed replay should re-encode WAL entries with its own RPC codec
[ https://issues.apache.org/jira/browse/HBASE-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10414: -- Fix Version/s: (was: 0.99.0) 2.0.0 Distributed replay should re-encode WAL entries with its own RPC codec -- Key: HBASE-10414 URL: https://issues.apache.org/jira/browse/HBASE-10414 Project: HBase Issue Type: Improvement Affects Versions: 0.98.0 Reporter: Andrew Purtell Fix For: 2.0.0 HBASE-10412 allows distributed replay to send WAL entries with tags intact between RegionServers by substituting a WALCodec directly for the RPC codec. We should instead have distributed replay handle WAL entries including tags with its own tag-aware RPC codec and drop the direct use of WALCodecs for that purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-9942) hbase Scanner specifications accepting wrong specifier and then after scan using correct specifier returning unexpected result
[ https://issues.apache.org/jira/browse/HBASE-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9942: - Fix Version/s: (was: 0.99.0) 2.0.0 hbase Scanner specifications accepting wrong specifier and then after scan using correct specifier returning unexpected result --- Key: HBASE-9942 URL: https://issues.apache.org/jira/browse/HBASE-9942 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.96.0, 0.94.13 Reporter: Deepak Sharma Priority: Minor Fix For: 2.0.0 Check the given scenario: 1. Log in to the hbase client: ./hbase shell 2. Create table 'tab1': hbase(main):001:0> create 'tab1' , 'fa1' 3. Put some 10 rows (row1 to row10) in table 'tab1' 4. Run the scan for table 'tab1' as follows: hbase(main):013:0> scan 'tab1' , { STARTROW => 'row4' , STOPROW => 'row9' } ROW COLUMN+CELL row4 column=fa1:col1, timestamp=1384164182738, value=value1 row5 column=fa1:col1, timestamp=1384164188396, value=value1 row6 column=fa1:col1, timestamp=1384164192395, value=value1 row7 column=fa1:col1, timestamp=1384164197693, value=value1 row8 column=fa1:col1, timestamp=1384164203237, value=value1 5 row(s) in 0.0540 seconds So the result was as expected: rows from 'row4' to 'row8' are displayed. 5.
Then run the scan using the wrong specifier ('=' instead of '=>'), which gives a wrong result: hbase(main):014:0> scan 'tab1' , { STARTROW = 'row4' , STOPROW = 'row9' } ROW COLUMN+CELL row1 column=fa1:col1, timestamp=1384164167838, value=value1 row10 column=fa1:col1, timestamp=1384164212615, value=value1 row2 column=fa1:col1, timestamp=1384164175337, value=value1 row3 column=fa1:col1, timestamp=1384164179068, value=value1 row4 column=fa1:col1, timestamp=1384164182738, value=value1 row5 column=fa1:col1, timestamp=1384164188396, value=value1 row6 column=fa1:col1, timestamp=1384164192395, value=value1 row7 column=fa1:col1, timestamp=1384164197693, value=value1 row8 column=fa1:col1, timestamp=1384164203237, value=value1 row9 column=fa1:col1, timestamp=1384164208375, value=value1 10 row(s) in 0.0390 seconds 6. Now perform the correct scan query with the correct specifier (using '=>'): hbase(main):015:0> scan 'tab1' , { STARTROW => 'row4' , STOPROW => 'row9' } ROW COLUMN+CELL row1 column=fa1:col1, timestamp=1384164167838, value=value1 row10 column=fa1:col1, timestamp=1384164212615, value=value1 row2
[jira] [Commented] (HBASE-9542) Have Get and MultiGet do cellblocks, currently they are pb all the time
[ https://issues.apache.org/jira/browse/HBASE-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126680#comment-14126680 ] Enis Soztutar commented on HBASE-9542: -- [~stack] any more insight into this? Have Get and MultiGet do cellblocks, currently they are pb all the time --- Key: HBASE-9542 URL: https://issues.apache.org/jira/browse/HBASE-9542 Project: HBase Issue Type: Improvement Reporter: stack Priority: Critical Fix For: 0.99.0 Probably better if we cellblock Gets and MultiGets rather than pb the results. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-8533) HBaseAdmin does not ride over cluster restart
[ https://issues.apache.org/jira/browse/HBASE-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8533: - Fix Version/s: (was: 0.99.0) 0.99.1 HBaseAdmin does not ride over cluster restart - Key: HBASE-8533 URL: https://issues.apache.org/jira/browse/HBASE-8533 Project: HBase Issue Type: Improvement Components: Client Affects Versions: 0.98.0, 0.95.0 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.99.1 Attachments: 8533-0.95-v1.patch, 8533-trunk-v1.patch, hbase-8533-trunk-v0.patch For the RESTful servlet (org.apache.hadoop.hbase.rest.Main (0.94), org.apache.hadoop.hbase.rest.RESTServer (trunk)) on Jetty, we need to first explicitly start the service (% ./bin/hbase-daemon.sh start rest -p 8000) for applications to use. Here is a scenario: sometimes the HBase cluster is stopped/started for maintenance, but REST is a separate standalone process, which binds its HBaseAdmin in the constructor. An HBase stop/start causes this binding to be lost for the existing REST servlet. The REST servlet keeps trying the old, stale HBaseAdmin until, a long time later, an Unavailable is surfaced via an IOException caught in e.g. RootResource. Could we pair the HBase service with the HBase REST service via some start/stop options, since there seems to be no reason to keep the REST servlet process around after HBase is stopped? When HBase restarts, the original REST service cannot resume, because it is still bound to the old HBaseAdmin reference. So may we stop REST when HBase is stopped? And even if HBase was killed by accident, restarting HBase with the rest option could detect the old REST process, kill it, and start a new one. From this point of view, applications relying on the REST API in the above scenario could detect the failure immediately when setting up the HTTP connection, instead of wasting a long time failing back from an IOException with Unavailable from the REST servlet.
Putting the current options from the discussion history (from Andrew, Stack and Jean-Daniel) here: 1) create an HBaseAdmin on demand in the REST servlet instead of keeping a singleton instance (another possible enhancement for the HBase client: automatic reconnection of an open HBaseAdmin handle after a cluster bounce?); 2) pair the REST webapp with the hbase webui so REST is always on with the HBase service; 3) add an option for the REST service (such as HBASE_MANAGES_REST) in hbase-env.sh; with HBASE_MANAGES_REST set to true, the scripts will start/stop the REST server. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-8028) Append, Increment: Adding rollback support
[ https://issues.apache.org/jira/browse/HBASE-8028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8028: - Fix Version/s: (was: 0.99.0) 0.99.1 Append, Increment: Adding rollback support -- Key: HBASE-8028 URL: https://issues.apache.org/jira/browse/HBASE-8028 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.94.5 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Fix For: 0.99.1 Attachments: HBase-8028-v1.patch, HBase-8028-v2.patch, HBase-8028-with-Increments-v1.patch, HBase-8028-with-Increments-v2.patch In case there is an exception while doing the log sync, the memstore is not rolled back, while the mvcc is _always_ forwarded to the write entry created at the beginning of the operation. This may lead to scanners seeing results which are not synced to the fs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-8028) Append, Increment: Adding rollback support
[ https://issues.apache.org/jira/browse/HBASE-8028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126681#comment-14126681 ] Enis Soztutar commented on HBASE-8028: -- Ran into this in triage. Seems important. Append, Increment: Adding rollback support -- Key: HBASE-8028 URL: https://issues.apache.org/jira/browse/HBASE-8028 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.94.5 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Fix For: 0.99.1 Attachments: HBase-8028-v1.patch, HBase-8028-v2.patch, HBase-8028-with-Increments-v1.patch, HBase-8028-with-Increments-v2.patch In case there is an exception while doing the log sync, the memstore is not rolled back, while the mvcc is _always_ forwarded to the write entry created at the beginning of the operation. This may lead to scanners seeing results which are not synced to the fs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
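The missing rollback can be sketched as follows: apply the edit to the in-memory store, attempt the (simulated) log sync, and undo the in-memory change if the sync throws, so readers never see unsynced data. Illustrative only, not the actual HRegion/Append code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of rollback-on-sync-failure: a List stands in for the memstore
// and a boolean flag stands in for a failing WAL sync.
public class RollbackOnSyncFailure {
    private final List<String> memstore = new ArrayList<>();
    private final boolean syncFails;

    public RollbackOnSyncFailure(boolean syncFails) { this.syncFails = syncFails; }

    public boolean append(String edit) {
        memstore.add(edit);             // apply to the in-memory store first
        try {
            sync(edit);                 // then persist to the log
            return true;
        } catch (RuntimeException e) {
            memstore.remove(edit);      // roll the memstore back on failure
            return false;
        }
    }

    private void sync(String edit) {
        if (syncFails) throw new RuntimeException("log sync failed");
    }

    public int size() { return memstore.size(); }
}
```

The issue's subtlety is that in the real code the mvcc write entry is advanced regardless; a complete fix also has to avoid exposing the failed edit through mvcc, which this sketch does not model.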
[jira] [Updated] (HBASE-10919) [VisibilityController] ScanLabelGenerator using LDAP
[ https://issues.apache.org/jira/browse/HBASE-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10919: -- Fix Version/s: (was: 0.99.0) 0.99.1 [VisibilityController] ScanLabelGenerator using LDAP Key: HBASE-10919 URL: https://issues.apache.org/jira/browse/HBASE-10919 Project: HBase Issue Type: New Feature Reporter: Andrew Purtell Fix For: 0.98.7, 0.99.1 Attachments: slides-10919.pdf A ScanLabelGenerator that queries an external service, using the LDAP protocol, for a set of attributes corresponding to the principal represented by the request UGI, and converts any returned in the response to additional auths in the effective set. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-9005) Improve documentation around KEEP_DELETED_CELLS, time range scans, and delete markers
[ https://issues.apache.org/jira/browse/HBASE-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9005: - Fix Version/s: (was: 0.99.0) 0.99.1 Improve documentation around KEEP_DELETED_CELLS, time range scans, and delete markers - Key: HBASE-9005 URL: https://issues.apache.org/jira/browse/HBASE-9005 Project: HBase Issue Type: Bug Components: documentation Reporter: Lars Hofhansl Assignee: Jonathan Hsieh Priority: Minor Fix For: 0.99.1 Attachments: 9005.txt, HBASE-9005-1.patch Without KEEP_DELETED_CELLS, all time range queries are broken if their range covers a delete marker. As some internal discussions with colleagues showed, this feature is not well understood or documented. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10933: -- Fix Version/s: (was: 0.99.0) 2.0.0 hbck -fixHdfsOrphans is not working properly it throws null pointer exception - Key: HBASE-10933 URL: https://issues.apache.org/jira/browse/HBASE-10933 Project: HBase Issue Type: Bug Components: hbck Affects Versions: 0.94.16, 0.98.2 Reporter: Deepak Sharma Assignee: Kashif J S Priority: Critical Fix For: 2.0.0 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch, HBASE-10933-trunk-v1.patch, HBASE-10933-trunk-v2.patch, TestResults-0.94.txt, TestResults-trunk.txt If the regioninfo file does not exist for an hbase region, then running hbck repair or hbck -fixHdfsOrphans does not resolve the problem; it throws a null pointer exception. {code} 2014-04-08 20:11:49,750 INFO [main] util.HBaseFsck (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs dir: hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950 java.lang.NullPointerException at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939) at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497) at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471) at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591) at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369) at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447) at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769) at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587) at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244) at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84) at junit.framework.TestCase.runBare(TestCase.java:132) at
junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:243) at junit.framework.TestSuite.run(TestSuite.java:238) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) {code} problem i got it is because since in HbaseFsck class {code} private void adoptHdfsOrphan(HbckInfo hi) {code} we are intializing tableinfo using SortedMapString, TableInfo tablesInfo object {code} TableInfo tableInfo = tablesInfo.get(tableName); {code} but in private SortedMapString, TableInfo loadHdfsRegionInfos() {code} for (HbckInfo hbi: hbckInfos) { if (hbi.getHdfsHRI() == null) { // was an orphan continue; } {code} we have check if a region is orphan then that table will can not be added in SortedMapString, TableInfo tablesInfo so later while using this we get null pointer exception -- This message was sent by Atlassian JIRA (v6.3.4#6332)
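The lookup-returns-null failure described in this report can be sketched with simplified stand-in types (the class, map, and method names below are illustrative, not the actual HBaseFsck internals):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative sketch: orphan regions are skipped while building tablesInfo,
// so a later unguarded lookup for an orphan's table yields null.
public class OrphanLookupSketch {

    // Stand-in for loadHdfsRegionInfos(): a table whose only region is an
    // orphan (hdfsHRI == null in the real code) never gets a TableInfo entry.
    public static SortedMap<String, String> loadTablesInfo() {
        SortedMap<String, String> tablesInfo = new TreeMap<>();
        // "TestHdfsOrphans1" has only an orphan region -> skipped (continue)
        tablesInfo.put("normalTable", "TableInfo(normalTable)");
        return tablesInfo;
    }

    // Stand-in for adoptHdfsOrphan(): the null check is what a fix needs to
    // add; the original code dereferenced the lookup result and threw an NPE.
    public static String adoptHdfsOrphan(SortedMap<String, String> tablesInfo,
                                         String tableName) {
        String tableInfo = tablesInfo.get(tableName); // null for orphan-only tables
        if (tableInfo == null) {
            return "no TableInfo for " + tableName + "; create one before adopting";
        }
        return "adopted into " + tableInfo;
    }

    public static void main(String[] args) {
        SortedMap<String, String> tablesInfo = loadTablesInfo();
        System.out.println(adoptHdfsOrphan(tablesInfo, "TestHdfsOrphans1"));
        System.out.println(adoptHdfsOrphan(tablesInfo, "normalTable"));
    }
}
```

The essential point is that any code path reaching the map must tolerate a missing entry, since loadHdfsRegionInfos() deliberately skips orphans.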
[jira] [Updated] (HBASE-10800) Use CellComparator instead of KVComparator
[ https://issues.apache.org/jira/browse/HBASE-10800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10800: -- Fix Version/s: (was: 0.99.0) 0.99.1 Use CellComparator instead of KVComparator -- Key: HBASE-10800 URL: https://issues.apache.org/jira/browse/HBASE-10800 Project: HBase Issue Type: Sub-task Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.99.1 Attachments: HBASE-10800_1.patch, HBASE-10800_2.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-10944) Remove all kv.getBuffer() and kv.getRow() references existing in the code
[ https://issues.apache.org/jira/browse/HBASE-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10944: -- Fix Version/s: (was: 0.99.0) 0.99.1 Remove all kv.getBuffer() and kv.getRow() references existing in the code - Key: HBASE-10944 URL: https://issues.apache.org/jira/browse/HBASE-10944 Project: HBase Issue Type: Sub-task Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.99.1 kv.getRow() and kv.getBuffer() are still used in places to form key byte[] and row byte[]. Removing all such instances, including test cases, will make the usage of Cell complete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-10974) Improve DBEs read performance by avoiding byte array deep copies for key[] and value[]
[ https://issues.apache.org/jira/browse/HBASE-10974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10974: -- Fix Version/s: (was: 0.99.0) 0.99.1 Improve DBEs read performance by avoiding byte array deep copies for key[] and value[] -- Key: HBASE-10974 URL: https://issues.apache.org/jira/browse/HBASE-10974 Project: HBase Issue Type: Improvement Components: Scanners Affects Versions: 0.99.0 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.99.1 Attachments: HBASE-10974_1.patch As part of HBASE-10801, we tried to reduce copying of the value byte[] when forming the KV from the DBEs. The keys still required copying, which restricted our use of Cells and always forced a copy. The idea here is to replace the key byte[] with a ByteBuffer holding a consecutive stream of keys (currently the same byte[] is reused, hence the copy), and to track each key in that ByteBuffer by offset and length. The copy from the encoded format to the normal Key format is definitely needed and can't be avoided, but we can avoid the deep copy of the bytes to form a KV and thus use Cells effectively. Working on a patch, will post it soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
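As a rough illustration of the proposal above, here is a minimal sketch of tracking keys in one shared ByteBuffer by offset and length instead of deep-copying each key into its own byte[]; all class and method names here are hypothetical, not the actual DBE code:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: decoded keys live in one consecutive ByteBuffer
// stream and are referenced by (offset, length) handles, so no per-key
// byte[] deep copy is needed while scanning.
public class KeyStreamSketch {
    private final ByteBuffer keys = ByteBuffer.allocate(1024); // shared key stream

    /** Appends a key to the stream and returns its {offset, length} handle. */
    public int[] appendKey(byte[] key) {
        int offset = keys.position();
        keys.put(key);
        return new int[] { offset, key.length };
    }

    /** Materializes a key through its handle -- the only place a copy happens. */
    public byte[] keyAt(int[] handle) {
        byte[] out = new byte[handle[1]];
        // Absolute gets: no position change, reads straight from the stream.
        for (int i = 0; i < handle[1]; i++) {
            out[i] = keys.get(handle[0] + i);
        }
        return out;
    }

    public static void main(String[] args) {
        KeyStreamSketch s = new KeyStreamSketch();
        int[] h1 = s.appendKey("row1/cf:q1".getBytes(StandardCharsets.UTF_8));
        int[] h2 = s.appendKey("row2/cf:q1".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(s.keyAt(h2), StandardCharsets.UTF_8)); // prints "row2/cf:q1"
    }
}
```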
[jira] [Updated] (HBASE-7320) Remove KeyValue.getBuffer()
[ https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7320: - Fix Version/s: (was: 0.99.0) 0.99.1 Remove KeyValue.getBuffer() --- Key: HBASE-7320 URL: https://issues.apache.org/jira/browse/HBASE-7320 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: stack Fix For: 0.99.1 Attachments: 7320-simple.txt In many places this is a simple task of just replacing the method name. There are, however, quite a few places where we assume that:
# the entire KV is backed by a single byte array
# the KV's key portion is backed by a single byte array
Some of those can easily be fixed; others will need their own jiras. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-8028) Append, Increment: Adding rollback support
[ https://issues.apache.org/jira/browse/HBASE-8028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126683#comment-14126683 ] Hadoop QA commented on HBASE-8028: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12576251/HBase-8028-with-Increments-v2.patch against trunk revision . ATTACHMENT ID: 12576251 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 5 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10787//console This message is automatically generated. Append, Increment: Adding rollback support -- Key: HBASE-8028 URL: https://issues.apache.org/jira/browse/HBASE-8028 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.94.5 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Fix For: 0.99.1 Attachments: HBase-8028-v1.patch, HBase-8028-v2.patch, HBase-8028-with-Increments-v1.patch, HBase-8028-with-Increments-v2.patch In case there is an exception while doing the log-sync, the memstore is not rolled back, while the mvcc is _always_ forwarded to the write entry created at the beginning of the operation. This may lead to scanners seeing results which are not synced to the fs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-8533) HBaseAdmin does not ride over cluster restart
[ https://issues.apache.org/jira/browse/HBASE-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126684#comment-14126684 ] Hadoop QA commented on HBASE-8533: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12584868/8533-0.95-v1.patch against trunk revision . ATTACHMENT ID: 12584868 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10784//console This message is automatically generated. HBaseAdmin does not ride over cluster restart - Key: HBASE-8533 URL: https://issues.apache.org/jira/browse/HBASE-8533 Project: HBase Issue Type: Improvement Components: Client Affects Versions: 0.98.0, 0.95.0 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.99.1 Attachments: 8533-0.95-v1.patch, 8533-trunk-v1.patch, hbase-8533-trunk-v0.patch For the Restful servlet (org.apache.hadoop.hbase.rest.Main (0.94), org.apache.hadoop.hbase.rest.RESTServer (trunk)) on Jetty, we need to first explicitly start the service (% ./bin/hbase-daemon.sh start rest -p 8000 ) for the application to run. Here is a scenario: sometimes the HBase cluster is stopped/started for maintenance, but rest is a separate standalone process, which binds the HBaseAdmin in its constructor. An HBase stop/start causes this binding to be lost for the existing rest servlet. The rest servlet keeps trying the old bound HBaseAdmin until, a long time later, an Unavailable is surfaced via an IOException caught in, for example, RootResource.
Could we pair the HBase service with the HBase rest service via some start/stop options, since there seems to be no reason to keep the rest servlet process running after HBase has stopped? When HBase restarts, the original rest service cannot resume binding to the new HBase service via its old HBaseAdmin reference. So could we stop rest when hbase is stopped, and, even if hbase was killed by accident, have a restart of hbase with the rest option detect the old rest process, kill it, and start a new one to bind? From this point of view, applications relying on the rest api in the previous scenario could detect the outage immediately when setting up the http connection session, instead of wasting a long time failing back from an IOException with Unavailable from the rest servlet. Putting the current options from the discussion history here, from Andrew, Stack and Jean-Daniel: 1) create an HBaseAdmin on demand in the rest servlet instead of keeping a singleton instance (another possible enhancement for the HBase client: automatic reconnection of an open HBaseAdmin handle after a cluster bounce); 2) pair the rest webapp with the hbase webui so rest is always on with the HBase service; 3) add an option for the rest service (such as HBASE_MANAGES_REST) in hbase-env.sh; with HBASE_MANAGES_REST set to true, the scripts will start/stop the REST server. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
[ https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126685#comment-14126685 ] Hadoop QA commented on HBASE-9003: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12642563/HBASE-9003.v2.patch against trunk revision . ATTACHMENT ID: 12642563 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 10 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10785//console This message is automatically generated. TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar - Key: HBASE-9003 URL: https://issues.apache.org/jira/browse/HBASE-9003 Project: HBase Issue Type: Bug Components: mapreduce Reporter: Esteban Gutierrez Assignee: Esteban Gutierrez Fix For: 2.0.0, 0.99.1 Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, HBASE-9003.v2.patch This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. However {{getJar()}} uses File.createTempFile() to create a temporary file under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar and its contents are not purged after the JVM is destroyed. Since most configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar files get purged by {{tmpwatch}} or a similar tool, but boxes that have {{hadoop.tmp.dir}} pointing to a different location not monitored by {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. Since {{JarFinder#getJar}} is not a public API from Hadoop (see [~tucu00] comment on HADOOP-9737), we shouldn't use it as part of {{TableMapReduceUtil}}, in order to avoid this kind of issue.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-10646) Enable security features by default for 1.0
[ https://issues.apache.org/jira/browse/HBASE-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10646: -- Fix Version/s: 0.99.1 Enable security features by default for 1.0 --- Key: HBASE-10646 URL: https://issues.apache.org/jira/browse/HBASE-10646 Project: HBase Issue Type: Task Affects Versions: 0.99.0 Reporter: Andrew Purtell Fix For: 0.99.1 As discussed in the last PMC meeting, we should enable security features by default in 1.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11127) Move security features into core
[ https://issues.apache.org/jira/browse/HBASE-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-11127: -- Fix Version/s: 0.99.1 Move security features into core Key: HBASE-11127 URL: https://issues.apache.org/jira/browse/HBASE-11127 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell Fix For: 0.99.1 HBASE-11126 mentions concurrency issues we are running into as the security code increases in sophistication, due to current placement of coprocessor hooks, and proposes a solution to those issues with the expectation that security code remains outside of core in coprocessors. However, as an alternative we could consider moving all AccessController and VisibilityController related code into core. Worth discussing? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11805) KeyValue to Cell Convert in WALEdit APIs
[ https://issues.apache.org/jira/browse/HBASE-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126692#comment-14126692 ] Lars Hofhansl commented on HBASE-11805: --- Looks like this potentially breaks Phoenix (and maybe other things) with exceptions like these:
{code}
java.lang.NullPointerException
at org.apache.hadoop.hbase.util.Bytes.toShort(Bytes.java:845)
at org.apache.hadoop.hbase.util.Bytes.toShort(Bytes.java:832)
at org.apache.hadoop.hbase.KeyValue.getRowLength(KeyValue.java:1303)
at org.apache.hadoop.hbase.KeyValue.getFamilyOffset(KeyValue.java:1319)
at org.apache.hadoop.hbase.KeyValue.getFamilyLength(KeyValue.java:1334)
at org.apache.hadoop.hbase.CellUtil.cloneFamily(CellUtil.java:70)
at org.apache.hadoop.hbase.KeyValue.getFamily(KeyValue.java:1559)
at org.apache.hadoop.hbase.replication.regionserver.Replication.scopeWALEdits(Replication.java:249)
at org.apache.hadoop.hbase.replication.regionserver.Replication.visitLogEntryBeforeWrite(Replication.java:233)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doWrite(FSHLog.java:1486)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:1023)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.appendNoSync(FSHLog.java:1054)
at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2553)
{code}
KeyValue to Cell Convert in WALEdit APIs Key: HBASE-11805 URL: https://issues.apache.org/jira/browse/HBASE-11805 Project: HBase Issue Type: Improvement Components: wal Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: HBASE-11805.patch, HBASE-11805_0.98.patch, HBASE-11805_0.98_V2.patch, HBASE-11805_0.99.patch, HBASE-11805_V2.patch, HBASE-11805_V3.patch In almost all other main interface classes/APIs we have changed KeyValue to Cell, but it is missing in WALEdit.
This is marked public for Replication (well, it should be for CPs also). These 2 APIs deal with KVs: add(KeyValue kv) and ArrayList<KeyValue> getKeyValues(). Suggest we deprecate them and, for 0.98, add add(Cell kv) and List<Cell> getCells(), and just replace from 1.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
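The suggested migration path can be sketched with stand-in types (WALEditSketch and its nested Cell/KeyValue are illustrative, not HBase's real classes): keep the old KeyValue-typed method as a deprecated shim that delegates to the new Cell-typed one, so 0.98 callers keep compiling:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative deprecation shim, not the actual WALEdit implementation.
public class WALEditSketch {
    public interface Cell {}
    public static class KeyValue implements Cell {}

    private final List<Cell> cells = new ArrayList<>();

    /** New API: accepts any Cell implementation. */
    public WALEditSketch add(Cell cell) {
        cells.add(cell);
        return this;
    }

    /** @deprecated use {@link #add(Cell)}; kept so 0.98 callers keep compiling. */
    @Deprecated
    public WALEditSketch add(KeyValue kv) {
        return add((Cell) kv); // delegate to the Cell-typed method
    }

    /** New API replacing getKeyValues(). */
    public List<Cell> getCells() {
        return cells;
    }

    public static void main(String[] args) {
        WALEditSketch edit = new WALEditSketch();
        edit.add(new KeyValue()); // legacy call path still works via the shim
        System.out.println(edit.getCells().size()); // prints 1
    }
}
```

Overload resolution picks the deprecated KeyValue variant for statically-typed KeyValue arguments, so existing code compiles (with a warning) and both paths end up in the same Cell list.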
[jira] [Commented] (HBASE-11805) KeyValue to Cell Convert in WALEdit APIs
[ https://issues.apache.org/jira/browse/HBASE-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126701#comment-14126701 ] Lars Hofhansl commented on HBASE-11805: --- Tried with the current 0.98 build compared to a build of the 0.98.6RC2 tag. The former build shows this, the latter does not. [~anoop.hbase], [~apurtell], FYI. KeyValue to Cell Convert in WALEdit APIs Key: HBASE-11805 URL: https://issues.apache.org/jira/browse/HBASE-11805 Project: HBase Issue Type: Improvement Components: wal Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: HBASE-11805.patch, HBASE-11805_0.98.patch, HBASE-11805_0.98_V2.patch, HBASE-11805_0.99.patch, HBASE-11805_V2.patch, HBASE-11805_V3.patch In almost all other main interface classes/APIs we have changed KeyValue to Cell, but it is missing in WALEdit. This is marked public for Replication (well, it should be for CPs also). These 2 APIs deal with KVs: add(KeyValue kv) and ArrayList<KeyValue> getKeyValues(). Suggest we deprecate them and, for 0.98, add add(Cell kv) and List<Cell> getCells(), and just replace from 1.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11445) TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky
[ https://issues.apache.org/jira/browse/HBASE-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126706#comment-14126706 ] Hudson commented on HBASE-11445: SUCCESS: Integrated in HBase-1.0 #163 (See [https://builds.apache.org/job/HBase-1.0/163/]) HBASE-11445 TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky (Jeffrey Zhong) (enis: rev f509f61a40952cd2185b8c35bdac32ca7d67b7d6) * hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky -- Key: HBASE-11445 URL: https://issues.apache.org/jira/browse/HBASE-11445 Project: HBase Issue Type: Bug Components: snapshots Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: failure.txt, hbase-11445.patch Recently there is a failure from Jenkins build:https://builds.apache.org/job/HBase-0.98/364/testReport/junit/org.apache.hadoop.hbase.procedure/TestZKProcedure/testMultiCohortWithMemberTimeoutDuringPrepare/. Below are related log message and Member: 'one' joining twice: {noformat} 2014-06-29 19:26:34,101 DEBUG [member: 'three' subprocedure-pool11-thread-1] procedure.ZKProcedureMemberRpcs(237): Member: 'one' joining acquired barrier for procedure (op) in zk 2014-06-29 19:26:34,101 DEBUG [member: 'one' subprocedure-pool9-thread-1] procedure.Subprocedure(162): Subprocedure 'op' locally acquired 2014-06-29 19:26:34,101 DEBUG [member: 'one' subprocedure-pool9-thread-1] procedure.ZKProcedureMemberRpcs(237): Member: 'one' joining acquired barrier for procedure (op) in zk {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-11919) Remove the deprecated pre/postGet CP hook
Anoop Sam John created HBASE-11919: -- Summary: Remove the deprecated pre/postGet CP hook Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 These were deprecated from 0.96. We have had 0.98, one more major version, after that. Suggest this can be removed from 0.99 on. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files. There we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-11919: --- Description: These hooks, dealing with List<KeyValue>, were deprecated since 0.96. We have had 0.98, one more major version, after that. Suggest this can be removed from 0.99 on. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files. There we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
was: These were deprecated from 0.96. We have had 0.98, one more major version, after that. Suggest this can be removed from 0.99 on. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files. There we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
Remove the deprecated pre/postGet CP hook - Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 These hooks, dealing with List<KeyValue>, were deprecated since 0.96. We have had 0.98, one more major version, after that. Suggest this can be removed from 0.99 on. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files. There we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
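A minimal sketch of why the legacy hook is costly, using stand-in types rather than HBase's real Cell/KeyValue classes: a DBE-backed cell can expose its value as a view on a shared block, but an ensureKeyValue-style conversion has to flatten every cell into a fresh byte[]:

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch (hypothetical types): counting the deep copies forced
// by converting view-backed cells into flat KeyValue-style byte arrays.
public class EnsureKeyValueCostSketch {
    public static int copies = 0; // number of deep copies performed

    /** A cell whose value is a (buffer, offset, length) view -- no copy held. */
    public static class ViewCell {
        public final byte[] block;
        public final int offset;
        public final int length;
        public ViewCell(byte[] block, int offset, int length) {
            this.block = block;
            this.offset = offset;
            this.length = length;
        }
    }

    /** Stand-in for a KeyValueUtil.ensureKeyValue-style call: forces a flat copy. */
    public static byte[] ensureFlat(ViewCell c) {
        copies++;
        byte[] flat = new byte[c.length];
        System.arraycopy(c.block, c.offset, flat, 0, c.length);
        return flat;
    }

    public static void main(String[] args) {
        byte[] block = "v1v2v3".getBytes(StandardCharsets.UTF_8); // shared decoded block
        ViewCell[] results = {
            new ViewCell(block, 0, 2), new ViewCell(block, 2, 2), new ViewCell(block, 4, 2)
        };
        for (ViewCell c : results) {
            ensureFlat(c); // legacy hook path: one copy per cell
        }
        System.out.println(copies + " copies"); // prints "3 copies"
    }
}
```

Removing the deprecated hook removes the forced conversion, letting view-backed cells flow through the read path copy-free.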
[jira] [Updated] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-11919: --- Attachment: HBASE-11919.patch Simple patch, just removing some code. [~enis] can we get this into 0.99.0 itself? Remove the deprecated pre/postGet CP hook - Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11919.patch These hooks, dealing with List<KeyValue>, were deprecated since 0.96. We have had 0.98, one more major version, after that. Suggest this can be removed from 0.99 on. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files. There we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-11919: --- Status: Patch Available (was: Open) Remove the deprecated pre/postGet CP hook - Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11919.patch These hooks, dealing with List<KeyValue>, were deprecated since 0.96. We have had 0.98, one more major version, after that. Suggest this can be removed from 0.99 on. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files. There we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11832) maven release plugin overrides command line arguments
[ https://issues.apache.org/jira/browse/HBASE-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126721#comment-14126721 ] Hudson commented on HBASE-11832: FAILURE: Integrated in HBase-TRUNK #5480 (See [https://builds.apache.org/job/HBase-TRUNK/5480/]) HBASE-11832 maven release plugin overrides command line arguments (Enoch Hsu) (enis: rev 72e664f54070374efc481ff65f6182a96d1da4b1) * pom.xml maven release plugin overrides command line arguments - Key: HBASE-11832 URL: https://issues.apache.org/jira/browse/HBASE-11832 Project: HBase Issue Type: Bug Affects Versions: 0.98.4 Reporter: Enoch Hsu Assignee: Enoch Hsu Priority: Minor Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11832.patch Inside the pom, under the maven-release-plugin, there is a configuration that defines what the release-plugin uses, like so:
{code}
<configuration>
  <!--You need this profile. It'll sign your artifacts. I'm not sure if this config. actually works though. I've been specifying -Papache-release on the command-line -->
  <releaseProfiles>apache-release</releaseProfiles>
  <!--This stops our running tests for each stage of maven release. But it builds the test jar. From SUREFIRE-172. -->
  <arguments>-Dmaven.test.skip.exec</arguments>
  <pomFileName>pom.xml</pomFileName>
</configuration>
{code}
The arguments are hardcoded in and will automatically override any arguments the user passes in from the command line. I propose to modify this to the following:
{code}
<arguments>-Dmaven.test.skip.exec ${arguments}</arguments>
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-9473) Change UI to list 'system tables' rather than 'catalog tables'.
[ https://issues.apache.org/jira/browse/HBASE-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126720#comment-14126720 ] Hudson commented on HBASE-9473: --- FAILURE: Integrated in HBase-TRUNK #5480 (See [https://builds.apache.org/job/HBase-TRUNK/5480/]) HBASE-9473 Change UI to list 'system tables' rather than 'catalog tables' (Stack) (enis: rev 71e6ff437792026f57178bf84d8929902633fe06) * hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon Change UI to list 'system tables' rather than 'catalog tables'. --- Key: HBASE-9473 URL: https://issues.apache.org/jira/browse/HBASE-9473 Project: HBase Issue Type: Bug Components: UI Reporter: stack Assignee: stack Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: 9473.txt Minor, one-line, bit of polishing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11692) Document how and why to do a manual region split
[ https://issues.apache.org/jira/browse/HBASE-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126725#comment-14126725 ] Hadoop QA commented on HBASE-11692: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12667358/HBASE-11692.patch against trunk revision . ATTACHMENT ID: 12667358 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+0 tests included{color}. The patch appears to be a documentation patch that doesn't require tests. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + <screen>hbase> create 'test_table', 'f1', SPLITS => ['a', 'e', 'i', 'o', 'u']</screen> +<screen>hbase org.apache.hadoop.hbase.util.RegionSplitter test_table HexStringSplit -c 10 -f f1</screen> + xlink:href=http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.SplitAlgorithm.html; + xlink:href=http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html; + xlink:href=http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html; {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.TestRegionRebalancing Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10783//console This message is automatically generated. 
Document how and why to do a manual region split Key: HBASE-11692 URL: https://issues.apache.org/jira/browse/HBASE-11692 Project: HBase Issue Type: Task Components: documentation Reporter: Misty Stanley-Jones Assignee: Misty Stanley-Jones Attachments: HBASE-11692.patch {quote} -- Forwarded message -- From: Liu, Ming (HPIT-GADSC) ming.l...@hp.com Date: Tue, Aug 5, 2014 at 11:28 PM Subject: Why hbase need manual split? To: u...@hbase.apache.org u...@hbase.apache.org Hi, all, As I understand, HBase will automatically split a region when the region is too big. So in what scenario, user needs to do a manual split? Could someone kindly give me some examples that user need to do the region split explicitly via HBase Shell or Java API? Thanks very much. Regards, Ming {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11805) KeyValue to Cell Convert in WALEdit APIs
[ https://issues.apache.org/jira/browse/HBASE-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126727#comment-14126727 ] Anoop Sam John commented on HBASE-11805: Checking this [~lhofhansl]. Looks strange that in a KeyValue object we have a null ref to the bytes array! KeyValue to Cell Convert in WALEdit APIs Key: HBASE-11805 URL: https://issues.apache.org/jira/browse/HBASE-11805 Project: HBase Issue Type: Improvement Components: wal Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: HBASE-11805.patch, HBASE-11805_0.98.patch, HBASE-11805_0.98_V2.patch, HBASE-11805_0.99.patch, HBASE-11805_V2.patch, HBASE-11805_V3.patch In almost all other main interface classes/APIs we have changed KeyValue to Cell, but it is missing in WALEdit. This is marked public for Replication (well, it should be for CP also). These 2 APIs deal with KVs: add(KeyValue kv) and ArrayList<KeyValue> getKeyValues(). Suggest deprecating them and adding, for 0.98: add(Cell kv) and List<Cell> getCells(). And just replace them from 1.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
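The API move proposed above can be sketched as a thin compatibility shim. This is a hypothetical, stripped-down illustration, not the actual patch: Cell, KeyValue, and WALEditSketch below are minimal stand-ins for the real HBase types.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the real HBase types (hypothetical).
interface Cell {}
class KeyValue implements Cell {}

class WALEditSketch {
    private final List<Cell> cells = new ArrayList<>();

    // New API: accepts any Cell implementation.
    WALEditSketch add(Cell c) {
        cells.add(c);
        return this;
    }

    // New API: exposes the cells of this edit.
    List<Cell> getCells() {
        return cells;
    }

    // Deprecated API: delegates to the Cell-based method.
    @Deprecated
    WALEditSketch add(KeyValue kv) {
        return add((Cell) kv);
    }

    // Deprecated API: the backing list now holds Cells, so this has to
    // build and return a fresh List<KeyValue> rather than the backing list.
    @Deprecated
    List<KeyValue> getKeyValues() {
        List<KeyValue> kvs = new ArrayList<>(cells.size());
        for (Cell c : cells) {
            if (c instanceof KeyValue) {
                kvs.add((KeyValue) c);
            }
        }
        return kvs;
    }
}
```

Note that under this shape getKeyValues() must hand back a fresh list, so mutating its result no longer mutates the edit itself, a compatibility hazard for callers that relied on the old aliasing.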
[jira] [Commented] (HBASE-11805) KeyValue to Cell Convert in WALEdit APIs
[ https://issues.apache.org/jira/browse/HBASE-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126739#comment-14126739 ] Anoop Sam John commented on HBASE-11805: Reading the Phoenix code, I think I got the issue. {code} WALEdit edit = miniBatchOp.getWalEdit(i); // we don't have a WALEdit for immutable index cases, which still see this path // we could check if indexing is enabled for the mutation in prePut and then just skip this // after checking here, but this saves us the checking again. if (edit != null) { KeyValue kv = edit.getKeyValues().get(0); if (kv == BATCH_MARKER) { // remove batch marker from the WALEdit edit.getKeyValues().remove(0); } } {code} BATCH_MARKER is a KV with null bytes. This removal won't happen now, as we return a new ArrayList from WALEdit#getKeyValues(). Thinking about how we can solve this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
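The root cause described here is list aliasing: the Phoenix snippet assumed getKeyValues() handed back the WALEdit's backing list, so remove(0) really deleted the marker from the edit. Once the accessor returns a defensive copy, the same call only mutates the copy. A self-contained illustration with plain collections (class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MarkerRemoval {
    // Simulates the Phoenix pattern: drop a leading batch marker via the
    // list handed out by an accessor. Whether the edit itself changes
    // depends on whether the accessor returned the backing list or a copy.
    static void dropLeadingMarker(List<String> fromAccessor) {
        if (!fromAccessor.isEmpty() && "BATCH_MARKER".equals(fromAccessor.get(0))) {
            fromAccessor.remove(0);
        }
    }

    public static void main(String[] args) {
        // Old behavior: the accessor returns the backing list itself.
        List<String> edit = new ArrayList<>(Arrays.asList("BATCH_MARKER", "kv1"));
        dropLeadingMarker(edit);
        System.out.println(edit); // [kv1]

        // New behavior: the accessor returns a fresh copy, so the
        // marker silently survives in the edit.
        edit = new ArrayList<>(Arrays.asList("BATCH_MARKER", "kv1"));
        dropLeadingMarker(new ArrayList<>(edit));
        System.out.println(edit); // [BATCH_MARKER, kv1]
    }
}
```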
[jira] [Commented] (HBASE-9473) Change UI to list 'system tables' rather than 'catalog tables'.
[ https://issues.apache.org/jira/browse/HBASE-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126752#comment-14126752 ] Hudson commented on HBASE-9473: --- FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #478 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/478/]) HBASE-9473 Change UI to list 'system tables' rather than 'catalog tables' (Stack) (enis: rev 2ed1f8f787b88a851e6050a46123d15911198850) * hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon Change UI to list 'system tables' rather than 'catalog tables'. --- Key: HBASE-9473 URL: https://issues.apache.org/jira/browse/HBASE-9473 Project: HBase Issue Type: Bug Components: UI Reporter: stack Assignee: stack Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: 9473.txt Minor, one-line, bit of polishing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11445) TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky
[ https://issues.apache.org/jira/browse/HBASE-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126753#comment-14126753 ] Hudson commented on HBASE-11445: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #478 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/478/]) HBASE-11445 TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky (Jeffrey Zhong) (enis: rev 985949399fd6a0b118acbe6606845fa9735d8068) * hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky -- Key: HBASE-11445 URL: https://issues.apache.org/jira/browse/HBASE-11445 Project: HBase Issue Type: Bug Components: snapshots Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: failure.txt, hbase-11445.patch Recently there is a failure from Jenkins build:https://builds.apache.org/job/HBase-0.98/364/testReport/junit/org.apache.hadoop.hbase.procedure/TestZKProcedure/testMultiCohortWithMemberTimeoutDuringPrepare/. Below are related log message and Member: 'one' joining twice: {noformat} 2014-06-29 19:26:34,101 DEBUG [member: 'three' subprocedure-pool11-thread-1] procedure.ZKProcedureMemberRpcs(237): Member: 'one' joining acquired barrier for procedure (op) in zk 2014-06-29 19:26:34,101 DEBUG [member: 'one' subprocedure-pool9-thread-1] procedure.Subprocedure(162): Subprocedure 'op' locally acquired 2014-06-29 19:26:34,101 DEBUG [member: 'one' subprocedure-pool9-thread-1] procedure.ZKProcedureMemberRpcs(237): Member: 'one' joining acquired barrier for procedure (op) in zk {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11679) Replace HTable with HTableInterface where backwards-compatible
[ https://issues.apache.org/jira/browse/HBASE-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126762#comment-14126762 ] Hudson commented on HBASE-11679: FAILURE: Integrated in HBase-TRUNK #5481 (See [https://builds.apache.org/job/HBase-TRUNK/5481/]) HBASE-11679 Replace HTable with HTableInterface where backwards-compatible (Carter) (enis: rev 4995ed8a029feb8ccac8054f56f23261a6918add) * hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java * hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java * hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java * hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java * hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java * hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java * hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHBaseAdminNoCluster.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java * hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotMetadata.java * hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java * hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java * hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java * hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java * hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java * hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java * hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpoint.java * 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java * hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java * hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java * hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSizeCalculator.java * hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdaterWithACL.java * hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java * hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java * hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessor.java * hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java * hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java * hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java * hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestOpenTableInCoprocessor.java * hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java * hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java * hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLWithMultipleVersions.java * hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java * hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java * hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java * hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java * hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java * 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java * hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java * hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java * hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java * hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java * hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java * hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java * hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriter.java * hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java * hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/IncrementCoalescer.java *
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126775#comment-14126775 ] Hadoop QA commented on HBASE-11919: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12667364/HBASE-11919.patch against trunk revision . ATTACHMENT ID: 12667364 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.camel.component.mqtt.MQTTProducerTest.testProduce(MQTTProducerTest.java:64) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10788//console This message is automatically generated. Remove the deprecated pre/postGet CP hook - Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11919.patch These hooks, dealing with List<KeyValue>, were deprecated since 0.96. 
We have 0.98, one more major version after that, so suggest this can be removed from 0.99 on. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files: there we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). The KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this. {code} public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results) throws IOException { // By default we are executing the deprecated preGet to support legacy RegionObservers // We may use the results coming in and we may return the results going out. List<KeyValue> kvs = new ArrayList<KeyValue>(results.size()); for (Cell c : results) { kvs.add(KeyValueUtil.ensureKeyValue(c)); } preGet(e, get, kvs); results.clear(); results.addAll(kvs); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
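The inefficiency called out above is that the default bridge pushes every Cell through KeyValueUtil.ensureKeyValue, which must materialize a KeyValue copy whenever the cell is some other Cell implementation (such as the DBE-backed cells mentioned). A rough, self-contained sketch of that cost, using hypothetical stand-in types and a conversion counter:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the HBase types; only the shape matters here.
interface Cell {}

class LazyCell implements Cell {}   // e.g. a DBE-backed cell that avoids copying bytes

class KeyValue implements Cell {
    static int conversions = 0;     // counts forced materializations

    // Mimics KeyValueUtil.ensureKeyValue: free for a real KeyValue,
    // a copy for everything else.
    static KeyValue ensureKeyValue(Cell c) {
        if (c instanceof KeyValue) {
            return (KeyValue) c;
        }
        conversions++;
        return new KeyValue();
    }
}

class LegacyBridge {
    // Analogous to the BaseRegionObserver.preGetOp bridge quoted above:
    // every non-KeyValue Cell gets converted just to satisfy the old
    // List<KeyValue> signature, then copied back into the results.
    static void preGetOp(List<Cell> results) {
        List<KeyValue> kvs = new ArrayList<>(results.size());
        for (Cell c : results) {
            kvs.add(KeyValue.ensureKeyValue(c));
        }
        // the legacy preGet(e, get, kvs) hook would run here
        results.clear();
        results.addAll(kvs);
    }
}
```

Overriding the Cell-based preGetOp directly avoids this round trip entirely, which is why removing the legacy hooks re-enables the optimization.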
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126784#comment-14126784 ] Nicolas Liochon commented on HBASE-11919: - bq. org.apache.camel.component.mqtt.MQTTProducerTest.testProduce(MQTTProducerTest.java:64) funny :-) +1, but a release note is important for such changes imho. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11832) maven release plugin overrides command line arguments
[ https://issues.apache.org/jira/browse/HBASE-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126783#comment-14126783 ] Hudson commented on HBASE-11832: SUCCESS: Integrated in HBase-1.0 #164 (See [https://builds.apache.org/job/HBase-1.0/164/]) HBASE-11832 maven release plugin overrides command line arguments (Enoch Hsu) (enis: rev 32a8bb44a3d7e360c14d032d2dc0b396aa2c3582) * pom.xml maven release plugin overrides command line arguments - Key: HBASE-11832 URL: https://issues.apache.org/jira/browse/HBASE-11832 Project: HBase Issue Type: Bug Affects Versions: 0.98.4 Reporter: Enoch Hsu Assignee: Enoch Hsu Priority: Minor Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11832.patch Inside the pom, under the maven-release-plugin, there is a configuration that defines what the release plugin uses, like so: <configuration> <!--You need this profile. It'll sign your artifacts. I'm not sure if this config. actually works though. I've been specifying -Papache-release on the command-line --> <releaseProfiles>apache-release</releaseProfiles> <!--This stops our running tests for each stage of maven release. But it builds the test jar. From SUREFIRE-172. --> <arguments>-Dmaven.test.skip.exec</arguments> <pomFileName>pom.xml</pomFileName> </configuration> The arguments are hardcoded in and will automatically override any arguments the user passes in from the command line. I propose to modify this to the following: <arguments>-Dmaven.test.skip.exec ${arguments}</arguments> -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-9473) Change UI to list 'system tables' rather than 'catalog tables'.
[ https://issues.apache.org/jira/browse/HBASE-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126782#comment-14126782 ] Hudson commented on HBASE-9473: --- SUCCESS: Integrated in HBase-1.0 #164 (See [https://builds.apache.org/job/HBase-1.0/164/]) HBASE-9473 Change UI to list 'system tables' rather than 'catalog tables' (Stack) (enis: rev 180d9df216489db519e85b53eacdb26885f0df0d) * hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126786#comment-14126786 ] Anoop Sam John commented on HBASE-11919: Sure I will add release notes while committing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11647) MOB integration testing
[ https://issues.apache.org/jira/browse/HBASE-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingcheng Du updated HBASE-11647: - Attachment: HBASE-11647-without-sweep-tool-addendum.diff Updated the patch based on HBASE-11647-without-sweep-tool-V3.diff: 1. Use the threshold as a long type in the HColumnDescriptor. 2. Pass a string instead of bytes between IntegrationTestIngestWithMOB and LoadTestDataGeneratorWithMOB. These modifications fix the following issues in HBASE-11647-without-sweep-tool-V3.diff: 1. The default threshold was always used. 2. A wrong column family name was used in the testing. MOB integration testing --- Key: HBASE-11647 URL: https://issues.apache.org/jira/browse/HBASE-11647 Project: HBase Issue Type: Sub-task Components: Performance, test Reporter: Jingcheng Du Assignee: Jingcheng Du Fix For: hbase-11339 Attachments: HBASE-11647-without-sweep-tool-V2.diff, HBASE-11647-without-sweep-tool-V3.diff, HBASE-11647-without-sweep-tool-addendum.diff, HBASE-11647-without-sweep-tool.diff, HBASE-11647.diff The integration testing includes functional testing and performance testing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11647) MOB integration testing
[ https://issues.apache.org/jira/browse/HBASE-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126852#comment-14126852 ] Anoop Sam John commented on HBASE-11647: +1 for this addendum. Will commit now. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11647) MOB integration testing
[ https://issues.apache.org/jira/browse/HBASE-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126861#comment-14126861 ] Anoop Sam John commented on HBASE-11647: Thanks for the addendum Jingcheng. Pushed to the branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11445) TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky
[ https://issues.apache.org/jira/browse/HBASE-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126872#comment-14126872 ] Hudson commented on HBASE-11445: FAILURE: Integrated in HBase-0.98 #505 (See [https://builds.apache.org/job/HBase-0.98/505/]) HBASE-11445 TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky (Jeffrey Zhong) (enis: rev 985949399fd6a0b118acbe6606845fa9735d8068) * hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-9473) Change UI to list 'system tables' rather than 'catalog tables'.
[ https://issues.apache.org/jira/browse/HBASE-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126871#comment-14126871 ] Hudson commented on HBASE-9473: --- FAILURE: Integrated in HBase-0.98 #505 (See [https://builds.apache.org/job/HBase-0.98/505/]) HBASE-9473 Change UI to list 'system tables' rather than 'catalog tables' (Stack) (enis: rev 2ed1f8f787b88a851e6050a46123d15911198850) * hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126898#comment-14126898 ] Jean-Marc Spaggiari commented on HBASE-11919: - Is there anything to replace that? Or will we not be able to hook into gets anymore? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126906#comment-14126906 ] Anoop Sam John commented on HBASE-11919: There are hooks preGetOp() and postGetOp(), which were added in 0.96 as replacements for these. So no worries. :) The only diff is that the old hooks had List<KeyValue> as a param while the above 2 have List<Cell>. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-11145) Issue with HLog sync
[ https://issues.apache.org/jira/browse/HBASE-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126548#comment-14126548 ] Anoop Sam John edited comment on HBASE-11145 at 9/9/14 2:21 PM: It is ok to move this to 0.99.1 was (Author: anoop.hbase): It is ok to move this to 0.99.1 I just came back to this last week. Trying to work out some other options so that I can continue with HBASE-10713. Will update back in 2 days. JFYI [~saint@gmail.com] Issue with HLog sync Key: HBASE-11145 URL: https://issues.apache.org/jira/browse/HBASE-11145 Project: HBase Issue Type: Bug Reporter: Anoop Sam John Assignee: stack Priority: Critical Fix For: 0.99.1 Attachments: 11145.txt Got the below exception log during a write-heavy test:
{code}
2014-05-07 11:29:56,417 ERROR [main.append-pool1-t1] wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
java.lang.IllegalStateException: Queue full
  at java.util.AbstractQueue.add(Unknown Source)
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.offer(FSHLog.java:1227)
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1878)
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
2014-05-07 11:29:56,418 ERROR [main.append-pool1-t1] wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
java.lang.ArrayIndexOutOfBoundsException: 5
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
2014-05-07 11:29:56,419 ERROR [main.append-pool1-t1] wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
java.lang.ArrayIndexOutOfBoundsException: 6
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
2014-05-07 11:29:56,419 ERROR [main.append-pool1-t1] wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
java.lang.ArrayIndexOutOfBoundsException: 7
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
{code}
In FSHLog$SyncRunner.offer we do BlockingQueue.add(), which throws an exception when the queue is full. The problem is that in the catch block shown below we do not do any cleanup:
{code}
this.syncRunners[index].offer(sequence, this.syncFutures, this.syncFuturesCount);
attainSafePoint(sequence);
this.syncFuturesCount = 0;
} catch (Throwable t) {
  LOG.error("UNEXPECTED!!!", t);
}
{code}
syncFuturesCount is not reset to 0, so the subsequent onEvent() handling throws ArrayIndexOutOfBoundsException. I think we should do the below: 1. Handle the exception and call cleanupOutstandingSyncsOnException() as in the other cases of exception handling. 2. Instead of BlockingQueue.add(), use offer() (?) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
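The failure mode is easy to reproduce in isolation. Below is a hypothetical simulation of the handler (not the real FSHLog code): syncFutures plays the role of the fixed-size batch array, syncFuturesCount is the fill cursor, and resetting the cursor in the catch block (item 1 above) is what keeps later events from walking off the end of the array.

```java
// Hypothetical sketch of the FSHLog counter bug and its fix; names mirror the
// report but none of this is the actual HBase code.
class RingBufferHandlerSketch {
    private final long[] syncFutures = new long[5]; // fixed capacity, like the real batch array
    private int syncFuturesCount = 0;

    // Returns true when the batch was handed off successfully.
    boolean onEvent(long sequence, boolean queueFull) {
        // Without a reset in the catch path, this line eventually throws
        // ArrayIndexOutOfBoundsException, as seen in the logged traces (5, 6, 7, ...).
        syncFutures[syncFuturesCount++] = sequence;
        try {
            if (queueFull) {
                // Simulates BlockingQueue.add() throwing on a full queue.
                throw new IllegalStateException("Queue full");
            }
            syncFuturesCount = 0; // success path resets the cursor
            return true;
        } catch (Throwable t) {
            // The fix: clean up outstanding syncs and reset the cursor,
            // instead of only logging "UNEXPECTED!!!".
            syncFuturesCount = 0;
            return false;
        }
    }
}
```

With the reset in place, repeated "Queue full" failures fail one event at a time instead of poisoning every subsequent onEvent() call.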
[jira] [Commented] (HBASE-11401) Late-binding sequenceid presumes a particular KeyValue mvcc format hampering experiment
[ https://issues.apache.org/jira/browse/HBASE-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127029#comment-14127029 ] Anoop Sam John commented on HBASE-11401: I just came back to this last week. Trying to work out some other options so that I can continue with HBASE-10713. Will update back in 2 days. JFYI [~saint@gmail.com] Late-binding sequenceid presumes a particular KeyValue mvcc format hampering experiment --- Key: HBASE-11401 URL: https://issues.apache.org/jira/browse/HBASE-11401 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Anoop Sam John Priority: Critical Fix For: 0.99.1 Attachments: 11401.changing.order.txt, memstore.txt, nopatch.traces.svg, wpatch.traces.svg After HBASE-8763, we have combined the KV mvcc and the HLog seqNo. This is implemented in a tricky way now. In HRegion, on the write path, we first write to the memstore, then write to the HLog, and finally sync the log. So at the time of the write to the memstore we don't know the WAL seqNo. To overcome this, we hold refs to the KV objects just added to the memstore and pass those also to the write-to-wal call. Once the seqNo is obtained, we reset the mvcc in those KVs to this seqNo. (While writing to the memstore we write the kvs with a very high temporary mvcc value so that concurrent readers won't see them.) This model works well with the DefaultMemstore: during the write there won't be any concurrent call to snapshot(). But now the memstore is a pluggable interface. The above model of late binding assumes that the memstore's internal data structure continues to refer to the same Java objects. This might not always be true. Like in HBASE-10713, in between, the kvs can be converted into a CellBlock. If we stop referring to the same KV Java objects, we will fail to get the seqNo assigned as the kv mvcc. If we were doing the write and sync to the wal first and then the write to the memstore, this would have been solved. But this model was changed (in 94, I believe) for better perf.
Under the HRegion-level lock, we write to the memstore and then to the wal. Finally, outside the lock, we do the log sync. So we cannot change it now. I tried changing the order of ops within the lock (ie. write to the log and then to the memstore) so that we can get the seqNo when writing to the memstore. But because of the new HLog write model, we are not guaranteed to get the write done immediately. One possible way could be to add a new API at the Log level to get the next seqNo alone. Call this first, then write to the memstore using it, and then to the wal (using this seqNo). Just a random thought. Not tried. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
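The "get the next seqNo alone, then write" idea sketched in that comment can be illustrated as below. WalSketch and MemstoreSketch are hypothetical illustration types, not HBase APIs: the sequence number is allocated before either write, so cells carry their final mvcc from the start and no late rebinding is needed even if the memstore later repacks its cells.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical WAL with the proposed "allocate the id up front" API.
class WalSketch {
    private final AtomicLong sequence = new AtomicLong(0);
    final List<String> entries = new ArrayList<>();

    long nextSeqNo() {                  // the proposed new Log-level API
        return sequence.incrementAndGet();
    }
    void append(long seqNo, String edit) {
        entries.add(seqNo + ":" + edit);
    }
}

// Hypothetical memstore: cells are stamped with their final mvcc immediately,
// so converting them to another representation later loses nothing.
class MemstoreSketch {
    final List<String> cells = new ArrayList<>();
    void add(long mvcc, String cell) {
        cells.add(mvcc + ":" + cell);
    }
}

class LateBindingAvoided {
    static long write(WalSketch wal, MemstoreSketch memstore, String edit) {
        long seqNo = wal.nextSeqNo();   // 1. obtain the seqNo first
        memstore.add(seqNo, edit);      // 2. write to memstore with the final mvcc
        wal.append(seqNo, edit);        // 3. append to the wal using the same seqNo
        return seqNo;
    }
}
```

The trade-off noted in the comment still applies: allocating a seqNo before the WAL append means ids can be handed out for writes that later fail, which the real implementation would have to account for.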
[jira] [Updated] (HBASE-11918) TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized
[ https://issues.apache.org/jira/browse/HBASE-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-11918: --- Fix Version/s: 0.98.7 2.0.0 0.99.0 TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized --- Key: HBASE-11918 URL: https://issues.apache.org/jira/browse/HBASE-11918 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Priority: Minor Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: 11918-v1.txt, 11918-v1.txt Here is one example: https://builds.apache.org/job/hbase-0.98/lastCompletedBuild/testReport/org.apache.hadoop.hbase.security.visibility/TestVisibilityLabelsWithDistributedLogReplay/testAddVisibilityLabelsOnRSRestart/ {code} 2014-09-09 02:46:05,168 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: VisibilityController not yet initialized! at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:644) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) ... 
2014-09-09 02:46:10,087 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.security.visibility.LabelAlreadyExistsException: Label 'secret' already exists at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:667) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) at java.lang.Thread.run(Thread.java:662) at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1460) at org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService$2.run(TestVisibilityLabelsWithDefaultVisLabelService.java:126) at 
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService$2.run(TestVisibilityLabelsWithDefaultVisLabelService.java:118) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:268) at
[jira] [Commented] (HBASE-11918) TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized
[ https://issues.apache.org/jira/browse/HBASE-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127037#comment-14127037 ] Ted Yu commented on HBASE-11918: Ran the test on Linux 100 iterations - all passed. TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized --- Key: HBASE-11918 URL: https://issues.apache.org/jira/browse/HBASE-11918 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Priority: Minor Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: 11918-v1.txt, 11918-v1.txt Here is one example: https://builds.apache.org/job/hbase-0.98/lastCompletedBuild/testReport/org.apache.hadoop.hbase.security.visibility/TestVisibilityLabelsWithDistributedLogReplay/testAddVisibilityLabelsOnRSRestart/ {code} 2014-09-09 02:46:05,168 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: VisibilityController not yet initialized! at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:644) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) ... 
2014-09-09 02:46:10,087 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.security.visibility.LabelAlreadyExistsException: Label 'secret' already exists at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:667) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) at java.lang.Thread.run(Thread.java:662) at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1460) at org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService$2.run(TestVisibilityLabelsWithDefaultVisLabelService.java:126) at 
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService$2.run(TestVisibilityLabelsWithDefaultVisLabelService.java:118) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:268) at
[jira] [Updated] (HBASE-11918) TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized
[ https://issues.apache.org/jira/browse/HBASE-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-11918: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for the review, Anoop. TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized --- Key: HBASE-11918 URL: https://issues.apache.org/jira/browse/HBASE-11918 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Priority: Minor Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: 11918-v1.txt, 11918-v1.txt Here is one example: https://builds.apache.org/job/hbase-0.98/lastCompletedBuild/testReport/org.apache.hadoop.hbase.security.visibility/TestVisibilityLabelsWithDistributedLogReplay/testAddVisibilityLabelsOnRSRestart/ {code} 2014-09-09 02:46:05,168 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: VisibilityController not yet initialized! at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:644) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) ... 
2014-09-09 02:46:10,087 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.security.visibility.LabelAlreadyExistsException: Label 'secret' already exists at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:667) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) at java.lang.Thread.run(Thread.java:662) at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1460) at org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService$2.run(TestVisibilityLabelsWithDefaultVisLabelService.java:126) at 
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService$2.run(TestVisibilityLabelsWithDefaultVisLabelService.java:118) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:268) at
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127072#comment-14127072 ] stack commented on HBASE-11919: --- Do we know who we will break with this change? Phoenix? Change looks good. Remove the deprecated pre/postGet CP hook - Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11919.patch These hooks, dealing with List<KeyValue>, were deprecated in 0.96. We have 0.98, one more major version after that. I suggest these can be removed in 0.99. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files. There we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization if we come across this.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11839) TestRegionRebalance is flakey
[ https://issues.apache.org/jira/browse/HBASE-11839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Soldatov updated HBASE-11839: Attachment: HBASE-11839-v1.patch Quite a simple patch. The problem is that the average load is calculated using both online and dead servers. An example: 25 regions on 3 region servers with 9, 9, and 7 regions respectively. The 3rd goes down and its regions are randomly moved to the other servers, so, for example, they end up with 15 and 10, while the 3rd server, which is down, is still counted as having 7. The current code calculates the average as (15 + 10 + 7)/3 ≈ 10.67, and this value is used during the computation instead of 12.5 (25 regions and 2 online servers). The patch uses only online servers during the average computation. Please correct me if I'm wrong. TestRegionRebalance is flakey - Key: HBASE-11839 URL: https://issues.apache.org/jira/browse/HBASE-11839 Project: HBase Issue Type: Bug Reporter: Alex Newman Assignee: Alex Newman Fix For: 2.0.0, 0.99.1 Attachments: HBASE-11839-v1.patch Besides failing many times on the prebuild TestRegionRebalance fails on my local machine eventually simply with export RUNNIN=true; mvn clean install -DskipTests ; while ($RUNNIN) ; do mvn test -Dtest=TestRegionRebalancing || RUNNIN=false;done -- This message was sent by Atlassian JIRA (v6.3.4#6332)
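The arithmetic in that comment can be checked with a small sketch (a hypothetical helper, not the actual balancer code): with loads of 15 and 10 on the two online servers and a stale 7 still attributed to the dead one, dividing by all three servers gives roughly 10.67, while dividing across online servers only gives the intended 12.5.

```java
// Hypothetical sketch of the fix: average region load over online servers only.
class AverageLoadSketch {
    // regions[i] is the region count attributed to server i;
    // online[i] says whether server i is currently live.
    static double averageOverOnline(int[] regions, boolean[] online) {
        int total = 0, servers = 0;
        for (int i = 0; i < regions.length; i++) {
            if (online[i]) {        // the fix: skip dead servers entirely
                total += regions[i];
                servers++;
            }
        }
        return (double) total / servers;
    }
}
```

Using the buggy denominator (all servers, including stale dead-server counts) produces a lower target than any rebalanced server can reach, which is consistent with the test flapping.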
[jira] [Updated] (HBASE-11839) TestRegionRebalance is flakey
[ https://issues.apache.org/jira/browse/HBASE-11839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Soldatov updated HBASE-11839: Status: Patch Available (was: Open) TestRegionRebalance is flakey - Key: HBASE-11839 URL: https://issues.apache.org/jira/browse/HBASE-11839 Project: HBase Issue Type: Bug Reporter: Alex Newman Assignee: Sergey Soldatov Fix For: 2.0.0, 0.99.1 Attachments: HBASE-11839-v1.patch Besides failing many times on the prebuild TestRegionRebalance fails on my local machine eventually simply with export RUNNIN=true; mvn clean install -DskipTests ; while ($RUNNIN) ; do mvn test -Dtest=TestRegionRebalancing || RUNNIN=false;done -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-11839) TestRegionRebalance is flakey
[ https://issues.apache.org/jira/browse/HBASE-11839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Soldatov reassigned HBASE-11839: --- Assignee: Sergey Soldatov (was: Alex Newman) TestRegionRebalance is flakey - Key: HBASE-11839 URL: https://issues.apache.org/jira/browse/HBASE-11839 Project: HBase Issue Type: Bug Reporter: Alex Newman Assignee: Sergey Soldatov Fix For: 2.0.0, 0.99.1 Attachments: HBASE-11839-v1.patch Besides failing many times on the prebuild TestRegionRebalance fails on my local machine eventually simply with export RUNNIN=true; mvn clean install -DskipTests ; while ($RUNNIN) ; do mvn test -Dtest=TestRegionRebalancing || RUNNIN=false;done -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11839) TestRegionRebalance is flakey
[ https://issues.apache.org/jira/browse/HBASE-11839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-11839: -- Resolution: Fixed Fix Version/s: 0.98.7 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Makes sense. Committed to 0.98+. Thanks [~sergey.soldatov] TestRegionRebalance is flakey - Key: HBASE-11839 URL: https://issues.apache.org/jira/browse/HBASE-11839 Project: HBase Issue Type: Bug Reporter: Alex Newman Assignee: Sergey Soldatov Fix For: 2.0.0, 0.98.7, 0.99.1 Attachments: HBASE-11839-v1.patch Besides failing many times on the prebuild TestRegionRebalance fails on my local machine eventually simply with export RUNNIN=true; mvn clean install -DskipTests ; while ($RUNNIN) ; do mvn test -Dtest=TestRegionRebalancing || RUNNIN=false;done -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11918) TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized
[ https://issues.apache.org/jira/browse/HBASE-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127118#comment-14127118 ] Hudson commented on HBASE-11918: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #479 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/479/]) HBASE-11918 TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized (tedyu: rev 33c9e64ca3bb9da5a8137f99ed0bb1393f565921) * hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDefaultVisLabelService.java TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized --- Key: HBASE-11918 URL: https://issues.apache.org/jira/browse/HBASE-11918 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Priority: Minor Fix For: 0.99.0, 2.0.0, 0.98.7 Attachments: 11918-v1.txt, 11918-v1.txt Here is one example: https://builds.apache.org/job/hbase-0.98/lastCompletedBuild/testReport/org.apache.hadoop.hbase.security.visibility/TestVisibilityLabelsWithDistributedLogReplay/testAddVisibilityLabelsOnRSRestart/ {code} 2014-09-09 02:46:05,168 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: org.apache.hadoop.hbase.security.visibility.VisibilityControllerNotReadyException: VisibilityController not yet initialized! 
at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:644) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) ... 2014-09-09 02:46:10,087 DEBUG [Thread-245] visibility.TestVisibilityLabelsWithDefaultVisLabelService$2(127): Got exception writing labels org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.security.visibility.LabelAlreadyExistsException: Label 'secret' already exists at org.apache.hadoop.hbase.security.visibility.VisibilityController.addLabels(VisibilityController.java:667) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.addLabels(VisibilityLabelsProtos.java:5014) at org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5178) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5591) at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3396) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3378) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29591) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) at java.lang.Thread.run(Thread.java:662) at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1460) at org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService$2.run(TestVisibilityLabelsWithDefaultVisLabelService.java:126) at
[jira] [Updated] (HBASE-11862) Get rid of Writables in HTableDescriptor, HColumnDescriptor
[ https://issues.apache.org/jira/browse/HBASE-11862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Stepachev updated HBASE-11862: - Attachment: HBASE-11862.patch Updated patch. ImmutableBytes was removed; the Bytes class can now be used to wrap byte[] arrays. Get rid of Writables in HTableDescriptor, HColumnDescriptor --- Key: HBASE-11862 URL: https://issues.apache.org/jira/browse/HBASE-11862 Project: HBase Issue Type: Improvement Reporter: Andrey Stepachev Assignee: Andrey Stepachev Priority: Minor Labels: beginner Fix For: 2.0.0 Attachments: HBASE-11862.patch, HBASE-11862.patch Currently we have protobuf for encoding these structures. The existence of Writable is misleading and needs to be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11862) Get rid of Writables in HTableDescriptor, HColumnDescriptor
[ https://issues.apache.org/jira/browse/HBASE-11862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Stepachev updated HBASE-11862: - Status: Open (was: Patch Available) Get rid of Writables in HTableDescriptor, HColumnDescriptor --- Key: HBASE-11862 URL: https://issues.apache.org/jira/browse/HBASE-11862 Project: HBase Issue Type: Improvement Reporter: Andrey Stepachev Assignee: Andrey Stepachev Priority: Minor Labels: beginner Fix For: 2.0.0 Attachments: HBASE-11862.patch, HBASE-11862.patch Currently we have protobuf for encoding these structures. The existence of Writable is misleading and needs to be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11862) Get rid of Writables in HTableDescriptor, HColumnDescriptor
[ https://issues.apache.org/jira/browse/HBASE-11862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Stepachev updated HBASE-11862: - Status: Patch Available (was: Open) Get rid of Writables in HTableDescriptor, HColumnDescriptor --- Key: HBASE-11862 URL: https://issues.apache.org/jira/browse/HBASE-11862 Project: HBase Issue Type: Improvement Reporter: Andrey Stepachev Assignee: Andrey Stepachev Priority: Minor Labels: beginner Fix For: 2.0.0 Attachments: HBASE-11862.patch, HBASE-11862.patch Currently we have protobuf for encoding these structures. The existence of Writable is misleading and needs to be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-11760) Tighten up region state transition
[ https://issues.apache.org/jira/browse/HBASE-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-11760: Attachment: rsm-2.pdf Attached the graph/doc v2. Tighten up region state transition -- Key: HBASE-11760 URL: https://issues.apache.org/jira/browse/HBASE-11760 Project: HBase Issue Type: Improvement Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 2.0.0 Attachments: hbase-11760.patch, hbase-11760_2.1.patch, hbase-11760_2.patch, rsm-2.pdf, rsm.pdf, rsm.png When a regionserver reports to master a region transition, we should check the current region state to be exactly what we expect. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127153#comment-14127153 ] Anoop Sam John commented on HBASE-11919: Checked Phoenix 4.0 and master branch. No CP implements the pre/postGet() hooks, so no break there. Remove the deprecated pre/postGet CP hook - Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11919.patch These hooks, dealing with List<KeyValue>, have been deprecated since 0.96. We have had 0.98, one more major version after that, so this can be removed in 0.99. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files: there we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization when we come through this path.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers.
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
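The cost the comment describes can be modeled in a small standalone sketch (illustrative only; these are not the real HBase Cell/KeyValue classes): a Cell backed by a shared block returns its value without copying, while an `ensureKeyValue`-style conversion forces one defensive byte copy per cell.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone model of the copy that the deprecated hook forces on every cell.
public class EnsureKeyValueCost {
    interface Cell { byte[] valueArray(); }

    // A Cell backed by a shared block; returning the value is copy-free.
    static class BlockBackedCell implements Cell {
        private final byte[] block;
        BlockBackedCell(byte[] block) { this.block = block; }
        public byte[] valueArray() { return block; } // no copy
    }

    static int copies = 0;

    // Models KeyValueUtil.ensureKeyValue: materializes a private copy.
    static Cell ensureKeyValue(Cell c) {
        byte[] copy = c.valueArray().clone();
        copies++;
        return new BlockBackedCell(copy);
    }

    // Models the legacy bridge in BaseRegionObserver.preGetOp.
    static List<Cell> legacyBridge(List<Cell> results) {
        List<Cell> kvs = new ArrayList<>(results.size());
        for (Cell c : results) {
            kvs.add(ensureKeyValue(c)); // one copy per cell
        }
        return kvs;
    }

    // Returns how many copies the bridge makes for n block-backed cells.
    public static int legacyCopyCount(int n) {
        copies = 0;
        byte[] block = {1, 2, 3};
        List<Cell> results = new ArrayList<>();
        for (int i = 0; i < n; i++) results.add(new BlockBackedCell(block));
        legacyBridge(results);
        return copies;
    }

    public static void main(String[] args) {
        System.out.println(legacyCopyCount(4)); // prints 4
    }
}
```

Removing the deprecated hook removes this bridge, so coprocessors operate on the Cell interface directly and the copy never happens.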
[jira] [Commented] (HBASE-11919) Remove the deprecated pre/postGet CP hook
[ https://issues.apache.org/jira/browse/HBASE-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127175#comment-14127175 ] stack commented on HBASE-11919: --- +1 Remove the deprecated pre/postGet CP hook - Key: HBASE-11919 URL: https://issues.apache.org/jira/browse/HBASE-11919 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 2.0.0 Attachments: HBASE-11919.patch These hooks, dealing with List<KeyValue>, have been deprecated since 0.96. We have had 0.98, one more major version after that, so this can be removed in 0.99. The impl in BaseRegionObserver is as below, which can be very inefficient, especially when we read from DBE files: there we return not a KeyValue but a new Cell impl (thereby avoiding the need to copy value bytes). KeyValueUtil.ensureKeyValue can kill this nice optimization when we come through this path.
{code}
public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
    final List<Cell> results) throws IOException {
  // By default we are executing the deprecated preGet to support legacy RegionObservers.
  // We may use the results coming in and we may return the results going out.
  List<KeyValue> kvs = new ArrayList<KeyValue>(results.size());
  for (Cell c : results) {
    kvs.add(KeyValueUtil.ensureKeyValue(c));
  }
  preGet(e, get, kvs);
  results.clear();
  results.addAll(kvs);
}
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11760) Tighten up region state transition
[ https://issues.apache.org/jira/browse/HBASE-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127180#comment-14127180 ] stack commented on HBASE-11760: --- Would suggest you add what the colors mean to the doc, then let's add it to the refguide. That's really great, Jimmy. Tighten up region state transition -- Key: HBASE-11760 URL: https://issues.apache.org/jira/browse/HBASE-11760 Project: HBase Issue Type: Improvement Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 2.0.0 Attachments: hbase-11760.patch, hbase-11760_2.1.patch, hbase-11760_2.patch, rsm-2.pdf, rsm.pdf, rsm.png When a regionserver reports a region transition to the master, we should check that the current region state is exactly what we expect. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11760) Tighten up region state transition
[ https://issues.apache.org/jira/browse/HBASE-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127190#comment-14127190 ] Jimmy Xiang commented on HBASE-11760: - Sure. Can I work on the refguide in a separate issue? Tighten up region state transition -- Key: HBASE-11760 URL: https://issues.apache.org/jira/browse/HBASE-11760 Project: HBase Issue Type: Improvement Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 2.0.0 Attachments: hbase-11760.patch, hbase-11760_2.1.patch, hbase-11760_2.patch, rsm-2.pdf, rsm.pdf, rsm.png When a regionserver reports a region transition to the master, we should check that the current region state is exactly what we expect. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-11839) TestRegionRebalance is flakey
[ https://issues.apache.org/jira/browse/HBASE-11839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127200#comment-14127200 ] Hadoop QA commented on HBASE-11839: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12667423/HBASE-11839-v1.patch against trunk revision . ATTACHMENT ID: 12667423 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10789//console This message is automatically generated. 
TestRegionRebalance is flakey - Key: HBASE-11839 URL: https://issues.apache.org/jira/browse/HBASE-11839 Project: HBase Issue Type: Bug Reporter: Alex Newman Assignee: Sergey Soldatov Fix For: 2.0.0, 0.98.7, 0.99.1 Attachments: HBASE-11839-v1.patch Besides failing many times on the precommit build, TestRegionRebalance eventually fails on my local machine simply with: export RUNNIN=true; mvn clean install -DskipTests; while $RUNNIN; do mvn test -Dtest=TestRegionRebalancing || RUNNIN=false; done -- This message was sent by Atlassian JIRA (v6.3.4#6332)
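The reporter's run-until-failure loop can be written as a small reusable shell function (a generic sketch; in the report the command under test is `mvn test -Dtest=TestRegionRebalancing`):

```shell
# until_fail: run a command repeatedly until it exits non-zero,
# then print how many runs succeeded before the failure.
until_fail() {
  runs=0
  while "$@"; do
    runs=$((runs + 1))
  done
  echo "$runs"
}
```

Used as `until_fail mvn test -Dtest=TestRegionRebalancing`, it reproduces the flake and reports how many passes it took.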
[jira] [Created] (HBASE-11920) Interface changes needed for creating CP hooks for ReplicationEndPoint
ramkrishna.s.vasudevan created HBASE-11920: -- Summary: Interface changes needed for creating CP hooks for ReplicationEndPoint Key: HBASE-11920 URL: https://issues.apache.org/jira/browse/HBASE-11920 Project: HBase Issue Type: Sub-task Reporter: ramkrishna.s.vasudevan Fix For: 2.0.0, 0.98.7, 0.99.1 If we want to create internal replication endpoints other than the one created through configuration, we may need new hooks. This is something like the internal scanner we create during compaction, so that the actual compaction scanner can be used as a delegate. [~enis] If I can give a patch by tomorrow, will it be possible to include it in the RC? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
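The delegation pattern the issue describes can be sketched as a decorator (hypothetical standalone code; `ReplicationEndpoint` and `replicate` here are illustrative stand-ins, not HBase's real API): an internal endpoint wraps the configured one, runs its own logic, and hands off, the same way a compaction's internal scanner delegates to the real scanner.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of an internal endpoint wrapping the configured one.
public class DelegatingEndpointSketch {
    public interface ReplicationEndpoint {
        boolean replicate(List<String> entries);
    }

    // Observes entries (where CP-hook logic would run), then delegates.
    public static class ObservingEndpoint implements ReplicationEndpoint {
        private final ReplicationEndpoint delegate;
        public int observed = 0;

        public ObservingEndpoint(ReplicationEndpoint delegate) {
            this.delegate = delegate;
        }

        public boolean replicate(List<String> entries) {
            observed += entries.size();          // internal hook logic
            return delegate.replicate(entries);  // hand off to the real endpoint
        }
    }

    public static void main(String[] args) {
        ReplicationEndpoint configured = entries -> true; // stands in for the configured endpoint
        ObservingEndpoint wrapper = new ObservingEndpoint(configured);
        wrapper.replicate(Arrays.asList("e1", "e2"));
        System.out.println(wrapper.observed); // prints 2
    }
}
```

The interface change the issue asks for would let such a wrapper be created around the configured endpoint rather than replacing it.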