[jira] [Updated] (HBASE-12791) HBase does not attempt to clean up an aborted split when the regionserver shutting down

2015-01-11 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12791:
---
Attachment: HBASE-12791_branch1_v3.patch
HBASE-12791_98_v3.patch

Here are the updated patches, which I am going to commit now.

> HBase does not attempt to clean up an aborted split when the regionserver 
> shutting down
> ---
>
> Key: HBASE-12791
> URL: https://issues.apache.org/jira/browse/HBASE-12791
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 2.0.0, 0.98.10, 1.0.1
>
> Attachments: HBASE-12791.patch, HBASE-12791_98.patch, 
> HBASE-12791_98_v2.patch, HBASE-12791_98_v3.patch, HBASE-12791_branch1.patch, 
> HBASE-12791_branch1_v2.patch, HBASE-12791_branch1_v3.patch, 
> HBASE-12791_v2.patch, HBASE-12791_v3.patch, HBASE-12791_v4.patch, 
> HBASE-12791_v4.patch, HBASE-12791_v5.patch, HBASE-12791_v6.patch, 
> HBASE-12791_v6.patch
>
>
> HBase does not clean up the daughter region directories in HDFS if the 
> region server shuts down after creating them during a split.
> Here are the logs.
> -> RS shutdown after creating the daughter regions.
> {code}
> 2014-12-31 09:05:41,406 DEBUG [regionserver60020-splits-1419996941385] 
> zookeeper.ZKAssign: regionserver:60020-0x14a9701e53100d1, 
> quorum=localhost:2181, baseZNode=/hbase Transitioned node 
> 80c665138d4fa32da4d792d8ed13206f from RS_ZK_REQUEST_REGION_SPLIT to 
> RS_ZK_REQUEST_REGION_SPLIT
> 2014-12-31 09:05:41,514 DEBUG [regionserver60020-splits-1419996941385] 
> regionserver.HRegion: Closing 
> t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.: disabling compactions & 
> flushes
> 2014-12-31 09:05:41,514 DEBUG [regionserver60020-splits-1419996941385] 
> regionserver.HRegion: Updates disabled for region 
> t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.
> 2014-12-31 09:05:41,516 INFO  
> [StoreCloserThread-t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.-1] 
> regionserver.HStore: Closed f
> 2014-12-31 09:05:41,518 INFO  [regionserver60020-splits-1419996941385] 
> regionserver.HRegion: Closed 
> t,,1419996880699.80c665138d4fa32da4d792d8ed13206f.
> 2014-12-31 09:05:49,922 DEBUG [regionserver60020-splits-1419996941385] 
> regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl 
> for table t dd9731ee43b104da565257ca1539aa8c
> 2014-12-31 09:05:49,922 DEBUG [regionserver60020-splits-1419996941385] 
> regionserver.HRegion: Instantiated 
> t,,1419996941401.dd9731ee43b104da565257ca1539aa8c.
> 2014-12-31 09:05:49,929 DEBUG [regionserver60020-splits-1419996941385] 
> regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl 
> for table t 2e40a44511c0e187d357d651f13a1dab
> 2014-12-31 09:05:49,929 DEBUG [regionserver60020-splits-1419996941385] 
> regionserver.HRegion: Instantiated 
> t,row2,1419996941401.2e40a44511c0e187d357d651f13a1dab.
> Wed Dec 31 09:06:30 IST 2014 Terminating regionserver
> 2014-12-31 09:06:30,465 INFO  [Thread-8] regionserver.ShutdownHook: Shutdown 
> hook starting; hbase.shutdown.hook=true; 
> fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@42d2282e
> {code}
> -> Rollback is skipped because the RS is stopped or stopping, so we end up 
> with stale daughter region directories in HDFS.
> {code}
> 2014-12-31 09:07:49,547 INFO  [regionserver60020-splits-1419996941385] 
> regionserver.SplitRequest: Skip rollback/cleanup of failed split of 
> t,,1419996880699.80c665138d4fa32da4d792d8ed13206f. because server is stopped
> java.io.InterruptedIOException: Interrupted after 0 tries  on 350
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:156)
> {code}
> Because of this, hbck always reports inconsistencies.
> {code}
> ERROR: Region { meta => null, hdfs => 
> hdfs://localhost:9000/hbase/data/default/t/2e40a44511c0e187d357d651f13a1dab, 
> deployed =>  } on HDFS, but not listed in hbase:meta or deployed on any 
> region server
> ERROR: Region { meta => null, hdfs => 
> hdfs://localhost:9000/hbase/data/default/t/dd9731ee43b104da565257ca1539aa8c, 
> deployed =>  } on HDFS, but not listed in hbase:meta or deployed on any 
> region server
> {code}
> If we try to repair, we end up with overlapping regions in hbase:meta, and 
> both daughter regions and the parent are online.
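To illustrate the intent of the fix, here is a hedged, self-contained sketch (plain `java.nio`, not the actual HBase/HDFS API; the class and method names are hypothetical) of a best-effort cleanup that removes daughter region directories created by an aborted split. Such a filesystem-only cleanup could run even while the server is stopping, since it needs no RPC retries of the kind that failed in the log above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

/**
 * Hypothetical sketch (not the actual HBase API): best-effort removal of
 * daughter region directories left behind by an aborted split.
 */
public class AbortedSplitCleaner {

  /** Deletes daughter directories that were created but never committed to meta. */
  static int cleanupAbortedSplit(Path tableDir, String... daughterEncodedNames)
      throws IOException {
    int removed = 0;
    for (String name : daughterEncodedNames) {
      Path daughterDir = tableDir.resolve(name);
      if (Files.exists(daughterDir)) {
        // Walk depth-first (children before parents) so each delete succeeds.
        try (var paths = Files.walk(daughterDir)) {
          for (Path p : paths.sorted(Comparator.reverseOrder()).toList()) {
            Files.delete(p);
          }
        }
        removed++;
      }
    }
    return removed;
  }

  public static void main(String[] args) throws IOException {
    // Simulate the two orphaned daughter dirs from the hbck report above.
    Path tableDir = Files.createTempDirectory("table_t");
    Files.createDirectories(tableDir.resolve("dd9731ee43b104da565257ca1539aa8c/f"));
    Files.createDirectories(tableDir.resolve("2e40a44511c0e187d357d651f13a1dab"));
    System.out.println(cleanupAbortedSplit(tableDir,
        "dd9731ee43b104da565257ca1539aa8c", "2e40a44511c0e187d357d651f13a1dab"));
  }
}
```

After such a cleanup, hbck would no longer see the uncommitted daughter directories on HDFS.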



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12686) Failures in split left the daughter regions in transition forever even after rollback

2014-12-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu reassigned HBASE-12686:
--

Assignee: rajeshbabu

> Failures in split left the daughter regions in transition forever even after 
> rollback
> -
>
> Key: HBASE-12686
> URL: https://issues.apache.org/jira/browse/HBASE-12686
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.98.9
>Reporter: Rajeshbabu Chintaguntla
>Assignee: rajeshbabu
> Fix For: 0.98.10
>
>
> If a split fails, both daughter regions are left in transition even after 
> rollback, which blocks balancing forever unless the master is restarted.





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to the 0.98, branch-1 and master branches.
Thanks all for the reviews.

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 1.0.0, 2.0.0, 0.98.9
>
> Attachments: HBASE-12583.patch, HBASE-12583_98.patch, 
> HBASE-12583_98_v2.patch, HBASE-12583_addendum.patch, 
> HBASE-12583_branch1.patch, HBASE-12583_branch1_v2.patch, 
> HBASE-12583_v2.patch, HBASE-12583_v3.patch
>
>
> Currently, in HRegionFileSystem#splitStoreFile we do not create reference 
> files if the split row does not lie within the storefile's key range, which 
> means one of the child regions would not get any data from that file.
> {code}
> // Check whether the split row lies in the range of the store file.
> // If it is outside the range, return directly.
> if (top) {
>   // Check if the split row is larger than the last key.
>   KeyValue splitKey = KeyValueUtil.createFirstOnRow(splitRow);
>   byte[] lastKey = f.createReader().getLastKey();
>   // If lastKey is null, the storefile is empty.
>   if (lastKey == null) return null;
>   if (f.getReader().getComparator().compareFlatKey(splitKey.getBuffer(),
>       splitKey.getKeyOffset(), splitKey.getKeyLength(),
>       lastKey, 0, lastKey.length) > 0) {
>     return null;
>   }
> } else {
>   // Check if the split row is smaller than the first key.
>   KeyValue splitKey = KeyValueUtil.createLastOnRow(splitRow);
>   byte[] firstKey = f.createReader().getFirstKey();
>   // If firstKey is null, the storefile is empty.
>   if (firstKey == null) return null;
>   if (f.getReader().getComparator().compareFlatKey(splitKey.getBuffer(),
>       splitKey.getKeyOffset(), splitKey.getKeyLength(),
>       firstKey, 0, firstKey.length) < 0) {
>     return null;
>   }
> }
> {code}
> In some cases the split row should be compared with only part of the row key 
> (a composite row key), mainly for secondary index tables. There we need to 
> create reference files even when the split row does not lie in the storefile 
> range, so that the data can be rewritten to the child regions by a custom 
> half-store-file reader that compares the relevant part of the row key with 
> the split row.
> The check that compares the split row with the storefile range and returns 
> directly could be skipped when a special boolean attribute in the table 
> descriptor is set to true. Alternatively, we could add coprocessor hooks in 
> which the references can be created and the default check bypassed.
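The range check in the quoted code reduces, in essence, to a key comparison plus the proposed policy-controlled bypass. Below is a hedged, self-contained sketch of that logic using plain byte[] comparisons instead of HBase's KeyValue/comparator machinery; the names `shouldCreateReference` and `skipStoreFileRangeCheck` are illustrative, not the actual API.

```java
import java.util.Arrays;

/** Sketch of the splitStoreFile range check with a policy-controlled bypass. */
public class SplitReferenceCheck {

  /** Returns true if a reference file should be created for this half. */
  static boolean shouldCreateReference(boolean top, byte[] splitRow,
      byte[] firstKey, byte[] lastKey, boolean skipStoreFileRangeCheck) {
    if (skipStoreFileRangeCheck) {
      // The policy (e.g. for secondary-index tables with composite row keys)
      // asks us to always create the reference file.
      return true;
    }
    if (firstKey == null || lastKey == null) {
      return false; // empty store file
    }
    if (top) {
      // The top half gets no data if the split row sorts after the last key.
      return Arrays.compareUnsigned(splitRow, lastKey) <= 0;
    }
    // The bottom half gets no data if the split row sorts before the first key.
    return Arrays.compareUnsigned(splitRow, firstKey) >= 0;
  }

  public static void main(String[] args) {
    byte[] first = "b".getBytes(), last = "m".getBytes();
    // Split row "z" is past the last key: no top reference unless bypassed.
    System.out.println(shouldCreateReference(true, "z".getBytes(), first, last, false));
    System.out.println(shouldCreateReference(true, "z".getBytes(), first, last, true));
  }
}
```

Passing the split policy into the check, as the later patches do, lets a table-specific policy decide the bypass instead of a raw boolean on HRegionFileSystem.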





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Attachment: HBASE-12583_addendum.patch

Here is the addendum that corrects the javadoc.

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 1.0.0, 2.0.0, 0.98.9
>
> Attachments: HBASE-12583.patch, HBASE-12583_98.patch, 
> HBASE-12583_98_v2.patch, HBASE-12583_addendum.patch, 
> HBASE-12583_branch1.patch, HBASE-12583_branch1_v2.patch, 
> HBASE-12583_v2.patch, HBASE-12583_v3.patch
>
>





[jira] [Commented] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-09 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239655#comment-14239655
 ] 

rajeshbabu commented on HBASE-12583:


[~anoop.hbase]
I will update the javadoc and commit. Thanks

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 1.0.0, 2.0.0, 0.98.9
>
> Attachments: HBASE-12583.patch, HBASE-12583_98.patch, 
> HBASE-12583_98_v2.patch, HBASE-12583_branch1.patch, 
> HBASE-12583_branch1_v2.patch, HBASE-12583_v2.patch, HBASE-12583_v3.patch
>
>





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Attachment: HBASE-12583_v3.patch

Here are the patches addressing Stack's comment: passing the split policy 
instead of a boolean to HRegionFileSystem#splitStoreFile.


> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 1.0.0, 2.0.0, 0.98.9
>
> Attachments: HBASE-12583.patch, HBASE-12583_98.patch, 
> HBASE-12583_98_v2.patch, HBASE-12583_branch1.patch, 
> HBASE-12583_branch1_v2.patch, HBASE-12583_v2.patch, HBASE-12583_v3.patch
>
>





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Attachment: HBASE-12583_98_v2.patch
HBASE-12583_branch1_v2.patch

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 1.0.0, 2.0.0, 0.98.9
>
> Attachments: HBASE-12583.patch, HBASE-12583_98.patch, 
> HBASE-12583_98_v2.patch, HBASE-12583_branch1.patch, 
> HBASE-12583_branch1_v2.patch, HBASE-12583_v2.patch
>
>





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-02 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Attachment: HBASE-12583_98.patch

Here is the patch for the 0.98 branch. I will commit tomorrow if there are no objections.


> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 1.0.0, 2.0.0, 0.98.9
>
> Attachments: HBASE-12583.patch, HBASE-12583_98.patch, 
> HBASE-12583_branch1.patch, HBASE-12583_v2.patch
>
>





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-02 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Attachment: HBASE-12583_branch1.patch

Here is the patch for branch-1.


> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 1.0.0, 2.0.0, 0.98.9
>
> Attachments: HBASE-12583.patch, HBASE-12583_branch1.patch, 
> HBASE-12583_v2.patch
>
>





[jira] [Commented] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-01 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14229713#comment-14229713
 ] 

rajeshbabu commented on HBASE-12583:


[~anoop.hbase]
bq.  The policy in this child region will also be same as that in parent? In 
that case new getter in HRegion also not needed
Yes, the policy for the child regions is the same as the parent's. But we do 
not have a reference to the child regions in the store file splitter. Also, the 
split policy is a private member of HRegion, so we need a getter to access it 
from any region.
bq. Can keep as protected?
It can be protected. I will update while committing.
Thanks.

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12583.patch, HBASE-12583_v2.patch
>
>





[jira] [Commented] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-01 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14229546#comment-14229546
 ] 

rajeshbabu commented on HBASE-12583:


bq. I would say that would be better because we are using the split policy 
associated with the region. If there is a setter in HRegionFileSystem it means 
that the FileSystem itself has a split policy.
That makes sense. Thanks for the reviews, [~ram_krish] [~stack].
Here is the updated patch.


> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12583.patch, HBASE-12583_v2.patch
>
>





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-12-01 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Attachment: HBASE-12583_v2.patch

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12583.patch, HBASE-12583_v2.patch
>
>





[jira] [Commented] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-27 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14227595#comment-14227595
 ] 

rajeshbabu commented on HBASE-12583:


[~ram_krish]
To pass the boolean to splitStoreFile we need to expose the split policy from 
HRegion and obtain it in SplitTransaction.


> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12583.patch
>
>





[jira] [Comment Edited] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-27 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14227571#comment-14227571
 ] 

rajeshbabu edited comment on HBASE-12583 at 11/27/14 11:53 AM:
---

Here is the patch to skip the storefile range check in 
HRegionFileSystem#splitStoreFile.

[~stack]
Actually I thought of adding a special table attribute to skip the check.
But Ram suggested doing this via RegionSplitPolicy so that it is meaningful 
and clean, so I have done it that way.

Please review.



was (Author: rajesh23):
Here is the patch skip the storefile range check in 
HRegionFileSystem#splitStoreFile

[~stack]
Actually I thought of adding special attribute for table to skip the check.
But Ram suggested to this using RegionSplitPolicy so that it will be meaningful 
and clean. So I have done the same way.

Please review.


> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12583.patch
>
>





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-27 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Status: Patch Available  (was: Open)

Here is the patch to skip the storefile range check in 
HRegionFileSystem#splitStoreFile.

[~stack]
Actually I thought of adding a special table attribute to skip the check.
But Ram suggested doing this via RegionSplitPolicy so that it is meaningful 
and clean, so I have done it that way.

Please review.


> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12583.patch
>
>





[jira] [Updated] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-27 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-12583:
---
Attachment: HBASE-12583.patch

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12583.patch
>
>





[jira] [Commented] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-26 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14226004#comment-14226004
 ] 

rajeshbabu commented on HBASE-12583:


bq. What would you need to get it going again?
What I feel is we can have a table attribute and skip the range check when it 
is true, otherwise continue as usual. The attribute can be reused for any use 
case of this kind.

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
>





[jira] [Commented] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-26 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225965#comment-14225965
 ] 

rajeshbabu commented on HBASE-12583:


bq. So, if a split key is bigger or smaller than storefile, we don't want to 
split the storefile; the file goes to the left or right of the split point; a 
split point that is not in a storefile is fine.
Absolutely correct [~stack].
bq. They are companion regions? 
Yes, even after the split the data region's key ranges and the index region's 
key ranges will be the same.
bq. Can't you split them by passing in a pertinent split key, one related to 
that of the primary region but adapted for the companion region? 
In the index storefile the data is sorted in column-value order, with the data 
rowkey suffixed at the end of the index rowkey, so we cannot derive the exact 
split key for the index from the actual split point.
bq. Are you passing in 'wrong' key, the split key for primary region?
Yes. While rewriting the corresponding half of the storefile to the daughter 
regions we parse the data rowkey (from the index rowkey) and compare it with 
the actual split row to decide whether it belongs to the left or the right of 
the split point.
bq. This stuff used to work for you but now the checks are more stringent, it 
breaks you?
Yes. It used to work in 0.94.
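The answers above can be sketched in code. Under an assumed index-rowkey encoding (column value, a zero-byte separator, then the data rowkey; the separator and all names are hypothetical, not Phoenix's actual format), the custom half-store reader decides the daughter by the embedded data rowkey rather than by the index key itself:

```java
import java.util.Arrays;

public class IndexKeySplit {
    static final byte SEP = 0; // assumed separator between value and data rowkey

    // Extract the data-table rowkey suffixed onto the index rowkey.
    // If no separator is found, the whole key is returned unchanged.
    static byte[] dataRowKey(byte[] indexRowKey) {
        int sep = -1;
        for (int i = 0; i < indexRowKey.length; i++) {
            if (indexRowKey[i] == SEP) { sep = i; break; }
        }
        return Arrays.copyOfRange(indexRowKey, sep + 1, indexRowKey.length);
    }

    // Decide which daughter an index cell belongs to by comparing the
    // embedded data rowkey with the data table's split row.
    static boolean belongsToTopDaughter(byte[] indexRowKey, byte[] dataSplitRow) {
        return Arrays.compare(dataRowKey(indexRowKey), dataSplitRow) >= 0;
    }

    public static void main(String[] args) {
        byte[] idx = "val1\0row9".getBytes(); // "val1" + SEP + data rowkey "row9"
        System.out.println(belongsToTopDaughter(idx, "row5".getBytes())); // true
    }
}
```

This is why the split row passed to splitStoreFile looks "wrong" relative to the index key order: the comparison only makes sense after extracting the data rowkey.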

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
>





[jira] [Commented] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-26 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225894#comment-14225894
 ] 

rajeshbabu commented on HBASE-12583:


[~stack]
bq. This is for the secondary index with the companion regions?
Yes [~stack]. It's for region-level secondary indexing.
bq. we cannot support custom half-store readers and the writing of reference 
files outside of a region's key range.
The split row is definitely within the region's key range, but it may not be 
in the storefile's key range. Earlier we created reference files even when the 
split row was outside the storefile range, but a check to skip them was added 
as a performance improvement.
The custom half-store reader extends StoreFile.Reader and, as Ram said, we 
create it in coprocessor hooks. 
bq. Can this be done at a higher level, at the CP level?
No, we are not able to do this through coprocessors.

Thanks.

> Allow creating reference files even the split row not lies in the storefile 
> range if required
> -
>
> Key: HBASE-12583
> URL: https://issues.apache.org/jira/browse/HBASE-12583
> Project: HBase
>  Issue Type: Improvement
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>  Labels: Phoenix
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
>





[jira] [Created] (HBASE-12583) Allow creating reference files even the split row not lies in the storefile range if required

2014-11-25 Thread rajeshbabu (JIRA)
rajeshbabu created HBASE-12583:
--

 Summary: Allow creating reference files even the split row not 
lies in the storefile range if required
 Key: HBASE-12583
 URL: https://issues.apache.org/jira/browse/HBASE-12583
 Project: HBase
  Issue Type: Improvement
Reporter: rajeshbabu
Assignee: rajeshbabu
 Fix For: 2.0.0, 0.98.9, 0.99.2


Currently in HRegionFileSystem#splitStoreFile we do not create reference files 
if the split row does not lie in the storefile range, which means one of the 
child regions would not have any data.
{code}
    // Check whether the split row lies in the range of the store file
// If it is outside the range, return directly.
if (top) {
  //check if larger than last key.
  KeyValue splitKey = KeyValueUtil.createFirstOnRow(splitRow);
  byte[] lastKey = f.createReader().getLastKey();
  // If lastKey is null means storefile is empty.
  if (lastKey == null) return null;
  if (f.getReader().getComparator().compareFlatKey(splitKey.getBuffer(),
  splitKey.getKeyOffset(), splitKey.getKeyLength(), lastKey, 0, 
lastKey.length) > 0) {
return null;
  }
} else {
  //check if smaller than first key
  KeyValue splitKey = KeyValueUtil.createLastOnRow(splitRow);
  byte[] firstKey = f.createReader().getFirstKey();
  // If firstKey is null means storefile is empty.
  if (firstKey == null) return null;
  if (f.getReader().getComparator().compareFlatKey(splitKey.getBuffer(),
  splitKey.getKeyOffset(), splitKey.getKeyLength(), firstKey, 0, 
firstKey.length) < 0) {
return null;
  }
}
{code}

In some cases the split row must be compared with only part of the rowkey (a 
composite rowkey), mainly for secondary index tables. There we need to create 
reference files even when the split row does not lie in the storefile range, 
so that the data can be rewritten into the child regions by a custom half 
store file reader that compares the relevant part of the rowkey with the 
split row.

The check that compares the split row with the storefile range and returns 
early could be skipped via a special boolean attribute in the table descriptor 
when it is set to true. Alternatively, coprocessor hooks could let us create 
the references ourselves and bypass the check.
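A rough, stdlib-only simulation of the proposed skip mechanism (class and method names are illustrative, not the actual HBase API): a split policy exposes a boolean hook, and the splitStoreFile logic consults it before applying the range check.

```java
import java.util.Arrays;

public class SplitPolicyDemo {
    static class RegionSplitPolicy {
        // Default policy keeps the optimization: never skip the check.
        boolean skipStoreFileRangeCheck() { return false; }
    }

    static class IndexSplitPolicy extends RegionSplitPolicy {
        // Secondary-index tables always want both reference files created.
        @Override boolean skipStoreFileRangeCheck() { return true; }
    }

    // Mirrors the splitStoreFile decision for the top daughter:
    // returns true if a reference file would be created.
    static boolean createReference(RegionSplitPolicy policy, byte[] splitRow,
                                   byte[] lastKey, boolean top) {
        if (!policy.skipStoreFileRangeCheck()) {
            // split row beyond the file's last key: top daughter gets nothing
            if (top && Arrays.compare(splitRow, lastKey) > 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] split = "z".getBytes(), last = "m".getBytes();
        System.out.println(createReference(new RegionSplitPolicy(), split, last, true)); // false
        System.out.println(createReference(new IndexSplitPolicy(), split, last, true));  // true
    }
}
```

Routing the decision through the split policy, rather than a raw table attribute, keeps the behavior pluggable per table, which is the direction the later comments on this issue settled on.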





[jira] [Updated] (HBASE-10200) Better error message when HttpServer fails to start due to java.net.BindException

2014-10-13 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10200:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to master and branch-1.
Thanks for the patch [~kiranmr].
Thanks for reviews Ted and Jeffrey.

> Better error message when HttpServer fails to start due to 
> java.net.BindException
> -
>
> Key: HBASE-10200
> URL: https://issues.apache.org/jira/browse/HBASE-10200
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Kiran Kumar M R
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 10200.out, HBASE-10020-V2.patch, HBASE-10020-V3.patch, 
> HBASE-10020.patch, HBASE-10020.patch
>
>
> Starting HBase using Hoya, I saw the following in log:
> {code}
> 2013-12-17 21:49:06,758 INFO  [master:hor12n19:42587] http.HttpServer: 
> HttpServer.start() threw a non Bind IOException
> java.net.BindException: Port in use: hor12n14.gq1.ygridcore.net:12432
> at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:742)
> at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)
> at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:586)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.net.BindException: Cannot assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:344)
> at sun.nio.ch.Net.bind(Net.java:336)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
> at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:738)
> {code}
> This was due to hbase.master.info.bindAddress specifying a static address 
> while Hoya allocates the master dynamically.
> A better error message should be provided: when bindAddress points to a host 
> other than the local host, the message should remind the user to remove or 
> adjust the hbase.master.info.bindAddress config param in hbase-site.xml.





[jira] [Commented] (HBASE-10536) ImportTsv should fail fast if any of the column family passed to the job is not present in the table

2014-09-23 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145946#comment-14145946
 ] 

rajeshbabu commented on HBASE-10536:


[~dennyjoseph]
Thanks for the patch. Overall the patch lgtm.

To find the new javadoc warnings just run the following command.
{code}
mvn javadoc:javadoc
{code}

Once this patch is committed you will be added as a contributor and the issue 
will be assigned to you.

> ImportTsv should fail fast if any of the column family passed to the job is 
> not present in the table
> 
>
> Key: HBASE-10536
> URL: https://issues.apache.org/jira/browse/HBASE-10536
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
> Fix For: 2.0.0
>
> Attachments: HBASE-10536.patch, HBASE-10536.patch
>
>
> While checking the 0.98 RC I was running the bulkload tools and by mistake 
> passed a wrong column family to importtsv. LoadIncrementalHFiles failed with 
> the following exception:
> {code}
> Exception in thread "main" java.io.IOException: Unmatched family names found: 
> unmatched family names in HFiles to be bulkloaded: [f1]; valid family names 
> of table test are: [f]
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:241)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:823)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:828)
> {code}
>  
> It is better to fail fast if any of the passed column families is not 
> present in the table.





[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-09-08 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14125629#comment-14125629
 ] 

rajeshbabu commented on HBASE-10576:


[~stack]

Before answering your questions I just wanted to give some background on this 
issue. Sorry, you are probably already aware of most of it.
At some point we were asked to isolate the balancer from the index code and 
integrate local indexing into Phoenix.
https://issues.apache.org/jira/browse/HBASE-9203?focusedCommentId=13897450&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13897450
https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13919739&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13919739
Recently the local-indexing feature was integrated into Phoenix and it uses 
the same custom load balancer for colocation.
But with the custom load balancer we will have the problems Jeffrey pointed 
out earlier.
https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13980572&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13980572

As an improvement, for stronger colocation, I want to implement the 
shadow-regions feature suggested by Jeffrey.
https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13977655&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13977655
https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13978998&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13978998


bq. You do not answer how I turn on/off colocation.

HBASE-10536_v2.patch has the custom load balancer implementation. To enable it, 
we need to configure the property below in hbase-site.xml on the master side and 
restart it.
{code}
<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.hadoop.hbase.master.balancer.IndexLoadBalancer</value>
</property>
{code}

For shadow regions, I have yet to make it possible to enable/disable the feature per table.

bq. What is the plan for landing this secondary index feature?
We need to make some design-level changes so that it can be acceptable to 
others: 
1) proper APIs for specifying indexes, 2) clear-cut interfaces for preparing 
index updates, 3) pluggable failure policies, etc.
Most of these things are already clear in Phoenix, and we can borrow or reuse 
them from there; otherwise we would need to maintain our secondary index code 
independently. Please suggest what you feel, Stack.
bq.  Is it to come in piecemeal via stuff like this intermediate change to the 
balancer?
No Stack, we have just isolated the custom load balancer from our secondary 
index so that it will be helpful in Phoenix. 
bq. What changes to core are required? 
Almost all the kernel changes have been contributed; only very minor kernel 
changes remain, such as changing access specifiers so that we can reuse HBase 
code for some functionality. 
I will list them and upload here.

Forgive me if I misunderstood. 
Thanks.


> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch, 
> HBASE-10576_shadow_regions_wip.patch
>
>
> To support local indexing, both the user table and the index table should have 
> the same split keys. This issue is to provide a custom balancer to colocate the 
> regions of two tables that have the same split keys. 
> This helps in Phoenix as well.
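
The colocation rule described above can be sketched outside HBase: an index region is placed on whatever server currently hosts the data region with the same start key. This is a simplified, self-contained illustration; the class and method names are invented for this sketch and are not the actual IndexLoadBalancer API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the colocation rule used by a custom balancer:
// index regions follow the data regions that share their start key.
public class IndexColocator {
    // startKey -> server, for the data table's regions
    private final Map<String, String> dataAssignment = new HashMap<>();

    public void assignDataRegion(String startKey, String server) {
        dataAssignment.put(startKey, server);
    }

    // An index region with the same start key is colocated with its data region;
    // if no matching data region is known, fall back to a normal placement.
    public String placeIndexRegion(String startKey, String fallbackServer) {
        return dataAssignment.getOrDefault(startKey, fallbackServer);
    }

    public static void main(String[] args) {
        IndexColocator balancer = new IndexColocator();
        balancer.assignDataRegion("", "rs1");       // first region
        balancer.assignDataRegion("row500", "rs2");
        // Index regions land on the servers holding the matching data regions.
        System.out.println(balancer.placeIndexRegion("", "rs3"));        // rs1
        System.out.println(balancer.placeIndexRegion("row500", "rs3"));  // rs2
    }
}
```

Since both tables have the same split keys, every index region finds exactly one matching data region, which is what makes this lookup well defined.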





[jira] [Comment Edited] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-08 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14125339#comment-14125339
 ] 

rajeshbabu edited comment on HBASE-11894 at 9/8/14 9:25 AM:


bq. Do we need to handle similar requirements for other transition codes? 
We need to handle only SPLIT_PONR and MERGE_PONR, Jimmy.

bq. How to describe these transitions we report the master such that it can 
make sense of them? Is master recording these ops critical or is it ok if they 
are dropped?
The interface is required only for reporting the existing transitions 
(SPLIT_PONR and MERGE_PONR), Stack. These meta ops are important to ensure 
atomicity of the data region and index region splits in Phoenix local 
indexing(https://github.com/apache/phoenix/blob/4.0/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexSplitter.java).
 
Thanks.


was (Author: rajesh23):
bq. Do we need to handle similar requirements for other transition codes? 
We need to handle only SPLIT_PONR and MERGE_PONR, Jimmy.

bq. How to describe these transitions we report the master such that it can 
make sense of them? Is master recording these ops critical or is it ok if they 
are dropped?
These are existing transitions only, Stack. These meta ops are important to 
ensure atomicity of the data region and index region splits. The meta entries 
of index region split updates are passed through coprocessor hooks in Phoenix 
(0.98 onwards).

> MetaEntries from coprocessor hooks in split and merge are not getting added 
> to hbase:meta after HBASE-11611
> ---
>
> Key: HBASE-11894
> URL: https://issues.apache.org/jira/browse/HBASE-11894
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 2.0.0
>
>
> As part of HBASE-9249 & HBASE-9489, new hooks were added in split and merge 
> which take meta entries from coprocessor hooks, if any. These meta entries help 
> to ensure atomicity of the split (merge) of regions by the server and the split 
> (merge) of the regions handled in coprocessors (this is required in the 
> secondary indexing case).
> After HBASE-11611 the meta entries are not getting added to meta in both 
> split and merge.
> {code}
> @MetaMutationAnnotation
> List<Mutation> metaEntries = new ArrayList<Mutation>();
> if (rsCoprocessorHost != null) {
>   if (rsCoprocessorHost.preMergeCommit(this.region_a, this.region_b, metaEntries)) {
>     throw new IOException("Coprocessor bypassing regions " + this.region_a + " "
>         + this.region_b + " merge.");
>   }
>   try {
>     for (Mutation p : metaEntries) {
>       HRegionInfo.parseRegionName(p.getRow());
>     }
>   } catch (IOException e) {
>     LOG.error("Row key of mutation from coprocessor is not parsable as region name."
>         + " Mutations from coprocessor should only be for hbase:meta table.", e);
>     throw e;
>   }
> }
>
> // This is the point of no return. Similar with SplitTransaction.
> // If we reach the PONR then subsequent failures need to crash out this
> // regionserver.
> this.journal.add(JournalEntry.PONR);
>
> // Add merged region and delete region_a and region_b as an atomic update.
> // See HBASE-7721. This update to hbase:meta determines whether the region
> // is merged or not in case of failures. If it is successful, the master
> // will roll forward; if not, the master will roll back.
> if (services != null &&
>     !services.reportRegionStateTransition(TransitionCode.MERGE_PONR,
>         mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) {
>   // Passed PONR, let SSH clean it up
>   throw new IOException("Failed to notify master that merge passed PONR: "
>       + region_a.getRegionInfo().getRegionNameAsString() + " and "
>       + region_b.getRegionInfo().getRegionNameAsString());
> }
> {code}
> I think while reporting the region state transition to the master we need to 
> pass the meta entries also, so that we can add them to meta along with the 
> split or merge updates.





[jira] [Commented] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-08 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14125339#comment-14125339
 ] 

rajeshbabu commented on HBASE-11894:


bq. Do we need to handle similar requirements for other transition codes? 
We need to handle only SPLIT_PONR and MERGE_PONR, Jimmy.

bq. How to describe these transitions we report the master such that it can 
make sense of them? Is master recording these ops critical or is it ok if they 
are dropped?
These are existing transitions only, Stack. These meta ops are important to 
ensure atomicity of the data region and index region splits. The meta entries 
of index region split updates are passed through coprocessor hooks in Phoenix 
(0.98 onwards).






[jira] [Commented] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-05 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123085#comment-14123085
 ] 

rajeshbabu commented on HBASE-11894:


[~jxiang]
Currently the hooks are region level, but moving them to system-level hooks may 
not be good.
Instead, what about having a new interface to pass any meta mutations while 
reporting a region transition?
{code}
  /**
   * Notify master that a handler requests to change a region state,
   * passing any meta mutations to apply along with the transition.
   */
  boolean reportRegionStateTransition(TransitionCode code, List<Mutation> mutations,
      HRegionInfo... hris);
{code}
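
The intent of the proposed interface is that the coprocessor's meta mutations travel with the transition report, so the master can apply them in the same batch as its own split/merge meta update. A minimal, self-contained sketch of that shape; all types here are simplified stand-ins for this illustration, not the real HBase classes:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: meta mutations from coprocessors are an explicit parameter of the
// region state transition report, so transition and mutations land together.
public class TransitionReportSketch {
    enum TransitionCode { SPLIT_PONR, MERGE_PONR }

    // Stand-in for an hbase:meta mutation (only the row key matters here).
    static class Mutation {
        final String row;
        Mutation(String row) { this.row = row; }
    }

    interface RegionServerServices {
        boolean reportRegionStateTransition(TransitionCode code,
            List<Mutation> mutations, String... regionNames);
    }

    // A toy "master" that records every entry it would write to hbase:meta.
    static class RecordingMaster implements RegionServerServices {
        final List<String> metaWrites = new ArrayList<>();

        @Override
        public boolean reportRegionStateTransition(TransitionCode code,
                List<Mutation> mutations, String... regionNames) {
            // The transition and the coprocessor mutations go into one batch.
            for (String r : regionNames) metaWrites.add(code + ":" + r);
            for (Mutation m : mutations) metaWrites.add("cp:" + m.row);
            return true;
        }
    }

    public static void main(String[] args) {
        RecordingMaster master = new RecordingMaster();
        List<Mutation> cpEntries = new ArrayList<>();
        cpEntries.add(new Mutation("indexRegionRow"));
        master.reportRegionStateTransition(TransitionCode.MERGE_PONR, cpEntries,
            "regionA", "regionB");
        System.out.println(master.metaWrites);
        // [MERGE_PONR:regionA, MERGE_PONR:regionB, cp:indexRegionRow]
    }
}
```

Batching both in one report is what gives the atomicity the secondary-index case needs: either the merge record and the index-region mutations reach meta together, or neither does.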






[jira] [Commented] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-05 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123068#comment-14123068
 ] 

rajeshbabu commented on HBASE-11894:


Let me try moving the hooks to master [~jxiang]. Thanks.






[jira] [Assigned] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-05 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu reassigned HBASE-11894:
--

Assignee: rajeshbabu






[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-09-05 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122708#comment-14122708
 ] 

rajeshbabu commented on HBASE-10576:


[~stack]
bq. How is this balancer patch supposed to work?
Once the shadow-region feature goes in, the balancer patch is not required; 
we can ensure the co-location using coprocessors.
bq.  I turn on this feature for a table? Or is it for whole cluster?
It's region-wise, Stack. A shadow region is like an offline region, but not 
completely: it is offline on the master side, yet can be opened/closed at the 
region server through coprocessors.
If we make a region a shadow region, the master considers it offline and will 
not move/reassign it in any case, including master/RS failover. At the same 
time, the region serves reads and writes from external clients if it's online 
at the RS.

In the secondary index case, to ensure co-location we can make the index 
region(s) shadow regions and open/close them while opening/closing the data 
region(s) through coprocessors.
Since those regions cannot be moved by the master, they will always be served 
by the RS holding the data region.

The shadow-region operation can be performed on a region similarly to a split 
or merge.
If we want to make a whole table shadow, we need to make all its regions shadow.

Thanks.
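
The rule above (the master treats a shadow region as offline and never reassigns it, while the RS may still serve it) can be sketched as a filter in the assignment path. A hypothetical, simplified model for illustration only; the names are invented, not the actual AssignmentManager code:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the shadow-region rule: the master skips shadow regions when
// picking candidates for (re)assignment; they are opened/closed only through
// coprocessors, alongside their data region.
public class ShadowRegionFilter {
    static class Region {
        final String name;
        final boolean shadow;
        Region(String name, boolean shadow) { this.name = name; this.shadow = shadow; }
    }

    // Only non-shadow regions are candidates for master-driven assignment.
    public static List<Region> assignableRegions(List<Region> all) {
        List<Region> out = new ArrayList<>();
        for (Region r : all) {
            if (!r.shadow) out.add(r);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Region> regions = new ArrayList<>();
        regions.add(new Region("data,row1", false));
        regions.add(new Region("index,row1", true)); // shadow: follows its data region
        // The master would only ever move/reassign the data region.
        System.out.println(assignableRegions(regions).size()); // 1
    }
}
```

Because the filter is applied on every assignment decision, colocation survives balancer runs and master/RS failover without any cooperation from the balancer itself.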






[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-09-05 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122604#comment-14122604
 ] 

rajeshbabu commented on HBASE-10576:


[~tedyu]
Thanks for the review.
bq. Why is check of shadow tied to offline status ?
There is no relation to the offline status; that needs to be separated. It's a 
mistake, I will correct it.
bq. Can you add javadoc for the above method ?
I will add it.
bq. What's the purpose for adding trailing delimiter ?
The delimiter is not required; I will remove it.

Once I complete the patch I will put it on RB.







[jira] [Updated] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-09-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10576:
---
Attachment: HBASE-10576_shadow_regions_wip.patch

Here is the WIP patch introducing a new 'shadow' state for regions, so that 
the AM/balancer can skip their assignment while reads and writes can still be 
served by the region.

A region is made a shadow region using a transaction approach.

[~jeffreyz], is this approach fine with you? 
If it's ok, I will add more tests, APIs and utils as required. 

Ping [~jxiang] [~stack]

Thanks.






[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-09-04 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121903#comment-14121903
 ] 

rajeshbabu edited comment on HBASE-10576 at 9/4/14 8:07 PM:


Here is the WIP patch introducing a new 'shadow' state for regions, so that 
the AM/balancer can skip their assignment while reads and writes can still be 
served by the region.

A region is made a shadow region using a transaction approach.

[~jeffreyz], is this approach fine with you? 
If it's ok, I will add more tests, APIs and utils tomorrow and upload a new patch. 

Ping [~jxiang] [~stack]

Thanks.


was (Author: rajesh23):
Here is the WIP patch introducing a new 'shadow' state for regions, so that 
the AM/balancer can skip their assignment while reads and writes can still be 
served by the region.

A region is made a shadow region using a transaction approach.

[~jeffreyz], is this approach fine with you? 
If it's ok, I will add more tests, APIs and utils as required. 

Ping [~jxiang] [~stack]

Thanks.






[jira] [Commented] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-04 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121478#comment-14121478
 ] 

rajeshbabu commented on HBASE-11894:


The region state changes need not be passed, [~anoop.hbase]; they will be 
handled properly by the core itself. We need to pass only the split or merge 
updates of the regions handled in the CP (but that completely depends on the CP 
implementation).







[jira] [Commented] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-04 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121139#comment-14121139
 ] 

rajeshbabu commented on HBASE-11894:


ping [~jxiang], what do you say?

> MetaEntries from coprocessor hooks in split and merge are not getting added 
> to hbase:meta after HBASE-11611
> ---
>
> Key: HBASE-11894
> URL: https://issues.apache.org/jira/browse/HBASE-11894
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: rajeshbabu
> Fix For: 2.0.0
>
>
> As part of HBASE-9249 & HBASE-9489, added new hooks in split and merge which 
> take meta entries from coprocessor hooks if any. These meta entries helps to 
> ensure atomicity of split(merge) of regions by server and split(merge) of the 
> regions handled in coprocessors(This is required in secondary indexing case).
> After HBASE-11611 the meta entries are not getting added to meta both in 
> split and merge.
> {code}
> @MetaMutationAnnotation
> List<Mutation> metaEntries = new ArrayList<Mutation>();
> if (rsCoprocessorHost != null) {
>   if (rsCoprocessorHost.preMergeCommit(this.region_a, this.region_b, metaEntries)) {
>     throw new IOException("Coprocessor bypassing regions " + this.region_a + " "
>         + this.region_b + " merge.");
>   }
>   try {
>     for (Mutation p : metaEntries) {
>       HRegionInfo.parseRegionName(p.getRow());
>     }
>   } catch (IOException e) {
>     LOG.error("Row key of mutation from coprocessor is not parsable as region name."
>         + " Mutations from coprocessor should only be for hbase:meta table.", e);
>     throw e;
>   }
> }
> // This is the point of no return. Similar with SplitTransaction.
> // If we reach the PONR then subsequent failures need to crash out this
> // regionserver.
> this.journal.add(JournalEntry.PONR);
> // Add merged region and delete region_a and region_b
> // as an atomic update. See HBASE-7721. This update to hbase:meta
> // determines whether the region is merged or not in case of failures.
> // If it is successful, master will roll-forward; if not, master will
> // rollback.
> if (services != null
>     && !services.reportRegionStateTransition(TransitionCode.MERGE_PONR,
>       mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) {
>   // Passed PONR, let SSH clean it up
>   throw new IOException("Failed to notify master that merge passed PONR: "
>       + region_a.getRegionInfo().getRegionNameAsString() + " and "
>       + region_b.getRegionInfo().getRegionNameAsString());
> }
> {code}
> I think that while reporting the region state transition to the master we should 
> also pass the meta entries, so that we can add them to meta along with the split 
> or merge updates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-11894) MetaEntries from coprocessor hooks in split and merge are not getting added to hbase:meta after HBASE-11611

2014-09-04 Thread rajeshbabu (JIRA)
rajeshbabu created HBASE-11894:
--

 Summary: MetaEntries from coprocessor hooks in split and merge are 
not getting added to hbase:meta after HBASE-11611
 Key: HBASE-11894
 URL: https://issues.apache.org/jira/browse/HBASE-11894
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Reporter: rajeshbabu
 Fix For: 2.0.0


As part of HBASE-9249 & HBASE-9489, new hooks were added in split and merge that 
take meta entries from coprocessor hooks, if any. These meta entries help to 
ensure atomicity between the split (merge) of regions by the server and the split 
(merge) of the regions handled in coprocessors (this is required in the secondary 
indexing case).

After HBASE-11611 the meta entries are no longer getting added to meta in either 
split or merge.
{code}
@MetaMutationAnnotation
List<Mutation> metaEntries = new ArrayList<Mutation>();
if (rsCoprocessorHost != null) {
  if (rsCoprocessorHost.preMergeCommit(this.region_a, this.region_b, metaEntries)) {
    throw new IOException("Coprocessor bypassing regions " + this.region_a + " "
        + this.region_b + " merge.");
  }
  try {
    for (Mutation p : metaEntries) {
      HRegionInfo.parseRegionName(p.getRow());
    }
  } catch (IOException e) {
    LOG.error("Row key of mutation from coprocessor is not parsable as region name."
        + " Mutations from coprocessor should only be for hbase:meta table.", e);
    throw e;
  }
}

// This is the point of no return. Similar with SplitTransaction.
// If we reach the PONR then subsequent failures need to crash out this
// regionserver.
this.journal.add(JournalEntry.PONR);

// Add merged region and delete region_a and region_b
// as an atomic update. See HBASE-7721. This update to hbase:meta
// determines whether the region is merged or not in case of failures.
// If it is successful, master will roll-forward; if not, master will
// rollback.
if (services != null
    && !services.reportRegionStateTransition(TransitionCode.MERGE_PONR,
      mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) {
  // Passed PONR, let SSH clean it up
  throw new IOException("Failed to notify master that merge passed PONR: "
      + region_a.getRegionInfo().getRegionNameAsString() + " and "
      + region_b.getRegionInfo().getRegionNameAsString());
}
{code}

I think that while reporting the region state transition to the master we should 
also pass the meta entries, so that we can add them to meta along with the split 
or merge updates.
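The proposal above can be sketched with a toy model (all class and method names here are hypothetical, not the actual HBase API): the coprocessor-supplied meta entries travel in the same batch as the split/merge bookkeeping rows, so hbase:meta receives all of them or none.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the proposal: the "meta table" is a plain map and a "mutation"
// is a (row, value) pair. applyAtomically validates every mutation first and
// only then writes, so coprocessor entries and merge bookkeeping rows land
// together or not at all. Names are illustrative only.
class MetaBatchSketch {
  final Map<String, String> meta = new HashMap<>();

  void applyAtomically(List<String[]> mutations) {
    // Validation pass: reject the whole batch on any unparsable row key,
    // mirroring the HRegionInfo.parseRegionName check in the snippet above.
    for (String[] m : mutations) {
      if (m.length != 2 || m[0].isEmpty()) {
        throw new IllegalArgumentException("unparsable mutation row");
      }
    }
    // Write pass: nothing has been written if validation threw.
    for (String[] m : mutations) {
      meta.put(m[0], m[1]);
    }
  }
}
```

With this shape, the MERGE_PONR (or SPLIT_PONR) report would carry the coprocessor mutations alongside the merged-region update instead of dropping them.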



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-08-14 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096747#comment-14096747
 ] 

rajeshbabu edited comment on HBASE-10576 at 8/14/14 8:44 AM:
-

I didn't find time to work on this issue because of some internal assignments and 
the local indexing integration into Phoenix. 
Now I have time to do this and have started working on it.


was (Author: rajesh23):
Didn't find time to work because of some internal assignments and local 
indexing integration to phoenix. 
Now I have time to do this. I have started working on this..

> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing both user table and index table should have same 
> split keys. This issue to provide custom balancer to colocate the regions of 
> two tables which are having same split keys. 
> This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-08-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10576:
---

Status: In Progress  (was: Patch Available)

I didn't find time to work on this because of some internal assignments and the 
local indexing integration into Phoenix. 
Now I have time to do this and have started working on it.

> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing both user table and index table should have same 
> split keys. This issue to provide custom balancer to colocate the regions of 
> two tables which are having same split keys. 
> This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11493) Autorestart option is not working because of stale znode "shutdown"

2014-07-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-11493:
---

  Resolution: Fixed
Assignee: nijel
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Autorestart option is not working because of stale znode  "shutdown"
> 
>
> Key: HBASE-11493
> URL: https://issues.apache.org/jira/browse/HBASE-11493
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Nishan Shetty
>Assignee: nijel
> Fix For: 0.99.0, 0.98.4, 2.0.0
>
> Attachments: HBASE-11493.patch, HBASE-11493.v1.patch
>
>
> In the hbase-daemon.sh autorestart option, the znode used is "shutdown":
> {code}
> zshutdown=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.state`
> if [ "$zshutdown" == "null" ]; then zshutdown="shutdown"; fi
> zFullShutdown=$zparent/$zshutdown
> {code}
> The "shutdown" znode is not available any more; it has been changed to "running".
> *Since this node is not available, the script does not restart the process and it 
> exits.*
> So the autorestart script needs to be changed to make use of the "running" znode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-11493) Autorestart option is not working because of stale znode "shutdown"

2014-07-13 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060359#comment-14060359
 ] 

rajeshbabu edited comment on HBASE-11493 at 7/14/14 5:50 AM:
-

Committed to 0.98+ branches. 
Thanks for the patch, [~nijel].
Thanks for the review, Andrew and Ted.


was (Author: rajesh23):
committed to 0.98+ branches. Thanks for review Andrew,Ted.

> Autorestart option is not working because of stale znode  "shutdown"
> 
>
> Key: HBASE-11493
> URL: https://issues.apache.org/jira/browse/HBASE-11493
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Nishan Shetty
> Fix For: 0.99.0, 0.98.4, 2.0.0
>
> Attachments: HBASE-11493.patch, HBASE-11493.v1.patch
>
>
> In the hbase-daemon.sh autorestart option, the znode used is "shutdown":
> {code}
> zshutdown=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.state`
> if [ "$zshutdown" == "null" ]; then zshutdown="shutdown"; fi
> zFullShutdown=$zparent/$zshutdown
> {code}
> The "shutdown" znode is not available any more; it has been changed to "running".
> *Since this node is not available, the script does not restart the process and it 
> exits.*
> So the autorestart script needs to be changed to make use of the "running" znode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11493) Autorestart option is not working because of stale znode "shutdown"

2014-07-13 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060359#comment-14060359
 ] 

rajeshbabu commented on HBASE-11493:


Committed to 0.98+ branches. Thanks for the review, Andrew and Ted.

> Autorestart option is not working because of stale znode  "shutdown"
> 
>
> Key: HBASE-11493
> URL: https://issues.apache.org/jira/browse/HBASE-11493
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Nishan Shetty
> Fix For: 0.99.0, 0.98.4, 2.0.0
>
> Attachments: HBASE-11493.patch, HBASE-11493.v1.patch
>
>
> In the hbase-daemon.sh autorestart option, the znode used is "shutdown":
> {code}
> zshutdown=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.state`
> if [ "$zshutdown" == "null" ]; then zshutdown="shutdown"; fi
> zFullShutdown=$zparent/$zshutdown
> {code}
> The "shutdown" znode is not available any more; it has been changed to "running".
> *Since this node is not available, the script does not restart the process and it 
> exits.*
> So the autorestart script needs to be changed to make use of the "running" znode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11493) Autorestart option is not working because of stale znode "shutdown"

2014-07-10 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057649#comment-14057649
 ] 

rajeshbabu commented on HBASE-11493:


Good catch [~nijel]. +1
I will commit this tomorrow.

> Autorestart option is not working because of stale znode  "shutdown"
> 
>
> Key: HBASE-11493
> URL: https://issues.apache.org/jira/browse/HBASE-11493
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Nishan Shetty
> Fix For: 0.99.0, 0.98.4, 2.0.0
>
> Attachments: HBASE-11493.patch, HBASE-11493.v1.patch
>
>
> In the hbase-daemon.sh autorestart option, the znode used is "shutdown":
> {code}
> zshutdown=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.state`
> if [ "$zshutdown" == "null" ]; then zshutdown="shutdown"; fi
> zFullShutdown=$zparent/$zshutdown
> {code}
> The "shutdown" znode is not available any more; it has been changed to "running".
> *Since this node is not available, the script does not restart the process and it 
> exits.*
> So the autorestart script needs to be changed to make use of the "running" znode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11493) Autorestart option is not working because of stale znode "shutdown"

2014-07-10 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-11493:
---

Fix Version/s: 2.0.0
   0.98.4
   0.99.0

> Autorestart option is not working because of stale znode  "shutdown"
> 
>
> Key: HBASE-11493
> URL: https://issues.apache.org/jira/browse/HBASE-11493
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.3
>Reporter: Nishan Shetty
> Fix For: 0.99.0, 0.98.4, 2.0.0
>
> Attachments: HBASE-11493.patch, HBASE-11493.v1.patch
>
>
> In the hbase-daemon.sh autorestart option, the znode used is "shutdown":
> {code}
> zshutdown=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.state`
> if [ "$zshutdown" == "null" ]; then zshutdown="shutdown"; fi
> zFullShutdown=$zparent/$zshutdown
> {code}
> The "shutdown" znode is not available any more; it has been changed to "running".
> *Since this node is not available, the script does not restart the process and it 
> exits.*
> So the autorestart script needs to be changed to make use of the "running" znode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11460) Deadlock in HMaster on masterAndZKLock in HConnectionManager

2014-07-08 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054876#comment-14054876
 ] 

rajeshbabu commented on HBASE-11460:


+1

> Deadlock in HMaster on masterAndZKLock in HConnectionManager
> 
>
> Key: HBASE-11460
> URL: https://issues.apache.org/jira/browse/HBASE-11460
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.96.0
>Reporter: Andrey Stepachev
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 0.99.0
>
> Attachments: 11460-v1.txt, threads.tdump
>
>
> On one of our clusters we got a deadlock in HMaster.
> In a nutshell, the deadlock is caused by using one HConnectionManager for serving 
> both client-like calls and calls from HMaster RPC handlers.
> HBaseAdmin uses HConnectionManager, which takes the lock masterAndZKLock.
> On the other side of this game sits TablesNamespaceManager (TNM). This class 
> uses HConnectionManager too (in my case for getting the list of available 
> namespaces). 
> The problem is that the HMaster class uses TNM for serving RPC requests.
> If we look at TNM more closely, we can see that this class is totally 
> synchronised.
> That gives us a problem.
> The web interface issues a request via HConnectionManager and locks masterAndZKLock.
> The connection is blocking, so RpcClient will spin, awaiting a reply (while 
> holding the lock).
> This is how it looks in the thread dump:
> {code}
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xc8905430> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
>   - locked <0xc8905430> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:40216)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(HConnectionManager.java:1467)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(HConnectionManager.java:2093)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1819)
>   - locked <0xd15dc668> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin$MasterCallable.prepare(HBaseAdmin.java:3187)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:119)
>   - locked <0xcd0c1238> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
>   - locked <0xcd0c1238> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3214)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.listTableDescriptorsByNamespace(HBaseAdmin.java:2265)
> {code}
> Some other client calls an HMaster RPC, which calls TablesNamespaceManager 
> methods, which in turn block on the HConnectionManager global lock 
> masterAndZKLock.
> This is how that looks:
> {code}
>   java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1699)
>   - waiting to lock <0xd15dc668> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:100)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isTableDisabled(HConnectionManager.java:874)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:1027)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:852)
>   at 
> org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:119)
>   - locked <0xcd0ef108> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:705)
>   at 
> org.apache.hadoo

[jira] [Resolved] (HBASE-11113) clone_snapshot command prints wrong name upon error

2014-05-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu resolved HBASE-11113.


Resolution: Duplicate

[~apurtell] This is fixed as part of HBASE-10533 in all the branches, hence 
closing as a duplicate. 
{code}
hbase(main):001:0> clone_snapshot 'mySnap','SYSTEM.CATALOG'

ERROR: Table already exists: SYSTEM.CATALOG!
{code}
Thanks.

> clone_snapshot command prints wrong name upon error
> ---
>
> Key: HBASE-11113
> URL: https://issues.apache.org/jira/browse/HBASE-11113
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: Andrew Purtell
>Priority: Trivial
> Fix For: 0.99.0, 0.98.3
>
>
> hbase> clone_snapshot 'snapshot', 'existing_table_name'
> ERROR: Table already exists: snapshot!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11113) clone_snapshot command prints wrong name upon error

2014-05-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-11113:
---

Assignee: (was: Rekha Joshi)

> clone_snapshot command prints wrong name upon error
> ---
>
> Key: HBASE-11113
> URL: https://issues.apache.org/jira/browse/HBASE-11113
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: Andrew Purtell
>Priority: Trivial
> Fix For: 0.99.0, 0.98.3
>
>
> hbase> clone_snapshot 'snapshot', 'existing_table_name'
> ERROR: Table already exists: snapshot!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-25 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980728#comment-13980728
 ] 

rajeshbabu edited comment on HBASE-10576 at 4/25/14 9:34 AM:
-

If I understand [~jeffreyz]'s idea in the above 
comment(https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13978998&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13978998)
 correctly, 
I think we can do the same without any changes in the kernel.
The steps are like this:
1) Before creating the index table we need to disable the user table.
2) Create the index table with the same split keys as the user table and disable it, 
so from then on the index regions won't be assigned by the AM any more. (There is no 
need for any special state like shadow.)
3) Enable the user table. While opening a user region we can get the corresponding 
index region from meta, the htd from the namenode, and just open it with the method 
below:
{code}
  public static HRegion openHRegion(final HRegionInfo info,
      final HTableDescriptor htd, final HLog wal,
      final Configuration conf)
  throws IOException {
    return openHRegion(info, htd, wal, conf, null, null);
  }
{code}
After that, maintain the user-region-to-index-region mapping.
4) While scanning, if the query conditions involve covering indexes we can just 
scan the index region in the hooks and skip scanning the user region via bypass.
Otherwise, get the row keys from the index region, seek to each row key in the user 
region, and get the required information.

I will do a prototype of this and look for any problems down the line, mainly during 
split or merge. Mostly there should not be any problem. 
What do you say, [~jeffreyz]?



was (Author: rajesh23):
If I understand the [~jeffreyz] idea in the above 
comment(https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13978998&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13978998)
 correctly, 
I think we can do the same without any changes in the kernel.
The steps are like this
1) Before creating index table we need to disable user table.
2) create index table with the same split keys as user table and disable it. So 
next time onwards index regions wont be assigned anymore by AM.(Need not have 
any special state like shadow)
3) Enable user table. While opening user region we can get corresponing index 
region from meta,htd from namenode and just open it with below method
{code}
  public static HRegion openHRegion(final HRegionInfo info,
  final HTableDescriptor htd, final HLog wal,
  final Configuration conf)
  throws IOException {
return openHRegion(info, htd, wal, conf, null, null);
  }
{code}
After that maintain user region and index region mapping.
4) While scanning if query conditions involves covering indexes we can just 
scan index region in the hooks and skip scanning user region by bypass.
Otherwise get rowkeys from index region and seek to rowkey in the user region 
and get required information.


> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing both user table and index table should have same 
> split keys. This issue to provide custom balancer to colocate the regions of 
> two tables which are having same split keys. 
> This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-25 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980824#comment-13980824
 ] 

rajeshbabu commented on HBASE-10576:


[~stack]
bq. Please tell me more how this balancer works. It is not clear skimming the 
code. 'colocate' is how you term data and index tables going together?
The main idea of the balancer is to select the same plan for a user region and its 
corresponding index region at any time. 
The custom balancer is a wrapper on top of a normal balancer like 
StochasticLoadBalancer (which is used as the delegate). 
The balancer maintains the region plans of the user table and the corresponding 
index table in a map; we call it the co-location info. 
When the AM requests plans for user (index) regions, we first check whether any 
plans are available for the corresponding index (user) regions in the co-location 
info. 
If so, the same plans are selected; otherwise the delegate generates the plans in 
any fashion (round-robin/random/retain) and we add them to the co-location info.

When a region goes offline, the corresponding region plan is removed from the 
co-location info.

Thanks.
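The plan-sharing logic described above can be sketched as follows. This is a minimal illustration with made-up names (the real patch wraps a delegate such as StochasticLoadBalancer; here a round-robin stand-in plays that role):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the co-location idea: when a plan is requested for a user (or
// index) region, first look in the co-location map for a plan already chosen
// for its counterpart; otherwise fall back to the delegate balancer (here a
// simple round-robin) and remember the choice. Names are hypothetical.
class ColocationBalancerSketch {
  private final Map<String, String> colocationInfo = new HashMap<>(); // region -> server
  private final String[] servers;
  private int next = 0;

  ColocationBalancerSketch(String... servers) { this.servers = servers; }

  // counterpart: the index region for a user region, and vice versa.
  String planFor(String region, String counterpart) {
    String server = colocationInfo.get(counterpart);
    if (server == null) {
      server = servers[next++ % servers.length]; // delegate balancer stand-in
    }
    colocationInfo.put(region, server);
    return server;
  }

  // When a region goes offline, drop its plan from the co-location info.
  void regionOffline(String region) { colocationInfo.remove(region); }
}
```

Asking for plans for a user region and then for its index region yields the same server, which is the co-location guarantee the comment describes.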

> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing both user table and index table should have same 
> split keys. This issue to provide custom balancer to colocate the regions of 
> two tables which are having same split keys. 
> This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-24 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980728#comment-13980728
 ] 

rajeshbabu edited comment on HBASE-10576 at 4/25/14 6:06 AM:
-

If I understand [~jeffreyz]'s idea in the above 
comment(https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13978998&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13978998)
 correctly, 
I think we can do the same without any changes in the kernel.
The steps are like this:
1) Before creating the index table we need to disable the user table.
2) Create the index table with the same split keys as the user table and disable it, 
so from then on the index regions won't be assigned by the AM any more. (There is no 
need for any special state like shadow.)
3) Enable the user table. While opening a user region we can get the corresponding 
index region from meta, the htd from the namenode, and just open it with the method 
below:
{code}
  public static HRegion openHRegion(final HRegionInfo info,
      final HTableDescriptor htd, final HLog wal,
      final Configuration conf)
  throws IOException {
    return openHRegion(info, htd, wal, conf, null, null);
  }
{code}
After that, maintain the user-region-to-index-region mapping.
4) While scanning, if the query conditions involve covering indexes we can just 
scan the index region in the hooks and skip scanning the user region via bypass.
Otherwise, get the row keys from the index region, seek to each row key in the user 
region, and get the required information.



was (Author: rajesh23):
If I understand the [~jeffreyz] idea in the above 
comment(https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13978998&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13978998)
 correctly, 
I think we can do the same without any changes in the kernel.
The steps are like this
1) Before creating index table we need to disable user table.
2) create index table with the same split keys as user table and disable it. So 
next time onwards index regions wont be assigned anymore by AM.(Need not have 
any special state like shadow)
3) Enable user table. While opening user region we can get corresponing index 
region from meta,htd from namenode and just open it with below method
{code}
  public static HRegion openHRegion(final HRegionInfo info,
  final HTableDescriptor htd, final HLog wal,
  final Configuration conf)
  throws IOException {
return openHRegion(info, htd, wal, conf, null, null);
  }
{code}
After taht maintain user region and index region mapping.
4) While scanning if query conditions involves covering indexes we can just 
scan index region in the hooks and skip scanning user region through bypass.
Otherwise get rowkeys from index region and seek to rowkey in the user region 
and get required information.


> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing both user table and index table should have same 
> split keys. This issue to provide custom balancer to colocate the regions of 
> two tables which are having same split keys. 
> This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-24 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980728#comment-13980728
 ] 

rajeshbabu commented on HBASE-10576:


If I understand [~jeffreyz]'s idea in the above 
comment(https://issues.apache.org/jira/browse/HBASE-10576?focusedCommentId=13978998&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13978998)
 correctly, 
I think we can do the same without any changes in the kernel.
The steps are like this:
1) Before creating the index table we need to disable the user table.
2) Create the index table with the same split keys as the user table and disable it, 
so from then on the index regions won't be assigned by the AM any more. (There is no 
need for any special state like shadow.)
3) Enable the user table. While opening a user region we can get the corresponding 
index region from meta, the htd from the namenode, and just open it with the method 
below:
{code}
  public static HRegion openHRegion(final HRegionInfo info,
      final HTableDescriptor htd, final HLog wal,
      final Configuration conf)
  throws IOException {
    return openHRegion(info, htd, wal, conf, null, null);
  }
{code}
After that, maintain the user-region-to-index-region mapping.
4) While scanning, if the query conditions involve covering indexes we can just 
scan the index region in the hooks and skip scanning the user region through bypass.
Otherwise, get the row keys from the index region, seek to each row key in the user 
region, and get the required information.
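The steps above rely on one property: because the index table is created with exactly the same split keys as the user table, the index region belonging to a user region can be found by start key alone. A minimal sketch of that pairing bookkeeping, with hypothetical names (the real lookup would read meta and call HRegion.openHRegion):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of step 3: find the index region for a user region by start key,
// "open" it alongside, and remember the pairing for later scans. All names
// are illustrative; this is not the actual HBase/Phoenix code.
class IndexRegionPairing {
  // start key -> index region name (e.g. loaded from the index table's meta rows)
  private final Map<String, String> indexRegionsByStartKey = new HashMap<>();
  // user region name -> index region name, maintained as user regions open
  private final Map<String, String> pairing = new HashMap<>();

  void registerIndexRegion(String startKey, String indexRegion) {
    indexRegionsByStartKey.put(startKey, indexRegion);
  }

  // Would be called from a postOpen-style hook of the user region.
  String openAlongside(String userRegion, String startKey) {
    String indexRegion = indexRegionsByStartKey.get(startKey);
    if (indexRegion != null) {
      pairing.put(userRegion, indexRegion); // HRegion.openHRegion(...) would go here
    }
    return indexRegion;
  }

  String indexRegionOf(String userRegion) { return pairing.get(userRegion); }
}
```

Step 4 then consults the pairing at scan time to decide whether to serve the query from the index region alone or to seek back into the user region.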


> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing both user table and index table should have same 
> split keys. This issue to provide custom balancer to colocate the regions of 
> two tables which are having same split keys. 
> This helps in Phoenix as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-24 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980629#comment-13980629
 ] 

rajeshbabu commented on HBASE-10576:


[~jeffreyz]
bq. For example, when an index region moves, there is no guarantee that the data 
region is also moved and opened simultaneously on the same RS. 
We are handling region movements properly:
1) If regions should be balanced to another region server, the custom balancer 
selects both data and index regions together, so both will be moved 
simultaneously. 
2) If the user explicitly moves a data/index region, the corresponding region will 
be moved through the hooks.
3) When a region server goes down, both data and index regions will likewise be 
assigned to some other RS simultaneously.

bq. It's possible that during an index update a region move could happen and 
the update survives the region move because of retries. So there are chances the 
index region & data region are on different RSs even within one update operation.
We are handling updates in such a way that no region will be closed in the 
middle of an update, so there is no chance of inconsistencies. If an assignment is 
in progress in the middle of an update, e.g. the user region is successfully 
assigned while the index region assignment is still in progress, we do not allow 
puts to any region, and we can retry the updates.

Do you still want to go with shadow regions?
Thanks.

> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing, both the user table and the index table should have 
> the same split keys. This issue is to provide a custom balancer to colocate the 
> regions of two tables which have the same split keys.
> This helps in Phoenix as well.





[jira] [Commented] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-23 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977915#comment-13977915
 ] 

rajeshbabu commented on HBASE-10576:


[~jeffreyz]
I had considered a similar approach of opening and closing index regions through 
coprocessors, though not exactly the shadow region concept.
I saw many problems with that approach and so didn't propose it. One good thing 
about it is that we can guarantee 100% co-location.

Can you explain more about how we would handle these?
1) How will the index table be created? To avoid listing the index regions in 
meta during table creation, do we need to split table creation into pieces and 
just create the directories/files needed for the index table regions?
2) How do we open the shadow regions if the user regions are already being 
served by some RS? Do we need to send a region open request from the client (at 
present we cannot do this), or should the user table be disabled and re-enabled?
3) How do we find the corresponding index region to open in coprocessors if we 
don't have its region info in META? Somehow from the file system?


> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing, both the user table and the index table should have 
> the same split keys. This issue is to provide a custom balancer to colocate the 
> regions of two tables which have the same split keys.
> This helps in Phoenix as well.





[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-17 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13914057#comment-13914057
 ] 

rajeshbabu edited comment on HBASE-10576 at 4/17/14 1:36 PM:
-

[~jamestaylor]
Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions. No region-wise coprocessor hooks are 
needed for this.

It is a wrapper over a normal load balancer such as StochasticLoadBalancer (or 
any other), configurable via hbase.index.balancer.delegator.class.

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may also need to populate the user table region locations from the master to 
the balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation.
{code}
IndexLoadBalancer#removeTablesFromColocation();
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set a parent table attribute on the index table descriptor so that 
the tables to colocate can be repopulated on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}
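As a rough illustration of the startup repopulation step, the sketch below rebuilds the (index table, parent table) pairs from a parent-table attribute stored on each index table descriptor. `TableDescriptor` here is a simplified stand-in for the HBase descriptor, and the attribute name is only an example.

```java
import java.util.*;

// Sketch of repopulating tables to colocate on master startup from a
// parent-table attribute stored on each index table descriptor.
// TableDescriptor is a simplified stand-in, not the HBase class.
public class RepopulateSketch {
    record TableDescriptor(String name, Map<String, String> values) {}

    static final String PARENT_TABLE_KEY = "PARENT_TABLE";

    // Returns index table -> parent (user) table pairs that would be
    // re-registered with the balancer, e.g. via addTablesToColocate().
    static Map<String, String> tablesToColocate(List<TableDescriptor> all) {
        Map<String, String> pairs = new LinkedHashMap<>();
        for (TableDescriptor td : all) {
            String parent = td.values().get(PARENT_TABLE_KEY);
            if (parent != null) {
                pairs.put(td.name(), parent);
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<TableDescriptor> tables = List.of(
            new TableDescriptor("t", Map.of()),
            new TableDescriptor("t_idx", Map.of(PARENT_TABLE_KEY, "t")));
        System.out.println(tablesToColocate(tables).get("t_idx")); // t
    }
}
```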



was (Author: rajesh23):
Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions.
It is a wrapper over a normal load balancer such as StochasticLoadBalancer (or 
any other), configurable via hbase.index.balancer.delegator.class.

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may also need to populate the user table region locations from the master to 
the balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation.
{code}
IndexLoadBalancer#removeTablesFromColocation();
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set a parent table attribute on the index table descriptor so that 
the tables to colocate can be repopulated on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}


> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing, both the user table and the index table should have 
> the same split keys. This issue is to provide a custom balancer to colocate the 
> regions of two tables which have the same split keys.
> This helps in Phoenix as well.





[jira] [Comment Edited] (HBASE-10576) Custom load balancer to co-locate the regions of two tables which are having same split keys

2014-04-17 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13914057#comment-13914057
 ] 

rajeshbabu edited comment on HBASE-10576 at 4/17/14 1:36 PM:
-


Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions.

It is a wrapper over a normal load balancer such as StochasticLoadBalancer (or 
any other), configurable via hbase.index.balancer.delegator.class.

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may also need to populate the user table region locations from the master to 
the balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation.
{code}
IndexLoadBalancer#removeTablesFromColocation();
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set a parent table attribute on the index table descriptor so that 
the tables to colocate can be repopulated on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}



was (Author: rajesh23):
[~jamestaylor]
Here is the custom load balancer that ensures co-location of user table regions 
and the corresponding index table regions. No region-wise coprocessor hooks are 
needed for this.

It is a wrapper over a normal load balancer such as StochasticLoadBalancer (or 
any other), configurable via hbase.index.balancer.delegator.class.

*Before creating the index table we should add both the user table and the 
index table to the balancer. 
We may also need to populate the user table region locations from the master to 
the balancer.
{code}
IndexLoadBalancer#addTablesToColocate();
IndexLoadBalancer#populateRegionLocations();
{code}

*Similarly, while dropping a table we can remove the tables from colocation.
{code}
IndexLoadBalancer#removeTablesFromColocation();
{code}
The above steps can be done through master coprocessor hooks because there are 
no direct client APIs for this.
The hooks implemented in TestIndexLoadBalancer.MockedMasterObserver give some 
basic idea.

*We need to set a parent table attribute on the index table descriptor so that 
the tables to colocate can be repopulated on master startup.
{code}
htd.setValue(IndexLoadBalancer.PARENT_TABLE_KEY, userTableName.toBytes());
{code}


> Custom load balancer to co-locate the regions of two tables which are having 
> same split keys
> 
>
> Key: HBASE-10576
> URL: https://issues.apache.org/jira/browse/HBASE-10576
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Attachments: HBASE-10536_v2.patch, HBASE-10576.patch
>
>
> To support local indexing, both the user table and the index table should have 
> the same split keys. This issue is to provide a custom balancer to colocate the 
> regions of two tables which have the same split keys.
> This helps in Phoenix as well.





[jira] [Updated] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-04-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10533:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to all versions, hence closing.
Thanks all for the review.

> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10533_94.patch, HBASE-10533_trunk.patch, 
> HBASE-10533_v2.patch, HBASE-10533_v3.patch
>
>
> 1) Cloning into an existing table name prints the snapshot name instead of the 
> table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason is that we print the first argument instead of the exception 
> message.
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is to 
> print the actual column families in the table, but instead the exception is 
> thrown.
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason is that the server will not throw NoSuchColumnFamilyException 
> directly; instead, RetriesExhaustedWithDetailsException will be thrown.





[jira] [Updated] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-04-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10533:
---

Attachment: HBASE-10533_v3.patch

Fixed the format issues in the trunk patch. The same can be applied to 0.96 and 
0.98.
I will commit tonight IST if there are no objections.

> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10533_94.patch, HBASE-10533_trunk.patch, 
> HBASE-10533_v2.patch, HBASE-10533_v3.patch
>
>
> 1) Cloning into an existing table name prints the snapshot name instead of the 
> table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason is that we print the first argument instead of the exception 
> message.
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is to 
> print the actual column families in the table, but instead the exception is 
> thrown.
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason is that the server will not throw NoSuchColumnFamilyException 
> directly; instead, RetriesExhaustedWithDetailsException will be thrown.





[jira] [Updated] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-04-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10533:
---

Attachment: HBASE-10533_94.patch

Here is the patch for 0.94. I checked all the cases and the patch works fine.

> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10533_94.patch, HBASE-10533_trunk.patch, 
> HBASE-10533_v2.patch
>
>
> 1) Cloning into an existing table name prints the snapshot name instead of the 
> table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason is that we print the first argument instead of the exception 
> message.
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is to 
> print the actual column families in the table, but instead the exception is 
> thrown.
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason is that the server will not throw NoSuchColumnFamilyException 
> directly; instead, RetriesExhaustedWithDetailsException will be thrown.





[jira] [Commented] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-04-13 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13968111#comment-13968111
 ] 

rajeshbabu commented on HBASE-10533:


Thanks [~jdcryans] for checking this. I will make a patch for 0.94 as well.
I will commit today, [~lhofhansl].

> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10533_trunk.patch, HBASE-10533_v2.patch
>
>
> 1) Cloning into an existing table name prints the snapshot name instead of the 
> table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason is that we print the first argument instead of the exception 
> message.
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is to 
> print the actual column families in the table, but instead the exception is 
> thrown.
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason is that the server will not throw NoSuchColumnFamilyException 
> directly; instead, RetriesExhaustedWithDetailsException will be thrown.





[jira] [Comment Edited] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-04-02 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957459#comment-13957459
 ] 

rajeshbabu edited comment on HBASE-10850 at 4/2/14 8:32 AM:


[~anoop.hbase]
bq.  Any chance you got to test the scan perf with HBase 98 or Trunk?
I have not done any performance tests with 0.98 or trunk.

I also found this issue while porting the secondary index patch to trunk. One of 
the test cases with SCVFs passes in 0.94 but not in trunk.
Somehow I missed reporting it.




was (Author: rajesh23):
bq.  Any chance you got to test the scan perf with HBase 98 or Trunk?
Not tested, Anoop.

I also found this issue while porting the secondary index patch to trunk. One of 
the test cases with SCVFs passes in 0.94 but not in trunk.
Somehow I missed reporting it.



> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Priority: Critical
> Attachments: 10850-hasFilterRow-v1.txt, 10850-hasFilterRow-v2.txt, 
> 10850-hasFilterRow-v3.txt, HBASE-10850-96.patch, HBASE-10850.patch, 
> HBASE-10850_V2.patch, HBaseSingleColumnValueFilterTest.java, 
> TestWithMiniCluster.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (the filtering column is always specified), the results 
> can be different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add to the scan, the result 
> is different. Yet, all the examples below should always return only the first 
> row (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression, as it was working properly in HBase 0.92.
> You will find in the attachment the unit tests reproducing the issue.





[jira] [Commented] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-04-02 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957459#comment-13957459
 ] 

rajeshbabu commented on HBASE-10850:


bq.  Any chance you got to test the scan perf with HBase 98 or Trunk?
Not tested, Anoop.

I also found this issue while porting the secondary index patch to trunk. One of 
the test cases with SCVFs passes in 0.94 but not in trunk.
Somehow I missed reporting it.



> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Priority: Critical
> Attachments: 10850-hasFilterRow-v1.txt, 10850-hasFilterRow-v2.txt, 
> 10850-hasFilterRow-v3.txt, HBASE-10850-96.patch, HBASE-10850.patch, 
> HBASE-10850_V2.patch, HBaseSingleColumnValueFilterTest.java, 
> TestWithMiniCluster.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (the filtering column is always specified), the results 
> can be different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add to the scan, the result 
> is different. Yet, all the examples below should always return only the first 
> row (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression, as it was working properly in HBase 0.92.
> You will find in the attachment the unit tests reproducing the issue.





[jira] [Comment Edited] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-03-18 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13938897#comment-13938897
 ] 

rajeshbabu edited comment on HBASE-10533 at 3/18/14 7:30 AM:
-

[~jdcryans]
When we receive the exception from the server, the full stack trace is appended 
to the message.
The current patch extracts the table name from the message. It works fine for 
all the cases.

Thanks.





was (Author: rajesh23):
[~jdcryans]
When we receive the exception from the server, the full stack trace is appended 
to it.
The current patch extracts the table name from the message. It works fine for 
all the cases.

Thanks.




> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.99.0, 0.94.19, 0.98.2
>
> Attachments: HBASE-10533_trunk.patch, HBASE-10533_v2.patch
>
>
> 1) Cloning into an existing table name prints the snapshot name instead of the 
> table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason is that we print the first argument instead of the exception 
> message.
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is to 
> print the actual column families in the table, but instead the exception is 
> thrown.
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason is that the server does not throw NoSuchColumnFamilyException 
> directly; a RetriesExhaustedWithDetailsException is thrown instead.
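The wrapping behavior described above can be sketched in plain Ruby (the class names mirror the Java exceptions, but this is an illustrative model, not the shell's actual code): to report the column-family problem, the handler has to look through the wrapper's causes rather than match on the top-level exception class.

```ruby
# Hypothetical model of the exception wrapping: the client surfaces a
# RetriesExhaustedWithDetailsException whose causes hold the real error.
class NoSuchColumnFamilyException < StandardError; end

class RetriesExhaustedWithDetailsException < StandardError
  attr_reader :causes
  def initialize(causes)
    super("Failed #{causes.size} action(s)")
    @causes = causes
  end
end

def friendly_message(error)
  if error.is_a?(RetriesExhaustedWithDetailsException)
    # Look through the wrapped causes for the column-family error.
    cause = error.causes.find { |c| c.is_a?(NoSuchColumnFamilyException) }
    return "Unknown column family! #{cause.message}" if cause
  end
  error.message
end
```

Matching only on the top-level class (as the current code effectively does) never fires, because the server-side exception arrives wrapped.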



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-03-18 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10533:
---

Attachment: HBASE-10533_v2.patch

[~jdcryans]
When we receive the exception from the server, the full stack trace is 
appended to its message.
The current patch extracts the table name from that message; it works for all 
the cases I tested.

Thanks.
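The extraction described above can be sketched as follows (a hypothetical helper, not the committed code): pull the table name out of the TableExistsException message, which typically looks like "org.apache.hadoop.hbase.TableExistsException: t1", instead of echoing the caller's first argument.

```ruby
# Illustrative sketch: recover the table name from the exception message.
# A bare "t1" (no class prefix) also works, since split(':') then yields
# the whole string.
def table_name_from_message(message)
  message.split(':').last.strip
end
```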




> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.99.0, 0.94.19, 0.98.2
>
> Attachments: HBASE-10533_trunk.patch, HBASE-10533_v2.patch
>
>
> 1) Cloning a snapshot into an existing table name prints the snapshot name 
> instead of the table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason is that we print the first argument instead of the exception 
> message:
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is 
> to print the actual column families of the table, but instead the raw 
> exception is shown:
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason is that the server does not throw NoSuchColumnFamilyException 
> directly; a RetriesExhaustedWithDetailsException is thrown instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10748) hbase-daemon.sh fails to execute with 'sh' command

2014-03-17 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13938830#comment-13938830
 ] 

rajeshbabu commented on HBASE-10748:


[~ashish singhi]
Good find. I have checked the patch; it works fine.
+1
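The failure mode is easy to reproduce with a toy script (a hypothetical /tmp/selfcall.sh, not the real hbase-daemon.sh): a script that re-invokes itself by bare file name gets "command not found" unless its directory happens to be on PATH, whereas re-invoking through "$0" works regardless of how the script was launched.

```shell
# Toy reproduction of the "command not found" failure mode.
cat > /tmp/selfcall.sh <<'EOF'
#!/bin/sh
case "$1" in
  restart)
    # Fragile form that reproduces the reported error when the script's
    # directory is not on PATH:
    #   selfcall.sh stop && selfcall.sh start
    # Robust form:
    "$0" stop && "$0" start
    ;;
  stop|start)
    echo "$1"
    ;;
esac
EOF
chmod +x /tmp/selfcall.sh
sh /tmp/selfcall.sh restart   # prints "stop" then "start"
```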

> hbase-daemon.sh fails to execute with 'sh' command
> --
>
> Key: HBASE-10748
> URL: https://issues.apache.org/jira/browse/HBASE-10748
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.11
>Reporter: Ashish Singhi
> Attachments: HBASE-10748.patch, HBASE-10748.patch
>
>
> hostname:HBASE_HOME/bin # sh hbase-daemon.sh restart master
> *hbase-daemon.sh: line 188: hbase-daemon.sh: command not found*
> *hbase-daemon.sh: line 196: hbase-daemon.sh: command not found*



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10769) hbase/bin/hbase-cleanup.sh has wrong usage string

2014-03-16 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13937479#comment-13937479
 ] 

rajeshbabu commented on HBASE-10769:


+1

> hbase/bin/hbase-cleanup.sh has wrong usage string
> -
>
> Key: HBASE-10769
> URL: https://issues.apache.org/jira/browse/HBASE-10769
> Project: HBase
>  Issue Type: Improvement
>  Components: Usability
>Affects Versions: 0.96.1, 0.98.1
>Reporter: Vamsee Yarlagadda
>Priority: Trivial
> Fix For: 0.98.2
>
> Attachments: HBASE-10769-v0.patch
>
>
> Looks like hbase-cleanup.sh has a wrong usage string.
> https://github.com/apache/hbase/blob/trunk/bin/hbase-cleanup.sh#L34
> Current Usage string:
> {code}
> [systest@search-testing-c5-ncm-1 ~]$ echo 
> `/usr/lib/hbase/bin/hbase-cleanup.sh`
> Usage: hbase-cleanup.sh (zk|hdfs|all)
> {code}
> But digging into the logic of hbase-cleanup.sh, it should ideally be 
> modified to
> {code}
> [systest@search-testing-c5-ncm-1 ~]$ echo 
> `/usr/lib/hbase/bin/hbase-cleanup.sh`
> Usage: hbase-cleanup.sh (--cleanZk|--cleanHdfs|--cleanAll)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-03-15 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13936224#comment-13936224
 ] 

rajeshbabu commented on HBASE-10533:


bq.  Seems good to commit.
I feel the same, [~lhofhansl]. The fix works properly with JRuby 1.6.8, which 
we ship in our releases.

With other JRuby versions the first part of the fix may not work (HBASE-9211). 
The HBase shell does not even start with JRuby 1.7.0:
{code}
2014-02-18 12:43:02,544 INFO  [main] Configuration.deprecation: 
hadoop.native.lib is deprecated. Instead, use io.native.lib.available
NoMethodError: undefined method `getTerminal' for Java::Jline::Terminal:Module
  refresh_width at 
/home/rajeshbabu/98/hbase-0.98.0/bin/../lib/ruby/shell/formatter.rb:33
 initialize at 
/home/rajeshbabu/98/hbase-0.98.0/bin/../lib/ruby/shell/formatter.rb:46
 (root) at /home/rajeshbabu/98/hbase-0.98.0/bin/../bin/hirb.rb:110
{code}

ping [~saint@gmail.com], [~jdcryans], [~matteobanerjee]
Is it OK to commit, or do we need to handle the issues with other JRuby 
versions?
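If we do want to tolerate other JRuby versions, a defensive pattern like the following sketch could work (plain Ruby with hypothetical names; the real formatter.rb calls into JLine): probe for the width API and fall back to a fixed default when the JLine version does not provide it.

```ruby
# Defensive terminal-width detection: JLine class/method names vary across
# versions, so probe instead of calling a fixed API like getTerminal.
DEFAULT_WIDTH = 100

def detect_terminal_width(terminal = nil)
  return DEFAULT_WIDTH if terminal.nil?
  if terminal.respond_to?(:getTerminalWidth)
    terminal.getTerminalWidth
  elsif terminal.respond_to?(:width)
    terminal.width
  else
    DEFAULT_WIDTH
  end
rescue StandardError
  # Any probing failure degrades to the default rather than aborting startup.
  DEFAULT_WIDTH
end
```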

> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.99.0, 0.94.19, 0.98.2
>
> Attachments: HBASE-10533_trunk.patch
>
>
> 1) Cloning a snapshot into an existing table name prints the snapshot name 
> instead of the table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason is that we print the first argument instead of the exception 
> message:
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is 
> to print the actual column families of the table, but instead the raw 
> exception is shown:
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason is that the server does not throw NoSuchColumnFamilyException 
> directly; a RetriesExhaustedWithDetailsException is thrown instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-10764) TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too long time(around 10 min.)

2014-03-15 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13936188#comment-13936188
 ] 

rajeshbabu edited comment on HBASE-10764 at 3/15/14 2:16 PM:
-

Here is the patch for trunk. In the test, I reduced the retry count to 2 so it 
fails fast on bulk-load failure.
It now takes only a few seconds.


was (Author: rajesh23):
Here is the patch for trunk. Just reduced retry count to 2 on bulkload failure.
Its taking few seconds only.

> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too 
> long time(around 10 min.)
> 
>
> Key: HBASE-10764
> URL: https://issues.apache.org/jira/browse/HBASE-10764
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.99.0, 0.98.2
>
> Attachments: HBASE-10764.patch
>
>
> While running 
> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure, I found it 
> takes a long time. The builds show the same; it takes too long to complete.
> {code}
> testBulkLoadPhaseFailure    9 min 14 sec    Passed
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10764) TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too long time(around 10 min.)

2014-03-15 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13936198#comment-13936198
 ] 

rajeshbabu commented on HBASE-10764:


{code}
fail("doBulkLoad should have thrown an exception");}
{code}
The code formatting is off here (misplaced brace); I will fix it on commit.

> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too 
> long time(around 10 min.)
> 
>
> Key: HBASE-10764
> URL: https://issues.apache.org/jira/browse/HBASE-10764
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.99.0, 0.98.2
>
> Attachments: HBASE-10764.patch
>
>
> While running 
> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure, I found it 
> takes a long time. The builds show the same; it takes too long to complete.
> {code}
> testBulkLoadPhaseFailure    9 min 14 sec    Passed
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10764) TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too long time(around 10 min.)

2014-03-15 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10764:
---

Status: Patch Available  (was: Open)

> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too 
> long time(around 10 min.)
> 
>
> Key: HBASE-10764
> URL: https://issues.apache.org/jira/browse/HBASE-10764
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.99.0, 0.98.2
>
> Attachments: HBASE-10764.patch
>
>
> While running 
> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure, I found it 
> takes a long time. The builds show the same; it takes too long to complete.
> {code}
> testBulkLoadPhaseFailure    9 min 14 sec    Passed
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10764) TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too long time(around 10 min.)

2014-03-15 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10764:
---

Attachment: HBASE-10764.patch

Here is the patch for trunk. I just reduced the retry count to 2 on bulk-load 
failure. It now takes only a few seconds.
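As a toy illustration of why the retry count dominates the runtime here (the backoff schedule below is made up for the sketch, not HBase's actual one): a permanently failing bulk load must sit through the whole backoff schedule before giving up, so capping retries at 2 cuts minutes down to seconds.

```ruby
# Illustrative backoff schedule (seconds) for a client that retries a
# permanently failing operation.
BACKOFF_SECONDS = [1, 2, 3, 5, 10, 20, 40, 80, 160, 320]

# Total wall-clock time spent before the client gives up after `retries`
# attempts.
def time_to_fail(retries)
  BACKOFF_SECONDS.first(retries).sum
end
```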

> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too 
> long time(around 10 min.)
> 
>
> Key: HBASE-10764
> URL: https://issues.apache.org/jira/browse/HBASE-10764
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.99.0, 0.98.2
>
> Attachments: HBASE-10764.patch
>
>
> While running 
> TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure, I found it 
> takes a long time. The builds show the same; it takes too long to complete.
> {code}
> testBulkLoadPhaseFailure    9 min 14 sec    Passed
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10763) Backport HBASE-10549(When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.) to 0.98

2014-03-15 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10763:
---

Attachment: HBASE-10549_98.patch

Here is the patch for 0.98.
I ran TestLoadIncrementalHFilesSplitRecovery, 
TestSecureLoadIncrementalHFilesSplitRecovery, and some related test cases; 
they pass.

> Backport HBASE-10549(When there is a hole, LoadIncrementalHFiles will hang in 
> an infinite loop.) to 0.98
> 
>
> Key: HBASE-10763
> URL: https://issues.apache.org/jira/browse/HBASE-10763
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.2
>
> Attachments: HBASE-10549_98.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10763) Backport HBASE-10549(When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.) to 0.98

2014-03-15 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10763:
---

Status: Patch Available  (was: Open)

> Backport HBASE-10549(When there is a hole, LoadIncrementalHFiles will hang in 
> an infinite loop.) to 0.98
> 
>
> Key: HBASE-10763
> URL: https://issues.apache.org/jira/browse/HBASE-10763
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.2
>
> Attachments: HBASE-10549_98.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10764) TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too long time(around 10 min.)

2014-03-15 Thread rajeshbabu (JIRA)
rajeshbabu created HBASE-10764:
--

 Summary: 
TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure taking too long 
time(around 10 min.)
 Key: HBASE-10764
 URL: https://issues.apache.org/jira/browse/HBASE-10764
 Project: HBase
  Issue Type: Test
  Components: test
Reporter: rajeshbabu
Assignee: rajeshbabu
Priority: Minor
 Fix For: 0.96.2, 0.99.0, 0.98.2


While running 
TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure, I found it 
takes a long time. The builds show the same; it takes too long to complete.
{code}
testBulkLoadPhaseFailure    9 min 14 sec    Passed
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10763) Backport HBASE-10549(When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.) to 0.98

2014-03-15 Thread rajeshbabu (JIRA)
rajeshbabu created HBASE-10763:
--

 Summary: Backport HBASE-10549(When there is a hole, 
LoadIncrementalHFiles will hang in an infinite loop.) to 0.98
 Key: HBASE-10763
 URL: https://issues.apache.org/jira/browse/HBASE-10763
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.98.0
Reporter: rajeshbabu
Assignee: rajeshbabu
 Fix For: 0.98.2






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-15 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10549:
---

   Resolution: Fixed
Fix Version/s: (was: 0.98.2)
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)


HTable construction no longer checks whether the table exists, because of 
HBASE-10080 (committed to trunk and 0.98), so the earlier test case failures 
cannot be reproduced with the patch. It is not a problem.

The 0.96 build is successful with the test case, hence resolving.
I will raise a separate JIRA to backport this to 0.98.
I think the zombies in 0.98 are not related to this patch.
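The guard proposed in this issue's description (validate the startkey/endkey before splitting an HFile) can be sketched in plain Ruby; the method name, message, and exact boundary condition are illustrative, not the committed Java fix in LoadIncrementalHFiles:

```ruby
# Before splitting an HFile at a region boundary, verify the boundary
# actually falls inside the file's key range; otherwise the same file keeps
# being re-split into _tmp folders forever (the infinite loop reported above).
def check_split_point!(first_key, last_key, split_key)
  unless first_key < split_key && split_key <= last_key
    raise "Split key #{split_key.inspect} outside HFile range " \
          "[#{first_key.inspect}, #{last_key.inspect}]; region hole suspected"
  end
end
```

With a hole in .META., the computed split key falls outside the file's range, so this check fails fast instead of looping.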





> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: yuanxinen
> Fix For: 0.96.2, 0.99.0, 0.94.18
>
> Attachments: HBASE-10549-0.94, HBASE-10549-addendum.patch, 
> HBASE-10549-trunk-2014-03-13.patch, HBASE-10549-trunk.patch
>
>
> First, I will explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> I checked the log; there are two issues.
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into two 
> files (top and bottom):
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Then I checked the code.
> So I think that before splitting the hfile we should check the consistency of 
> the startkey and endkey; if something is wrong, we should throw an exception 
> and stop LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-15 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10549:
---

Assignee: yuanxinen  (was: rajeshbabu)

> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: yuanxinen
> Fix For: 0.96.2, 0.99.0, 0.94.18, 0.98.2
>
> Attachments: HBASE-10549-0.94, HBASE-10549-addendum.patch, 
> HBASE-10549-trunk-2014-03-13.patch, HBASE-10549-trunk.patch
>
>
> First, I will explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> I checked the log; there are two issues.
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into two 
> files (top and bottom):
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Then I checked the code.
> So I think that before splitting the hfile we should check the consistency of 
> the startkey and endkey; if something is wrong, we should throw an exception 
> and stop LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10762) clone_snapshot doesn't check for missing namespace

2014-03-15 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13936131#comment-13936131
 ] 

rajeshbabu commented on HBASE-10762:


Nice catch [~mbertozzi]. +1

> clone_snapshot doesn't check for missing namespace
> --
>
> Key: HBASE-10762
> URL: https://issues.apache.org/jira/browse/HBASE-10762
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 0.96.2, 0.99.0, 0.98.2
>
> Attachments: HBASE-10762-v0.patch
>
>
> The NS check is present in HMaster.createTable() but not in restoreSnapshot()



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10549:
---

Hadoop Flags:   (was: Reviewed)
  Status: Patch Available  (was: Reopened)

> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.96.1.1, 0.94.11, 0.98.0
>Reporter: yuanxinen
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.99.0, 0.94.18, 0.98.2
>
> Attachments: HBASE-10549-0.94, HBASE-10549-addendum.patch, 
> HBASE-10549-trunk-2014-03-13.patch, HBASE-10549-trunk.patch
>
>
> First, I will explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> I checked the log; there are two issues.
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into two 
> files (top and bottom):
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Then I checked the code.
> So I think that before splitting the hfile we should check the consistency of 
> the startkey and endkey; if something is wrong, we should throw an exception 
> and stop LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10549:
---

Attachment: HBASE-10549-addendum.patch

The main reason for the failure is that HTable is constructed before the table 
is created. I will check why the trunk code accepts this.

> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.99.0, 0.94.18, 0.98.2
>
> Attachments: HBASE-10549-0.94, HBASE-10549-addendum.patch, 
> HBASE-10549-trunk-2014-03-13.patch, HBASE-10549-trunk.patch
>
>
> First, I will explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> I checked the log; there are two issues.
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into two 
> files (top and bottom):
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Then I checked the code.
> So I think that before splitting the hfile we should check the consistency of 
> the startkey and endkey; if something is wrong, we should throw an exception 
> and stop LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-14 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935942#comment-13935942
 ] 

rajeshbabu commented on HBASE-10549:


[~saint@gmail.com]
I am checking the reason for the test failure.

> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.99.0, 0.94.18, 0.98.2
>
> Attachments: HBASE-10549-0.94, HBASE-10549-trunk-2014-03-13.patch, 
> HBASE-10549-trunk.patch
>
>
> First, I will explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> I checked the log; there are two issues.
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into two 
> files (top and bottom):
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Then I checked the code.
> So I think that before splitting the hfile we should check the consistency of 
> the startkey and endkey; if something is wrong, we should throw an exception 
> and stop LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-14 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10549:
---

   Resolution: Fixed
Fix Version/s: (was: 0.98.2)
   (was: 0.94.19)
   0.94.18
   0.98.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed the patch to all branches.
Thanks for the patch, [~Auraro].
Thank you all for the reviews.


> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: yuanxinen
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: HBASE-10549-0.94, HBASE-10549-trunk-2014-03-13.patch, 
> HBASE-10549-trunk.patch
>
>
> First, I will explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> I checked the log; there are two issues.
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into two 
> files (top and bottom):
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input 
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Then I checked the code.
> So I think that before splitting the hfile we should check the consistency of 
> the startkey and endkey; if something is wrong, we should throw an exception 
> and stop LoadIncrementalHFiles.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7847) Use zookeeper multi to clear znodes

2014-03-13 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13934639#comment-13934639
 ] 

rajeshbabu commented on HBASE-7847:
---

[~rakeshr]
The test case failures are also related to this patch. Please handle them in 
the next patch.

> Use zookeeper multi to clear znodes
> ---
>
> Key: HBASE-7847
> URL: https://issues.apache.org/jira/browse/HBASE-7847
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
> Attachments: 7847-v1.txt, HBASE-7847.patch, HBASE-7847.patch, 
> HBASE-7847.patch
>
>
> In ZKProcedureUtil, clearChildZNodes() and clearZNodes(String procedureName) 
> should utilize zookeeper multi so that they're atomic





[jira] [Commented] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-13 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1391#comment-1391
 ] 

rajeshbabu commented on HBASE-10549:


I will commit it tomorrow if there are no objections.

> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: yuanxinen
> Fix For: 0.96.2, 0.99.0, 0.94.18, 0.98.2
>
> Attachments: HBASE-10549-trunk-2014-03-13.patch, 
> HBASE-10549-trunk.patch
>
>
> First, let me explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> Checking the logs, there are two issues:
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into both
> files (top and bottom):
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> I then checked the code. I think that before splitting the hfile we should
> check the consistency of the start key and end key; if something is wrong, we
> should throw an exception and stop LoadIncrementalHFiles.





[jira] [Commented] (HBASE-7847) Use zookeeper multi to clear znodes

2014-03-13 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933328#comment-13933328
 ] 

rajeshbabu commented on HBASE-7847:
---

Latest patch lgtm. Nice tests [~rakeshr].

> Use zookeeper multi to clear znodes
> ---
>
> Key: HBASE-7847
> URL: https://issues.apache.org/jira/browse/HBASE-7847
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
> Attachments: 7847-v1.txt, HBASE-7847.patch, HBASE-7847.patch, 
> HBASE-7847.patch
>
>
> In ZKProcedureUtil, clearChildZNodes() and clearZNodes(String procedureName) 
> should utilize zookeeper multi so that they're atomic





[jira] [Commented] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-13 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933166#comment-13933166
 ] 

rajeshbabu commented on HBASE-10549:


+1

The javadoc warnings are not related to this issue (raised HBASE-10736 to fix
them).
[~Auraro] Please make a patch for 0.94 as well.
Thanks for working on this.



> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: yuanxinen
> Fix For: 0.96.2, 0.99.0, 0.94.18, 0.98.2
>
> Attachments: HBASE-10549-trunk-2014-03-13.patch, 
> HBASE-10549-trunk.patch
>
>
> First, let me explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> Checking the logs, there are two issues:
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into both
> files (top and bottom):
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> I then checked the code. I think that before splitting the hfile we should
> check the consistency of the start key and end key; if something is wrong, we
> should throw an exception and stop LoadIncrementalHFiles.





[jira] [Created] (HBASE-10736) Fix Javadoc warnings introduced in HBASE-10169

2014-03-13 Thread rajeshbabu (JIRA)
rajeshbabu created HBASE-10736:
--

 Summary: Fix Javadoc warnings introduced in HBASE-10169
 Key: HBASE-10736
 URL: https://issues.apache.org/jira/browse/HBASE-10736
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: rajeshbabu
 Fix For: 0.98.1, 0.99.0


Fix the two javadoc warnings below, introduced as part of HBASE-10169:
{code}

2 warnings

[WARNING] Javadoc Warnings

[WARNING] 
D:\D\hbaseCommunity\hbaseTrunkTest\hbase-client\src\main\java\org\apache\hadoop\hbase\client\RegionCoprocessorServiceExec.java:43:
 warning - Tag @link: can't find batchCoprocessorService(MethodDescriptor, 
Message, byte[], byte[],

[WARNING] Message, Batch.Callback) in org.apache.hadoop.hbase.client.HTable

[WARNING] 
D:\D\hbaseCommunity\hbaseTrunkTest\hbase-client\src\main\java\org\apache\hadoop\hbase\client\RegionCoprocessorServiceExec.java:43:
 warning - Tag @link: can't find batchCoprocessorService(MethodDescriptor, 
Message, byte[], byte[], Message) in org.apache.hadoop.hbase.client.HTable
{code}





[jira] [Updated] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-13 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10549:
---

Fix Version/s: 0.98.2
   0.94.18
   0.99.0
   0.96.2

> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.94.11, 0.96.1.1
>Reporter: yuanxinen
>Assignee: yuanxinen
> Fix For: 0.96.2, 0.99.0, 0.94.18, 0.98.2
>
> Attachments: HBASE-10549-trunk-2014-03-13.patch, 
> HBASE-10549-trunk.patch
>
>
> First, let me explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> Checking the logs, there are two issues:
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into both
> files (top and bottom):
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> I then checked the code. I think that before splitting the hfile we should
> check the consistency of the start key and end key; if something is wrong, we
> should throw an exception and stop LoadIncrementalHFiles.





[jira] [Updated] (HBASE-8076) add better doc for HBaseAdmin#offline API.

2014-03-08 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-8076:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.96, 0.98, and trunk.
Thanks for the review, Stack.

> add better doc for HBaseAdmin#offline API.
> --
>
> Key: HBASE-8076
> URL: https://issues.apache.org/jira/browse/HBASE-8076
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-8076.patch
>
>






[jira] [Resolved] (HBASE-9636) HBase shell/client 'scan table' operation is getting failed inbetween the when the regions are shifted from one Region Server to another Region Server

2014-03-06 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu resolved HBASE-9636.
---

Resolution: Not A Problem

[~shankarlingayya]
 This is expected behavior.
 As per the logs you shared, the region server holding the row17-row18 range
went down at 18:20:58:
 {code}
 Fri Sep 20 18:20:58 IST 2013, 
org.apache.hadoop.hbase.client.ScannerCallable@1999dc4f, 
java.net.ConnectException: Connection refused
 Fri Sep 20 18:20:59 IST 2013, 
org.apache.hadoop.hbase.client.ScannerCallable@1999dc4f, 
org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is 
in the failed servers list: HOST-10-18-40-172/10.18.40.172:61020
 {code}
 
 After that, the regions within the range took more time to assign, because
HOST-10-18-40-172 held many regions and they needed to be assigned one by one
after the shutdown.
 We can observe this from the logs: the META table still holds the old region
server address, and on each exception the client clears its cache and rereads
META. But META also pointed to HOST-10-18-40-172, so the scan failed after 7
retries.
 
 {code}
2013-09-20 18:21:33,539 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open 
region: t1,row170593,1379679042365.1ad0997453c665bb9707907be08980fa.
 
2013-09-20 18:21:33,551 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
regionserver:61020-0x1413b3594140079-0x1413b3594140079-0x1413b3594140079-0x1413b3594140079
 Attempting to transition node 1ad0997453c665bb9707907be08980fa from 
M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
 
2013-09-20 18:21:33,557 DEBUG org.apache.hadoop.hbase.regionserver.Store: 
loaded 
hdfs://10.18.40.153:8020/hbase/t1/c18a2bbd6ef4b53f480b53207a68c44e/cf1/04b0a1c45c9f498ebbfd4f8909e693a4,
 isReference=false, isBulkL
 {code}
 There are some configurations that can be tuned to avoid this kind of issue:
 1) increase the retry count (hbase.client.retries.number - default 7 from the
shell and 10 from the client)
 2) increase the pause time between retries (hbase.client.pause - default 1 sec)
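To see why those two settings interact, here is a rough sketch of the client's total retry budget. The multiplier table is an assumption modeled on HConstants.RETRY_BACKOFF and may differ between HBase versions; treat this as an estimate, not the client's exact behavior.

```java
public class RetryBudget {
    // Assumed backoff multipliers, modeled on HConstants.RETRY_BACKOFF;
    // the real table may differ between HBase versions.
    static final int[] BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

    /** Approximate total milliseconds spent sleeping across all retries. */
    static long totalPauseMs(long pauseMs, int retries) {
        long total = 0;
        for (int i = 0; i < retries; i++) {
            // Later retries reuse the last multiplier once the table runs out.
            int mult = BACKOFF[Math.min(i, BACKOFF.length - 1)];
            total += pauseMs * mult;
        }
        return total;
    }

    public static void main(String[] args) {
        // Defaults discussed above: hbase.client.pause = 1000 ms and 7 retries
        // from the shell give 1+2+3+5+10+20+40 = 81 seconds of sleep in total,
        // which a slow bulk reassignment after a regionserver crash can exceed.
        System.out.println(totalPauseMs(1000, 7) / 1000 + " s"); // prints "81 s"
    }
}
```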

>  HBase shell/client 'scan table' operation is getting failed inbetween the 
> when the regions are shifted from one Region Server to another Region Server 
> 
>
> Key: HBASE-9636
> URL: https://issues.apache.org/jira/browse/HBASE-9636
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.94.11
> Environment: SuSE11
>Reporter: shankarlingayya
>Assignee: rajeshbabu
>
> {noformat}
> Problem:
> HBase shell/client 'scan table' operation is getting failed inbetween the 
> when the regions are shifted from one Region Server to another Region Server
> When the table regions data moved from one Region Server to another Region 
> Server then the client/shell should be able to handle the data from the 
> new Region server automatically (because when we have huge data in terms of 
> GB/TB at that time one of the Region Server going down in the cluster is 
> frequent)
> Procedure:
> 1. Setup Non HA Hadoop Cluster with two nodes (Node1-XX.XX.XX.XX,  
> Node2-YY.YY.YY.YY)
> 2. Install Zookeeper, HMaster & HRegionServer in Node-1
> 3. Install HRegionServer in Node-2
> 4. From Node2 create HBase Table ( table name 't1' with one column family 
> 'cf1' )
> 5. add around 367120 rows to the table
> 6. scan the table 't1' using hbase shell & at the same time switch the region 
> server 1 & 2 (so that the table 't1' regions data are moved from Region 
> Server 1 to 1 & vice versa)
> 7. During this time hbase shell is getting failed in between of the scan 
> operation as below
> ...   
>  
>  row172266column=cf1:a, timestamp=1379680737307, 
> value=100  
>  row172267column=cf1:a, timestamp=1379680737311, 
> value=100  
>  row172268column=cf1:a, timestamp=1379680737314, 
> value=100  
>  row172269column=cf1:a, timestamp=1379680737317, 
> value=100  
>  row17227 column=cf1:a, timestamp=1379679668631, 
> value=100  
>  row17227 column=cf1:b, timestamp=1379681090560, 
> value=200 
> ERROR: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=7, exceptions:
> Fri Sep 20 18:20:58 IST 2013, 
> org.apache.hadoop.hbase.cli

[jira] [Updated] (HBASE-8076) add better doc for HBaseAdmin#offline API.

2014-03-06 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-8076:
--

Fix Version/s: 0.99.0
   0.98.1
   0.96.2
   Status: Patch Available  (was: Open)

> add better doc for HBaseAdmin#offline API.
> --
>
> Key: HBASE-8076
> URL: https://issues.apache.org/jira/browse/HBASE-8076
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Reporter: rajeshbabu
>Assignee: rajeshbabu
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-8076.patch
>
>






[jira] [Commented] (HBASE-10549) When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.

2014-03-06 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922170#comment-13922170
 ] 

rajeshbabu commented on HBASE-10549:


{code}
String testName = "groupOrSplitWhenHoleExistInMeta";
{code}
Better to rename this to testGroupOrSplitWhenRegionHoleExistsInMeta.

{code}
this.deleteMetaInfo(testName, Bytes.toBytes("20"));
{code}
Instead of reimplementing meta-entry deletion, you can use
MetaEditor#deleteRegions.

{code}
util.waitTableEnabled(TABLE);
{code}
HBaseAdmin#createTable returns only after the table is enabled, so this check
is not required.







> When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
> --
>
> Key: HBASE-10549
> URL: https://issues.apache.org/jira/browse/HBASE-10549
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.11
>Reporter: yuanxinen
>Assignee: yuanxinen
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: HBASE-10549-trunk.patch
>
>
> First, let me explain my test steps:
> 1. importtsv
> 2. split the region
> 3. delete the region info from .META. (make a hole)
> 4. LoadIncrementalHFiles (this step hangs in an infinite loop)
> Checking the logs, there are two issues:
> 1. It creates the _tmp folder in an infinite loop:
> hdfs://hacluster/output3/i/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/test_table,136.bottom
> 2. When splitting the hfile, it puts the first line of data (1211) into both
> files (top and bottom):
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.top,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> Input
> File=hdfs://hacluster/output3/i/3ac6ec287c644a8fb72d96b13e31f576,outFile=hdfs://hacluster/output3/i/_tmp/test_table,2.bottom,KeyValue=1211/i:value/1390469306407/Put/vlen=1/ts=0
> I then checked the code. I think that before splitting the hfile we should
> check the consistency of the start key and end key; if something is wrong, we
> should throw an exception and stop LoadIncrementalHFiles.





[jira] [Commented] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-05 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921166#comment-13921166
 ] 

rajeshbabu commented on HBASE-10669:


I am not able to assign this issue to [~deepakhuawei].
Can anyone assign it to him?
Thanks.

> [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
> --
>
> Key: HBASE-10669
> URL: https://issues.apache.org/jira/browse/HBASE-10669
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.11, 0.96.0
>Reporter: Deepak Sharma
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: HBASE_10669_94.11.patch, Hbck_usage_issue.patch
>
>
> The usage text for the hbck tool's -sidelineCorruptHfiles option is wrong. It
> reads:
> -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
> -checkCorruptHfiles
> Here "sidelineCorruptHfiles" and "checkCorruptHfiles" are printed with a small
> 'f', but in the code it is actually:
>   else if (cmd.equals("-checkCorruptHFiles")) {
> checkCorruptHFiles = true;
>   } else if (cmd.equals("-sidelineCorruptHFiles")) {
> sidelineCorruptHFiles = true;
>   }
> So if we use the -sidelineCorruptHfiles option as printed, hbck gives the
> error:
> Unrecognized option:-sidelineCorruptHfiles
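A minimal reproduction of the mismatch described above: hbck's option matching is case-sensitive, so the lowercase-'f' spelling printed by the usage text is never recognized. The parser below is an illustrative sketch, not hbck's actual code.

```java
public class FlagCase {
    /** Case-sensitive matching, as hbck's command-line loop does. */
    static boolean recognized(String cmd) {
        return cmd.equals("-checkCorruptHFiles") || cmd.equals("-sidelineCorruptHFiles");
    }

    public static void main(String[] args) {
        // The flag spelled as the usage text prints it (small 'f') is rejected:
        System.out.println(recognized("-sidelineCorruptHfiles")); // false
        // Only the capital-F spelling used in the code is accepted:
        System.out.println(recognized("-sidelineCorruptHFiles")); // true
    }
}
```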





[jira] [Updated] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-05 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10669:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to all versions.
Thanks for the patch, [~deepakhuawei]. Nice catch.
From next time onwards, please create the patch from the HBase root directory.



> [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
> --
>
> Key: HBASE-10669
> URL: https://issues.apache.org/jira/browse/HBASE-10669
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.11, 0.96.0
>Reporter: Deepak Sharma
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: HBASE_10669_94.11.patch, Hbck_usage_issue.patch
>
>
> The usage text for the hbck tool's -sidelineCorruptHfiles option is wrong. It
> reads:
> -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
> -checkCorruptHfiles
> Here "sidelineCorruptHfiles" and "checkCorruptHfiles" are printed with a small
> 'f', but in the code it is actually:
>   else if (cmd.equals("-checkCorruptHFiles")) {
> checkCorruptHFiles = true;
>   } else if (cmd.equals("-sidelineCorruptHFiles")) {
> sidelineCorruptHFiles = true;
>   }
> So if we use the -sidelineCorruptHfiles option as printed, hbck gives the
> error:
> Unrecognized option:-sidelineCorruptHfiles





[jira] [Updated] (HBASE-10677) boundaries check in hbck throwing IllegalArgumentException

2014-03-05 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10677:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.96, 0.98, and trunk.
Thanks for the review, [~jmhsieh] and [~jmspaggi].

> boundaries check in hbck throwing IllegalArgumentException
> --
>
> Key: HBASE-10677
> URL: https://issues.apache.org/jira/browse/HBASE-10677
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10677.patch, HBASE-10677_v1.patch, 
> HBASE-10677_v2.patch
>
>
> This is the exception 
> {code}
> java.lang.IllegalArgumentException: Pathname 
> /hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e from 
> hdfs://10.18.40.29:54310/hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e 
> is not a valid DFS filename.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:637)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries(HBaseFsck.java:537)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:487)
> at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4028)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3837)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3825)
> {code}
> We are pointing to the wrong table directory. This is the code causing the
> problem:
> {code}
> // For each region, get the start and stop key from the META and 
> compare them to the
> // same information from the Stores.
> Path path = new Path(getConf().get(HConstants.HBASE_DIR) + "/"
> + Bytes.toString(regionInfo.getTable().getName()) + "/"
> + regionInfo.getEncodedName() + "/");
> {code}
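One plausible direction for the fix, sketched below: derive the table directory from the namespace and qualifier instead of concatenating the raw table name, since "hbase:labels" contains a ':' that is illegal in a DFS path component. The helper and the /data/&lt;namespace&gt;/&lt;qualifier&gt; layout are assumptions modeled on the 0.96+ directory structure, not the actual patch.

```java
public class TableDirSketch {
    /**
     * Splits a possibly namespaced table name ("hbase:labels") into namespace
     * and qualifier and builds the directory layout used since HBase 0.96:
     * <root>/data/<namespace>/<qualifier>. Tables without a namespace fall
     * into the "default" namespace.
     */
    static String tableDir(String rootDir, String tableName) {
        int idx = tableName.indexOf(':');
        String ns = idx < 0 ? "default" : tableName.substring(0, idx);
        String qualifier = idx < 0 ? tableName : tableName.substring(idx + 1);
        return rootDir + "/data/" + ns + "/" + qualifier;
    }

    public static void main(String[] args) {
        // Naive concatenation keeps the illegal ':' in the path component:
        System.out.println("/hbase/" + "hbase:labels"); // not a valid DFS filename
        // Namespace-aware construction does not:
        System.out.println(tableDir("/hbase", "hbase:labels")); // /hbase/data/hbase/labels
    }
}
```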





[jira] [Updated] (HBASE-10677) boundaries check in hbck throwing IllegalArgumentException

2014-03-05 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10677:
---

Attachment: HBASE-10677_v2.patch

Added test case.

> boundaries check in hbck throwing IllegalArgumentException
> --
>
> Key: HBASE-10677
> URL: https://issues.apache.org/jira/browse/HBASE-10677
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10677.patch, HBASE-10677_v1.patch, 
> HBASE-10677_v2.patch
>
>
> This is the exception 
> {code}
> java.lang.IllegalArgumentException: Pathname 
> /hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e from 
> hdfs://10.18.40.29:54310/hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e 
> is not a valid DFS filename.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:637)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries(HBaseFsck.java:537)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:487)
> at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4028)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3837)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3825)
> {code}
> We are pointing to the wrong table directory. This is the code causing the
> problem:
> {code}
> // For each region, get the start and stop key from the META and 
> compare them to the
> // same information from the Stores.
> Path path = new Path(getConf().get(HConstants.HBASE_DIR) + "/"
> + Bytes.toString(regionInfo.getTable().getName()) + "/"
> + regionInfo.getEncodedName() + "/");
> {code}





[jira] [Updated] (HBASE-10677) boundaries check in hbck throwing IllegalArgumentException

2014-03-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10677:
---

Attachment: HBASE-10677_v1.patch

I attached the wrong patch earlier; this is the correct one.

> boundaries check in hbck throwing IllegalArgumentException
> --
>
> Key: HBASE-10677
> URL: https://issues.apache.org/jira/browse/HBASE-10677
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10677.patch, HBASE-10677_v1.patch
>
>
> This is the exception 
> {code}
> java.lang.IllegalArgumentException: Pathname 
> /hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e from 
> hdfs://10.18.40.29:54310/hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e 
> is not a valid DFS filename.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:637)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries(HBaseFsck.java:537)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:487)
> at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4028)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3837)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3825)
> {code}
> We are pointing to the wrong table directory. This is the code causing the
> problem:
> {code}
> // For each region, get the start and stop key from the META and 
> compare them to the
> // same information from the Stores.
> Path path = new Path(getConf().get(HConstants.HBASE_DIR) + "/"
> + Bytes.toString(regionInfo.getTable().getName()) + "/"
> + regionInfo.getEncodedName() + "/");
> {code}





[jira] [Updated] (HBASE-10677) boundaries check in hbck throwing IllegalArgumentException

2014-03-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10677:
---

Attachment: HBASE-10677.patch

Patch for trunk.

> boundaries check in hbck throwing IllegalArgumentException
> --
>
> Key: HBASE-10677
> URL: https://issues.apache.org/jira/browse/HBASE-10677
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10677.patch
>
>
> This is the exception 
> {code}
> java.lang.IllegalArgumentException: Pathname 
> /hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e from 
> hdfs://10.18.40.29:54310/hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e 
> is not a valid DFS filename.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:637)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries(HBaseFsck.java:537)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:487)
> at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4028)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3837)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3825)
> {code}
> We are pointing to the wrong table directory. This is the code causing the
> problem:
> {code}
> // For each region, get the start and stop key from the META and 
> compare them to the
> // same information from the Stores.
> Path path = new Path(getConf().get(HConstants.HBASE_DIR) + "/"
> + Bytes.toString(regionInfo.getTable().getName()) + "/"
> + regionInfo.getEncodedName() + "/");
> {code}





[jira] [Updated] (HBASE-10677) boundaries check in hbck throwing IllegalArgumentException

2014-03-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10677:
---

Status: Patch Available  (was: Open)

> boundaries check in hbck throwing IllegalArgumentException
> --
>
> Key: HBASE-10677
> URL: https://issues.apache.org/jira/browse/HBASE-10677
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10677.patch
>
>
> This is the exception:
> {code}
> java.lang.IllegalArgumentException: Pathname /hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e from hdfs://10.18.40.29:54310/hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e is not a valid DFS filename.
> at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:637)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries(HBaseFsck.java:537)
> at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:487)
> at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4028)
> at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3837)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3825)
> {code}
> We are pointing to the wrong table directory. This is the code causing the
> problem:
> {code}
> // For each region, get the start and stop key from the META and
> // compare them to the same information from the Stores.
> Path path = new Path(getConf().get(HConstants.HBASE_DIR) + "/"
>     + Bytes.toString(regionInfo.getTable().getName()) + "/"
>     + regionInfo.getEncodedName() + "/");
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10677) boundaries check in hbck throwing IllegalArgumentException

2014-03-04 Thread rajeshbabu (JIRA)
rajeshbabu created HBASE-10677:
--

 Summary: boundaries check in hbck throwing IllegalArgumentException
 Key: HBASE-10677
 URL: https://issues.apache.org/jira/browse/HBASE-10677
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.98.0
Reporter: rajeshbabu
Assignee: rajeshbabu
 Fix For: 0.96.2, 0.98.1, 0.99.0


This is the exception:
{code}
java.lang.IllegalArgumentException: Pathname /hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e from hdfs://10.18.40.29:54310/hbase/hbase:labels/438f84dea5d0e24e390de927deb8a84e is not a valid DFS filename.
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:637)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
at org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries(HBaseFsck.java:537)
at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:487)
at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4028)
at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3837)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3825)
{code}

We are pointing to the wrong table directory. This is the code causing the
problem:
{code}
// For each region, get the start and stop key from the META and
// compare them to the same information from the Stores.
Path path = new Path(getConf().get(HConstants.HBASE_DIR) + "/"
    + Bytes.toString(regionInfo.getTable().getName()) + "/"
    + regionInfo.getEncodedName() + "/");
{code}
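For illustration, the failure can be reproduced outside HBase: the fully-qualified table name ("hbase:labels") is spliced directly into the path, so the namespace separator ':' lands in a filesystem name that DFS rejects. A minimal standalone sketch of the difference between that and a namespace-aware construction; the helper names and the assumed <root>/data/<namespace>/<table> layout here are illustrative, not the actual HBase fix:

```java
public class TableDirSketch {
    // Buggy construction from the snippet above: the full table name,
    // including the namespace colon, lands directly in the path.
    static String buggyPath(String hbaseDir, String tableName, String encodedName) {
        return hbaseDir + "/" + tableName + "/" + encodedName;
    }

    // Hypothetical namespace-aware construction: split the namespace off
    // the table name and map it to its own directory level.
    static String namespaceAwarePath(String hbaseDir, String tableName, String encodedName) {
        String[] parts = tableName.split(":", 2);
        String ns = parts.length == 2 ? parts[0] : "default";
        String qualifier = parts.length == 2 ? parts[1] : parts[0];
        return hbaseDir + "/data/" + ns + "/" + qualifier + "/" + encodedName;
    }

    public static void main(String[] args) {
        String enc = "438f84dea5d0e24e390de927deb8a84e";
        // Keeps the ':' that DFS rejects: /hbase/hbase:labels/<enc>
        System.out.println(buggyPath("/hbase", "hbase:labels", enc));
        // No ':' anywhere: /hbase/data/hbase/labels/<enc>
        System.out.println(namespaceAwarePath("/hbase", "hbase:labels", enc));
    }
}
```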





[jira] [Commented] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920546#comment-13920546
 ] 

rajeshbabu commented on HBASE-10669:


+1

> [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
> --
>
> Key: HBASE-10669
> URL: https://issues.apache.org/jira/browse/HBASE-10669
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.11, 0.96.0
>Reporter: Deepak Sharma
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: Hbck_usage_issue.patch
>
>
> Usage is wrong for the hbck tool's -sidelineCorruptHfiles option.
> The usage text reads:
> -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
> -checkCorruptHfiles
> Here "sidelineCorruptHfiles" and "checkCorruptHfiles" are spelled with a 
> small 'f', but the code actually checks:
>   else if (cmd.equals("-checkCorruptHFiles")) {
>     checkCorruptHFiles = true;
>   } else if (cmd.equals("-sidelineCorruptHFiles")) {
>     sidelineCorruptHFiles = true;
>   }
> So if we use the sidelineCorruptHfiles option with hbck, it gives the error:
> Unrecognized option:-sidelineCorruptHfiles
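The mismatch above is plain case sensitivity in the option matching. A minimal standalone sketch; only the option strings mirror hbck's, the parser and its return values are illustrative:

```java
public class OptionCaseSketch {
    // Case-sensitive matching, as in the quoted hbck snippet: the
    // lowercase-'f' spelling from the usage text never matches.
    static String parse(String cmd) {
        if (cmd.equals("-checkCorruptHFiles")) {
            return "checkCorruptHFiles";
        } else if (cmd.equals("-sidelineCorruptHFiles")) {
            return "sidelineCorruptHFiles";
        }
        return "Unrecognized option:" + cmd;
    }

    public static void main(String[] args) {
        // Spelling from the usage text: rejected.
        System.out.println(parse("-sidelineCorruptHfiles"));
        // Spelling from the code: accepted.
        System.out.println(parse("-sidelineCorruptHFiles"));
    }
}
```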





[jira] [Updated] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10669:
---

Affects Version/s: (was: 0.96.1.1)
                   (was: 0.99.0)
                   (was: 0.98.1)
                   (was: 0.96.2)
                   0.94.11
                   0.96.0

> [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
> --
>
> Key: HBASE-10669
> URL: https://issues.apache.org/jira/browse/HBASE-10669
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.11, 0.96.0
>Reporter: Deepak Sharma
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: Hbck_usage_issue.patch
>
>
> Usage is wrong for the hbck tool's -sidelineCorruptHfiles option.
> The usage text reads:
> -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
> -checkCorruptHfiles
> Here "sidelineCorruptHfiles" and "checkCorruptHfiles" are spelled with a 
> small 'f', but the code actually checks:
>   else if (cmd.equals("-checkCorruptHFiles")) {
>     checkCorruptHFiles = true;
>   } else if (cmd.equals("-sidelineCorruptHFiles")) {
>     sidelineCorruptHFiles = true;
>   }
> So if we use the sidelineCorruptHfiles option with hbck, it gives the error:
> Unrecognized option:-sidelineCorruptHfiles





[jira] [Updated] (HBASE-10669) [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option

2014-03-04 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-10669:
---

Fix Version/s: 0.94.18

> [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
> --
>
> Key: HBASE-10669
> URL: https://issues.apache.org/jira/browse/HBASE-10669
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.96.2, 0.98.1, 0.99.0, 0.96.1.1
>Reporter: Deepak Sharma
>Priority: Minor
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: Hbck_usage_issue.patch
>
>
> Usage is wrong for the hbck tool's -sidelineCorruptHfiles option.
> The usage text reads:
> -sidelineCorruptHfiles  Quarantine corrupted HFiles.  implies 
> -checkCorruptHfiles
> Here "sidelineCorruptHfiles" and "checkCorruptHfiles" are spelled with a 
> small 'f', but the code actually checks:
>   else if (cmd.equals("-checkCorruptHFiles")) {
>     checkCorruptHFiles = true;
>   } else if (cmd.equals("-sidelineCorruptHFiles")) {
>     sidelineCorruptHFiles = true;
>   }
> So if we use the sidelineCorruptHfiles option with hbck, it gives the error:
> Unrecognized option:-sidelineCorruptHfiles




