[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-07-20 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633078#comment-14633078
 ] 

Francis Liu commented on HBASE-6721:


Unless I'm missing something it seems the review is still pending. 

[~jmhsieh] would you still be able to complete the review? :-)

 RegionServer Group based Assignment
 ---

 Key: HBASE-6721
 URL: https://issues.apache.org/jira/browse/HBASE-6721
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: 6721-master-webUI.patch, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721_10.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
 HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
 HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
 HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
 HBASE-6721_94_7.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, 
 HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, HBASE-6721_trunk2.patch


 In multi-tenant deployments of HBase, it is likely that a RegionServer will 
 be serving out regions from a number of different tables owned by various 
 client applications. Being able to group a subset of running RegionServers 
 and assign specific tables to that group provides a client application a level of 
 isolation and resource allocation.
 The proposal essentially is to have an AssignmentManager which is aware of 
 RegionServer groups and assigns tables to region servers based on those groupings. 
 Load balancing will occur on a per-group basis as well. 
 This is essentially a simplification of the approach taken in HBASE-4120. See 
 attached document.
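 As a rough illustration of the grouping bookkeeping described above (a hedged 
 sketch only; the RSGroupBook class and its fields are hypothetical and not taken 
 from the attached design or patches):
 {code}
 import java.util.*;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.TableName;

 // Hypothetical sketch: map servers and tables to named groups, then restrict the
 // assignment candidates for a region to the servers of its table's group.
 // Balancing would then run over each group's servers independently.
 class RSGroupBook {
   private final Map<String, Set<ServerName>> groupToServers = new HashMap<>();
   private final Map<TableName, String> tableToGroup = new HashMap<>();

   Set<ServerName> candidatesFor(TableName table) {
     String group = tableToGroup.getOrDefault(table, "default");
     return groupToServers.getOrDefault(group, Collections.emptySet());
   }
 }
 {code}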



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12374:
---
Attachment: HBASE-12374_v3.patch

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells.  Currently they are byte[] backed.
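 As a rough illustration of the difference (hypothetical minimal interfaces, not 
 HBase's actual Cell API):
 {code}
 import java.nio.ByteBuffer;

 // byte[]-backed value access: the value must live on-heap in an array.
 interface ArrayBackedValue {
   byte[] getValueArray();
   int getValueOffset();
   int getValueLength();
 }

 // ByteBuffer-backed value access: the value can be handed out from a direct
 // (off-heap) buffer without copying it onto the heap.
 interface BufferBackedValue {
   ByteBuffer getValueByteBuffer();
   int getValuePosition();
   int getValueLength();
 }
 {code}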



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12295) Prevent block eviction under us if reads are in progress from the BBs

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12295:
---
Attachment: HBASE-12295_18.patch

Patch for QA.

 Prevent block eviction under us if reads are in progress from the BBs
 -

 Key: HBASE-12295
 URL: https://issues.apache.org/jira/browse/HBASE-12295
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-12295.pdf, HBASE-12295_1.patch, HBASE-12295_1.pdf, 
 HBASE-12295_10.patch, HBASE-12295_12.patch, HBASE-12295_14.patch, 
 HBASE-12295_15.patch, HBASE-12295_16.patch, HBASE-12295_16.patch, 
 HBASE-12295_17.patch, HBASE-12295_18.patch, HBASE-12295_2.patch, 
 HBASE-12295_4.patch, HBASE-12295_4.pdf, HBASE-12295_5.pdf, 
 HBASE-12295_9.patch, HBASE-12295_trunk.patch


 While we try to serve the reads from the BBs directly from the block cache, 
 we need to ensure that the blocks do not get evicted under us while 
 reading.  This JIRA is to discuss and implement a strategy for the same.
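 One common strategy (a hedged sketch only; the class and method names here are 
 hypothetical, not the attached design) is to reference-count a block while a read 
 is using its backing buffers, and let the cache evict it only when no reader holds it:
 {code}
 import java.util.concurrent.atomic.AtomicInteger;

 // Hypothetical sketch: readers pin a cached block before reading from its
 // ByteBuffers and unpin when done; eviction succeeds only for unpinned blocks.
 class RefCountedBlock {
   private final AtomicInteger refCount = new AtomicInteger(0);

   void pin()   { refCount.incrementAndGet(); }   // called before a read starts
   void unpin() { refCount.decrementAndGet(); }   // called after the read finishes

   boolean tryEvict() {
     // Evict only if no reader currently holds the block; real code would need to
     // make this check-and-evict step atomic with respect to pin().
     return refCount.get() == 0;
   }
 }
 {code}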



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14116) Change ByteBuff.getXXXStrictlyForward to relative position based reads

2015-07-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14116:
---
Attachment: HBASE-14116_v2.patch

 Change ByteBuff.getXXXStrictlyForward to relative position based reads
 --

 Key: HBASE-14116
 URL: https://issues.apache.org/jira/browse/HBASE-14116
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14116.patch, HBASE-14116_v2.patch


 There is a TODO added in ByteBuff.getXXXStrictlyForward to change it to a relative, 
 position-based read from the current position. This could help in avoiding the 
 initial check that is added in the API to ensure that the passed-in index is always 
 greater than the current position.
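 A toy model of the two read styles (plain java.nio, not HBase's ByteBuff; the class 
 is invented for this sketch):
 {code}
 import java.nio.ByteBuffer;

 final class StrictlyForwardVsRelative {
   private final ByteBuffer buf;
   StrictlyForwardVsRelative(ByteBuffer buf) { this.buf = buf; }

   // Absolute-index read guarded by the extra "never read backwards" check that
   // the TODO wants to avoid.
   int getIntStrictlyForward(int index) {
     if (index < buf.position()) {
       throw new IndexOutOfBoundsException("index is behind the current position");
     }
     return buf.getInt(index);
   }

   // Relative read from the current position: no backwards check is needed.
   int getIntRelative() {
     return buf.getInt();   // advances the position by 4
   }
 }
 {code}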



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14116) Change ByteBuff.getXXXStrictlyForward to relative position based reads

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633250#comment-14633250
 ] 

ramkrishna.s.vasudevan commented on HBASE-14116:


+1 for V2.

 Change ByteBuff.getXXXStrictlyForward to relative position based reads
 --

 Key: HBASE-14116
 URL: https://issues.apache.org/jira/browse/HBASE-14116
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14116.patch, HBASE-14116_v2.patch


 There is a TODO added in ByteBuff.getXXXStrictlyForward to change it to a relative, 
 position-based read from the current position. This could help in avoiding the 
 initial check that is added in the API to ensure that the passed-in index is always 
 greater than the current position.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633388#comment-14633388
 ] 

ramkrishna.s.vasudevan commented on HBASE-14120:


+1. Good find and good point to learn.

 ByteBufferUtils#compareTo small optimization
 

 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14120.patch


 We have it like
 {code}
 if (UnsafeAccess.isAvailable()) {
   long offset1Adj, offset2Adj;
   Object refObj1 = null, refObj2 = null;
   if (buf1.hasArray()) {
     offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
     refObj1 = buf1.array();
   } else {
     offset1Adj = o1 + ((DirectBuffer) buf1).address();
   }
   if (buf2.hasArray()) {
 {code}
 Instead of the hasArray() check we can have an isDirect() check and reverse the 
 if/else block, because we will be making BB backed cells when it is an offheap BB. 
 So when the code reaches here for the comparison, it will be a direct BB.
 A JMH test proves it.
 {code}
 Benchmark                           Mode  Cnt         Score         Error  Units
 OnHeapVsOffHeapComparer.offheap    thrpt    4  50516432.643 ±  651828.103  ops/s
 OnHeapVsOffHeapComparer.offheapOld  thrpt    4  37696698.093 ± 1121685.293  ops/s
 {code}
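 A sketch of the reordered check (assumed shape only, not necessarily the committed 
 patch):
 {code}
 // Check isDirect() first: in the off-heap read path the buffers reaching this
 // comparison are expected to be direct BBs, so the common case is tested first.
 if (buf1.isDirect()) {
   offset1Adj = o1 + ((DirectBuffer) buf1).address();
 } else {
   offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
   refObj1 = buf1.array();
 }
 // ... buf2 gets the same treatment before the unsafe word-by-word compare.
 {code}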



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633398#comment-14633398
 ] 

ramkrishna.s.vasudevan commented on HBASE-13981:


True that it is not getting used. But the idea was to use 
ATTRIBUTE_SEPERATOR_CONF_KEY as a key to specify the attribute separator. Now 
DEFAULT_ATTRIBUTES_SEPERATOR is only used in a sample custom mapper, 
TsvImporterCustomTestMapperForOprAttr. Maybe it is better to keep it and write 
a doc?
{code}
  separator = conf.get(ImportTsv.SEPARATOR_CONF_KEY);
{code}
The idea was for it to work the same way as this SEPARATOR_CONF_KEY.
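A hedged sketch of that idea (the surrounding mapper class and the attributeSeparator 
field are assumed, and the constants are taken as String-valued as referenced in this 
thread):
{code}
@Override
protected void setup(Context context) {
  Configuration conf = context.getConfiguration();
  // Read the attribute separator the same way the column separator is read today,
  // falling back to the default when the key is not set in the job conf.
  attributeSeparator = conf.get(ImportTsv.ATTRIBUTE_SEPERATOR_CONF_KEY,
      ImportTsv.DEFAULT_ATTRIBUTES_SEPERATOR);
}
{code}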

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = "attributes.seperator";
 {noformat}
 It is separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix wrong separator being output, it is not -1 (wrong constant use in 
 code)
 - General wording/style could be better (eg. last sentence now uses 
 OperationAttributes without a space).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633220#comment-14633220
 ] 

ramkrishna.s.vasudevan commented on HBASE-13954:


Patch LGTM. Just one question: since you have addressed Thrift here, should the 
REST API also be changed?

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954-v6.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633224#comment-14633224
 ] 

Ashish Singhi commented on HBASE-13954:
---

I have already addressed it here, Ram.
It was only used in one place, in RemoteHTable, where it was throwing an 
exception on calling this API.
{code}
public Result getRowOrBefore(byte[] row, byte[] family) throws IOException {
  throw new IOException("getRowOrBefore not supported");
}
{code}
Thanks.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954-v6.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633231#comment-14633231
 ] 

Anoop Sam John commented on HBASE-13954:


Will commit tonight IST unless objection.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954-v6.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12295) Prevent block eviction under us if reads are in progress from the BBs

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12295:
---
Status: Open  (was: Patch Available)

 Prevent block eviction under us if reads are in progress from the BBs
 -

 Key: HBASE-12295
 URL: https://issues.apache.org/jira/browse/HBASE-12295
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-12295.pdf, HBASE-12295_1.patch, HBASE-12295_1.pdf, 
 HBASE-12295_10.patch, HBASE-12295_12.patch, HBASE-12295_14.patch, 
 HBASE-12295_15.patch, HBASE-12295_16.patch, HBASE-12295_16.patch, 
 HBASE-12295_17.patch, HBASE-12295_2.patch, HBASE-12295_4.patch, 
 HBASE-12295_4.pdf, HBASE-12295_5.pdf, HBASE-12295_9.patch, 
 HBASE-12295_trunk.patch


 While we try to serve the reads from the BBs directly from the block cache, 
 we need to ensure that the blocks do not get evicted under us while 
 reading.  This JIRA is to discuss and implement a strategy for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14122) Client API for determining if server side supports cell level security

2015-07-20 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633381#comment-14633381
 ] 

Jingcheng Du commented on HBASE-14122:
--

Hi, can I take it?

 Client API for determining if server side supports cell level security
 --

 Key: HBASE-14122
 URL: https://issues.apache.org/jira/browse/HBASE-14122
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0


 Add a client API for determining if the server side supports cell level 
 security. 
 Ask the master, assuming as we do in many other instances that the master and 
 regionservers all have a consistent view of site configuration.
 Return {{true}} if all features required for cell level security are present, 
 {{false}} otherwise, or throw {{UnsupportedOperationException}} if the master 
 does not have support for the RPC call.
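 A purely illustrative sketch of how a client might use such an API (the method 
 name isCellLevelSecuritySupported() is hypothetical; no such call exists yet):
 {code}
 try {
   // Hypothetical Admin call of the kind this issue proposes.
   boolean supported = admin.isCellLevelSecuritySupported();
   if (!supported) {
     // Site configuration is missing something cell-level security requires.
   }
 } catch (UnsupportedOperationException e) {
   // The master predates this RPC and cannot answer the question.
 }
 {code}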



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13954) Remove HTableInterface#getRowOrBefore related server side code

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633233#comment-14633233
 ] 

ramkrishna.s.vasudevan commented on HBASE-13954:


Okie. I see that now. +1.

 Remove HTableInterface#getRowOrBefore related server side code
 --

 Key: HBASE-13954
 URL: https://issues.apache.org/jira/browse/HBASE-13954
 Project: HBase
  Issue Type: Sub-task
  Components: API
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13954(1).patch, HBASE-13954-v1.patch, 
 HBASE-13954-v2.patch, HBASE-13954-v3.patch, HBASE-13954-v4.patch, 
 HBASE-13954-v5.patch, HBASE-13954-v6.patch, HBASE-13954.patch


 As part of HBASE-13214 review, [~anoop.hbase] had a review comment on the 
 review board to remove all the server side related code for getRowOrBefore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Status: Open  (was: Patch Available)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Status: Patch Available  (was: Open)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14099:
---
Component/s: Scanners
 Performance

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14099:
---
Priority: Major  (was: Minor)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633446#comment-14633446
 ] 

Anoop Sam John commented on HBASE-14099:


{code}
    || compareLastKey(lastKeyKV, smallestScanRow) < 0;
  return !nonOverLapping;
}

protected int compareLastKey(Cell lastKeyKV, byte[] scanRow) {
  int diff = getComparator().compareRows(lastKeyKV, scanRow, 0, scanRow.length);
  if (diff != 0) {
    return diff;
  }
  // if the rows do not match then at least the family or qualifier will make the
  // lastKeyKV greater.  There should not be a need to compare the ts
  return 1;
}
{code}

Do you really need this extra logic for the compare? Now we are only comparing 
the row keys. Only when the smallestScanRow is greater than the lastKeyKV do we have to 
set the boolean. Meaning when both are the same (return of compare is 0) we don't want 
to set it, and that is what the simple compare op will also do.
So getComparator().compareRows(lastKeyKV, smallestScanRow, 0, 
smallestScanRow.length) < 0 would be enough.
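A minimal sketch of the check in that shape (assumed form, not necessarily the 
committed patch; firstKeyKV/lastKeyKV are the store file's cached boundary cells, and 
handling of empty start/stop rows is elided):
{code}
byte[] smallestScanRow = scan.isReversed() ? scan.getStopRow() : scan.getStartRow();
byte[] largestScanRow  = scan.isReversed() ? scan.getStartRow() : scan.getStopRow();
// Non-overlapping iff the file's first key is past the largest scan row, or its
// last key is before the smallest scan row; no KeyValue copies are created.
boolean nonOverLapping =
    getComparator().compareRows(firstKeyKV, largestScanRow, 0, largestScanRow.length) > 0
    || getComparator().compareRows(lastKeyKV, smallestScanRow, 0, smallestScanRow.length) < 0;
return !nonOverLapping;
{code}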

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Attachment: storefile.png
HBASE-14099_1.patch

Just refactored it as suggested. Also, KVUtil.createLastOnRow was creating a 
KV with LATEST_TIMESTAMP; it should be OLDEST_TIMESTAMP.
Also attaching a screenshot from a profiler that shows the KV creation that 
happens by copying the byte[] before the patch.  We avoid two object creations, 
including their copies.
Thanks for the review.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633474#comment-14633474
 ] 

Ashish Singhi commented on HBASE-13981:
---

Sounds good to me too.

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = "attributes.seperator";
 {noformat}
 It is separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix wrong separator being output, it is not -1 (wrong constant use in 
 code)
 - General wording/style could be better (eg. last sentence now uses 
 OperationAttributes without a space).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14120:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master. Thanks for the review Ram.

 ByteBufferUtils#compareTo small optimization
 

 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14120.patch


 We have it like
 {code}
 if (UnsafeAccess.isAvailable()) {
   long offset1Adj, offset2Adj;
   Object refObj1 = null, refObj2 = null;
   if (buf1.hasArray()) {
     offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
     refObj1 = buf1.array();
   } else {
     offset1Adj = o1 + ((DirectBuffer) buf1).address();
   }
   if (buf2.hasArray()) {
 {code}
 Instead of the hasArray() check we can have an isDirect() check and reverse the 
 if/else block, because we will be making BB backed cells when it is an offheap BB. 
 So when the code reaches here for the comparison, it will be a direct BB.
 A JMH test proves it.
 {code}
 Benchmark                           Mode  Cnt         Score         Error  Units
 OnHeapVsOffHeapComparer.offheap    thrpt    4  50516432.643 ±  651828.103  ops/s
 OnHeapVsOffHeapComparer.offheapOld  thrpt    4  37696698.093 ± 1121685.293  ops/s
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633437#comment-14633437
 ] 

Anoop Sam John commented on HBASE-14120:


I don't think the test issues are related to this patch.  Will commit now..

When the method was first implemented, I forgot to apply this change to the if 
condition check in the patch.  I had made that change in my JMH env at the time 
of the perf experiments.  My bad.. only noticed it was still this way after some days.

 ByteBufferUtils#compareTo small optimization
 

 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14120.patch


 We have it like
 {code}
 if (UnsafeAccess.isAvailable()) {
   long offset1Adj, offset2Adj;
   Object refObj1 = null, refObj2 = null;
   if (buf1.hasArray()) {
     offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
     refObj1 = buf1.array();
   } else {
     offset1Adj = o1 + ((DirectBuffer) buf1).address();
   }
   if (buf2.hasArray()) {
 {code}
 Instead of the hasArray() check we can have an isDirect() check and reverse the 
 if/else block, because we will be making BB backed cells when it is an offheap BB. 
 So when the code reaches here for the comparison, it will be a direct BB.
 A JMH test proves it.
 {code}
 Benchmark                           Mode  Cnt         Score         Error  Units
 OnHeapVsOffHeapComparer.offheap    thrpt    4  50516432.643 ±  651828.103  ops/s
 OnHeapVsOffHeapComparer.offheapOld  thrpt    4  37696698.093 ± 1121685.293  ops/s
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633442#comment-14633442
 ] 

Anoop Sam John commented on HBASE-13981:


+1 to move the key and default value constants to that new mapper.  BTW, in the 
new mapper we just use the default value only; we are not reading the conf and 
using the specified value.  That also has to be corrected.  To be on the safe 
side, we will just deprecate this conf here.  A new one is to be added in 
TsvImporterCustomTestMapperForOprAttr (and the default value moved there as well).

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = "attributes.seperator";
 {noformat}
 It is separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix wrong separator being output, it is not -1 (wrong constant use in 
 code)
 - General wording/style could be better (eg. last sentence now uses 
 OperationAttributes without a space).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14099:
---
Issue Type: Improvement  (was: Bug)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633485#comment-14633485
 ] 

Hadoop QA commented on HBASE-12374:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746056/HBASE-12374_v3.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12746056

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14834//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14834//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14834//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14834//console

This message is automatically generated.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells.  Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633465#comment-14633465
 ] 

ramkrishna.s.vasudevan commented on HBASE-14099:


Let me check if I can reduce the createFirstDeleteFamilyOnRow also.  I forgot 
about that.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14116) Change ByteBuff.getXXXStrictlyForward to relative position based reads

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633495#comment-14633495
 ] 

Hadoop QA commented on HBASE-14116:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746058/HBASE-14116_v2.patch
  against master branch at commit 338e39970ba8e4835733669b9252d073b2157b8a.
  ATTACHMENT ID: 12746058

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14833//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14833//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14833//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14833//console

This message is automatically generated.

 Change ByteBuff.getXXXStrictlyForward to relative position based reads
 --

 Key: HBASE-14116
 URL: https://issues.apache.org/jira/browse/HBASE-14116
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14116.patch, HBASE-14116_v2.patch


 There is a TODO added in ByteBuff.getXXXStrictlyForward to change it to a relative, 
 position-based read from the current position. This could help in avoiding the 
 initial check that is added in the API to ensure that the passed-in index is always 
 greater than the current position.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633563#comment-14633563
 ] 

Anoop Sam John commented on HBASE-14099:


+1. 

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Attachment: HBASE-14099_2.patch

Updated patch that handles the createFirstOnRowDeleteFamily also. Now we create 
a cell object without copying by using the FakeCell concept. 
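A rough illustration of the no-copy idea (toy types only; RowCell and FakeRowCell are 
hypothetical, not the actual classes in the patch):
{code}
// The "fake" cell wraps the scan's row byte[] directly instead of copying it into a
// freshly allocated KeyValue, so a comparator can still treat it as a cell.
interface RowCell {
  byte[] getRowArray();
  int getRowOffset();
  int getRowLength();
}

final class FakeRowCell implements RowCell {
  private final byte[] row;                    // shared with the Scan, never copied
  FakeRowCell(byte[] row) { this.row = row; }
  public byte[] getRowArray() { return row; }
  public int getRowOffset()   { return 0; }
  public int getRowLength()   { return row.length; }
}
{code}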

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Status: Patch Available  (was: Open)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Attachment: HBASE-14099_3.patch

Updated patch. Renamed to createFirstDeleteFamilyCellOnRow.  Also removed 
KVUtil.createFirstDeleteFamilyOnRow() as per suggestion by Anoop.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Status: Patch Available  (was: Open)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633580#comment-14633580
 ] 

ramkrishna.s.vasudevan commented on HBASE-12374:


I checked the patch end to end. LGTM.
Great work.
Can we rename getKeyValue to getCell as part of this patch itself?  I think I 
did not do that in the DBE case.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells.  Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633583#comment-14633583
 ] 

ramkrishna.s.vasudevan commented on HBASE-12374:


One query:
Will it be better to copy the tags part in the tagCompressionContext == null && 
includeTags case using a separate asSubByteBuffer call, rather than 
copying them in one call?

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells.  Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633595#comment-14633595
 ] 

ramkrishna.s.vasudevan commented on HBASE-14099:


Will commit after QA run.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633597#comment-14633597
 ] 

Anoop Sam John commented on HBASE-12374:


Ya I can do the rename.

bq. Will it be better to copy the tags part in the tagCompressionContext == null && 
includeTags case using a separate asSubByteBuffer call, rather than copying them in 
one call?

My thinking here was that mostly there won't be a copy needed for this asSubByteBuffer 
call.  Even if a copy is needed, we do it in one call and make one object 
including both the tags and value parts (the tags here are not compressed at all).  The 
only case where this is an overhead (one call rather than 2 calls for value and 
tags) is when the value part ends exactly at one item BB and the tags part starts in the 
next BB.  Then one call for value and tags will need a copy whereas the other model 
does not need one at all.  Very rare chance.  
2 calls to asSubByteBuffer have the overhead of calculations in the MBB, like finding the 
position and the item BB. IMO the current way is ok.
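A toy illustration of that trade-off (plain java.nio, not HBase's MultiByteBuff; the 
method shape is invented for this sketch):
{code}
import java.nio.ByteBuffer;

final class SubRangeDemo {
  // A sub-range that fits inside one backing item can be exposed without copying;
  // only a range straddling two items forces a copy. Combining value+tags into one
  // such call therefore copies only in the rare straddling case.
  static ByteBuffer subRange(ByteBuffer[] items, int itemIndex, int offsetInItem, int len) {
    ByteBuffer item = items[itemIndex];
    if (offsetInItem + len <= item.limit()) {
      ByteBuffer window = item.duplicate();          // no data copy
      window.position(offsetInItem);
      window.limit(offsetInItem + len);
      return window.slice();
    }
    ByteBuffer copy = ByteBuffer.allocate(len);      // straddles two items: copy needed
    ByteBuffer first = item.duplicate();
    first.position(offsetInItem);
    copy.put(first);
    ByteBuffer second = items[itemIndex + 1].duplicate();
    second.limit(len - (item.limit() - offsetInItem));
    copy.put(second);
    copy.flip();
    return copy;
  }
}
{code}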

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we change the read path to use BB based cells, the DBEs should also 
 return BB based cells.  Currently they are byte[] backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Status: Open  (was: Patch Available)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Status: Open  (was: Patch Available)

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633537#comment-14633537
 ] 

Lars Hofhansl commented on HBASE-13992:
---

super minor nit: it's best to have an attached patch end in .txt (which I 
prefer) or .patch, so that browsers can open it directly. :)

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633547#comment-14633547
 ] 

Lars Hofhansl commented on HBASE-13992:
---

From the example:
{code}
List<Tuple2<byte[], List<Tuple3<byte[], byte[], byte[]>>>> results = 
javaRdd.collect();
{code}

Still does a map between byte[] and Result, and so will be different from what the 
HadoopRDD would do. (unless I am missing something)
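
For comparison, this is roughly what the read looks like via the stock 
TableInputFormat/newAPIHadoopRDD route, where the RDD elements are 
(ImmutableBytesWritable, Result) pairs rather than tuples of byte[] (a sketch only; the 
table name "t1" and the JavaSparkContext jsc are assumed to be set up elsewhere):
{code}
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.api.java.JavaPairRDD;

import scala.Tuple2;

// Reading an HBase table as an RDD of (ImmutableBytesWritable, Result).
Configuration conf = HBaseConfiguration.create();
conf.set(TableInputFormat.INPUT_TABLE, "t1");   // assumed table name

JavaPairRDD<ImmutableBytesWritable, Result> hbaseRdd =
    jsc.newAPIHadoopRDD(conf, TableInputFormat.class,
        ImmutableBytesWritable.class, Result.class);

// Each element carries the full Result, so callers pick columns themselves.
List<Tuple2<ImmutableBytesWritable, Result>> rows = hbaseRdd.collect();
{code}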


 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633630#comment-14633630
 ] 

Ted Malaska commented on HBASE-13992:
-

[~lhofhansl] My bad.  That shouldn't be too hard to fix.  I will have that in 
the next patch.

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12465) HBase master start fails due to incorrect file creations

2015-07-20 Thread Sumit Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633621#comment-14633621
 ] 

Sumit Nigam commented on HBASE-12465:
-

I am using HBase 0.98.6 and have the same issue.

 HBase master start fails due to incorrect file creations
 

 Key: HBASE-12465
 URL: https://issues.apache.org/jira/browse/HBASE-12465
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
 Environment: Ubuntu
Reporter: Biju Nair
Assignee: Alicia Ying Shu
  Labels: hbase, hbase-bulkload

 - Start of HBase master fails due to the following error found in the log.
 2014-11-11 20:25:58,860 WARN org.apache.hadoop.hbase.backup.HFileArchiver: 
 Failed to archive class 
 org.apache.hadoop.hbase.backup.HFileArchiver$FileablePa
 th,file:hdfs:///hbase/.tmp/data/default/tbl/00820520f5cb7839395e83f40c8d97c2/e/52bf9eee7a27460c8d9e2a26fa43c918_SeqId_282271246_
  on try #1
 org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=hbase,access=WRITE,inode=/hbase/.tmp/data/default/tbl/00820520f5cb7839395e83f40c8d97c2/e/52bf9eee7a27460c8d9e2a26fa43c918_SeqId_282271246_:devuser:supergroup:-rwxr-xr-x
 -  All the files that the HBase master was complaining about were created under a 
 user's user-id instead of the hbase user, resulting in incorrect access 
 permissions for the master to act on them.
 - Looks like this was due to a bulk load done using the LoadIncrementalHFiles 
 program.
 - HBASE-12052 is another scenario similar to this one. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-07-20 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633661#comment-14633661
 ] 

Jonathan Hsieh commented on HBASE-6721:
---

Sorry, I didn't realize I was blocking.  I can't commit to doing a review in 
the short term -- please proceed without me.  Let me suggest that we commit 
what we have with a cursory review to a branch and then make progress there.



 RegionServer Group based Assignment
 ---

 Key: HBASE-6721
 URL: https://issues.apache.org/jira/browse/HBASE-6721
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: 6721-master-webUI.patch, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721_10.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
 HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
 HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
 HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
 HBASE-6721_94_7.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, 
 HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, HBASE-6721_trunk2.patch


 In multi-tenant deployments of HBase, it is likely that a RegionServer will 
 be serving out regions from a number of different tables owned by various 
 client applications. Being able to group a subset of running RegionServers 
 and assign specific tables to it, provides a client application a level of 
 isolation and resource allocation.
 The proposal essentially is to have an AssignmentManager which is aware of 
 RegionServer groups and assigns tables to region servers based on groupings. 
 Load balancing will occur on a per group basis as well. 
 This is essentially a simplification of the approach taken in HBASE-4120. See 
 attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633666#comment-14633666
 ] 

Hadoop QA commented on HBASE-14099:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746092/HBASE-14099_2.patch
  against master branch at commit 88038cf473a17a6b059902e080f1bc10d338b7c9.
  ATTACHMENT ID: 12746092

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testStoreFileCacheOnWriteInternals(TestCacheOnWrite.java:270)
at 
org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testStoreFileCacheOnWrite(TestCacheOnWrite.java:473)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14836//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14836//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14836//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14836//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14836//console

This message is automatically generated.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633668#comment-14633668
 ] 

Hadoop QA commented on HBASE-14099:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746086/HBASE-14099_1.patch
  against master branch at commit 88038cf473a17a6b059902e080f1bc10d338b7c9.
  ATTACHMENT ID: 12746086

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14835//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14835//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14835//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14835//console

This message is automatically generated.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14120) ByteBufferUtils#compareTo small optimization

2015-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633670#comment-14633670
 ] 

Hudson commented on HBASE-14120:


FAILURE: Integrated in HBase-TRUNK #6664 (See 
[https://builds.apache.org/job/HBase-TRUNK/6664/])
HBASE-14120 ByteBufferUtils#compareTo small optimization. (anoopsamjohn: rev 
88038cf473a17a6b059902e080f1bc10d338b7c9)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java


 ByteBufferUtils#compareTo small optimization
 

 Key: HBASE-14120
 URL: https://issues.apache.org/jira/browse/HBASE-14120
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-14120.patch


 We have it like
 {code}
 if (UnsafeAccess.isAvailable()) {
   long offset1Adj, offset2Adj;
   Object refObj1 = null, refObj2 = null;
   if (buf1.hasArray()) {
     offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
     refObj1 = buf1.array();
   } else {
     offset1Adj = o1 + ((DirectBuffer) buf1).address();
   }
   if (buf2.hasArray()) {
 {code}
 Instead of the hasArray() check we can do an isDirect() check and reverse the if/else 
 block, because we will be making BB-backed cells when it is an offheap BB. So when the 
 code reaches here for comparison, it will be a direct BB.
 A JMH test proves it:
 {code}
 Benchmark                            Mode  Cnt         Score         Error  Units
 OnHeapVsOffHeapComparer.offheap     thrpt    4  50516432.643 ±  651828.103  ops/s
 OnHeapVsOffHeapComparer.offheapOld  thrpt    4  37696698.093 ± 1121685.293  ops/s
 {code}
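
 For clarity, the reordering described above would look roughly like this (a sketch that 
 mirrors the fragment quoted earlier, not the committed patch): test isDirect() first, so 
 the direct-BB case that actually reaches this comparison on the off-heap read path takes 
 the first branch.
 {code}
 if (UnsafeAccess.isAvailable()) {
   long offset1Adj, offset2Adj;
   Object refObj1 = null, refObj2 = null;
   if (buf1.isDirect()) {
     // Expected common case on the off-heap read path: a direct BB.
     offset1Adj = o1 + ((DirectBuffer) buf1).address();
   } else {
     offset1Adj = o1 + buf1.arrayOffset() + UnsafeAccess.BYTE_ARRAY_BASE_OFFSET;
     refObj1 = buf1.array();
   }
   // ... same treatment for buf2, then the unsafe comparison loop as before ...
 }
 {code}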



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12374:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks Ram.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we are changing the read path to use BB based cell then the DBEs should 
 also return BB based cells.  Currently they are byte[] array backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633905#comment-14633905
 ] 

Anoop Sam John commented on HBASE-12374:


bq.TEST-org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS.xml.init
Test case failure is not related.


 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we are changing the read path to use BB based cell then the DBEs should 
 also return BB based cells.  Currently they are byte[] array backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2015-07-20 Thread Pankaj Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633769#comment-14633769
 ] 

Pankaj Kumar commented on HBASE-13788:
--

[~bpshuai], here the custom formatting is only for the shell client; the database won't 
have any info about a qualifier's converter (please correct me if I'm wrong). 
Values are stored as-is in HBase. 

Users can format values column-wise by appending the formatter to the column name in the 
'scan' or 'get' specification.

{noformat}
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
   'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] } 
{noformat}

The only concern is the hbase shell client changes.

 Shell commands do not support column qualifiers containing colon (:)
 ---

 Key: HBASE-13788
 URL: https://issues.apache.org/jira/browse/HBASE-13788
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
Reporter: Dave Latham
Assignee: Pankaj Kumar

 The shell interprets the colon within the qualifier as a delimiter to a 
 FORMATTER instead of part of the qualifier itself.
 Example from the mailing list:
 Hmph, I may have spoken too soon. I know I tested this at one point and
 it worked, but now I'm getting different results:
 On the new cluster, I created a duplicate test table:
 hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}
 Then I pull some data from the imported table:
 hbase(main):045:0> scan 'content', {LIMIT => 1,
 STARTROW => 'A:9223370612089311807:twtr:57013379'}
 ROW  COLUMN+CELL
 
 A:9223370612089311807:twtr:570133798827921408
 column=x:twitter:username, timestamp=1424775595345, value=BERITA 
 INFORMASI!
 Then put it:
 hbase(main):046:0> put
 'content3','A:9223370612089311807:twtr:570133798827921408',
 'x:twitter:username', 'BERITA  INFORMASI!'
 But then when I query it, I see that I've lost the column qualifier
 :username:
 hbase(main):046:0> scan 'content3'
 ROW  COLUMN+CELL
  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
  timestamp=1432745301788, value=BERITA  INFORMASI!
 Even though I'm missing one of the qualifiers, I can at least filter on
 columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633832#comment-14633832
 ] 

ramkrishna.s.vasudevan commented on HBASE-12374:


Okie. I think it makes sense. 

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we are changing the read path to use BB based cell then the DBEs should 
 also return BB based cells.  Currently they are byte[] array backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13212) Procedure V2 - master Create/Modify/Delete namespace

2015-07-20 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-13212:
---
Attachment: HBASE-13212.v0-master.patch

 Procedure V2 - master Create/Modify/Delete namespace
 

 Key: HBASE-13212
 URL: https://issues.apache.org/jira/browse/HBASE-13212
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
  Labels: reliability
 Attachments: HBASE-13212.v0-master.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 master side, part of HBASE-12439
 starts up the procedure executor on the master
 and replaces the create/modify/delete namespace handlers with the procedure 
 version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13212) Procedure V2 - master Create/Modify/Delete namespace

2015-07-20 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633868#comment-14633868
 ] 

Stephen Yuan Jiang commented on HBASE-13212:


The patch is ready to be reviewed in https://reviews.apache.org/r/36619/ (note: 
I removed the generated file in the RB to reduce noise)

 Procedure V2 - master Create/Modify/Delete namespace
 

 Key: HBASE-13212
 URL: https://issues.apache.org/jira/browse/HBASE-13212
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
  Labels: reliability
 Attachments: HBASE-13212.v0-master.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 master side, part of HBASE-12439
 starts up the procedure executor on the master
 and replaces the create/modify/delete namespace handlers with the procedure 
 version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14098) Allow dropping caches behind compactions

2015-07-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14098:
--
Attachment: HBASE-14098-v2.patch

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2015-07-20 Thread Ben Shuai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633922#comment-14633922
 ] 

Ben Shuai commented on HBASE-13788:
---

Pankaj, 
What I suggested is exactly for the client only and has nothing to do with what 
is in the database. All the user needs to do is pick a token that is not used in 
his/her existing HBase qualifiers.  I used that trick when importing data from 
another technology to avoid such conflicts, as I encountered the same problem. By 
the way, I looked at the code and it seems feasible.

 Shell commands do not support column qualifiers containing colon (:)
 ---

 Key: HBASE-13788
 URL: https://issues.apache.org/jira/browse/HBASE-13788
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
Reporter: Dave Latham
Assignee: Pankaj Kumar

 The shell interprets the colon within the qualifier as a delimiter to a 
 FORMATTER instead of part of the qualifier itself.
 Example from the mailing list:
 Hmph, I may have spoken too soon. I know I tested this at one point and
 it worked, but now I'm getting different results:
 On the new cluster, I created a duplicate test table:
 hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}
 Then I pull some data from the imported table:
 hbase(main):045:0> scan 'content', {LIMIT => 1,
 STARTROW => 'A:9223370612089311807:twtr:57013379'}
 ROW  COLUMN+CELL
 
 A:9223370612089311807:twtr:570133798827921408
 column=x:twitter:username, timestamp=1424775595345, value=BERITA 
 INFORMASI!
 Then put it:
 hbase(main):046:0> put
 'content3','A:9223370612089311807:twtr:570133798827921408',
 'x:twitter:username', 'BERITA  INFORMASI!'
 But then when I query it, I see that I've lost the column qualifier
 :username:
 hbase(main):046:0> scan 'content3'
 ROW  COLUMN+CELL
  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
  timestamp=1432745301788, value=BERITA  INFORMASI!
 Even though I'm missing one of the qualifiers, I can at least filter on
 columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633949#comment-14633949
 ] 

Hadoop QA commented on HBASE-14099:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746130/HBASE-14099_4.patch
  against master branch at commit 88038cf473a17a6b059902e080f1bc10d338b7c9.
  ATTACHMENT ID: 12746130

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14838//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14838//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14838//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14838//console

This message is automatically generated.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, HBASE-14099_4.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Attachment: HBASE-14099_4.patch

bq.-1 core tests. The patch failed these unit tests:
org.apache.hadoop.hbase.client.TestFastFail

The failure is random I think.  Corrected the javadoc too. Will commit this 
patch now.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, HBASE-14099_4.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633802#comment-14633802
 ] 

Ted Malaska commented on HBASE-13992:
-

[~ted_yu] thanks I will have a new patch hopefully tonight and I will name it 
with .patch at the end.

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13212) Procedure V2 - master Create/Modify/Delete namespace

2015-07-20 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-13212:
---
Attachment: (was: HBASE-13212.v0-master.patch)

 Procedure V2 - master Create/Modify/Delete namespace
 

 Key: HBASE-13212
 URL: https://issues.apache.org/jira/browse/HBASE-13212
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
  Labels: reliability
   Original Estimate: 168h
  Remaining Estimate: 168h

 master side, part of HBASE-12439
 starts up the procedure executor on the master
 and replaces the create/modify/delete namespace handlers with the procedure 
 version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-14099:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master. Thanks for the reviews Anoop.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, HBASE-14099_4.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createFirstOnRow(scan.getStopRow())
       : KeyValueUtil.createFirstOnRow(scan.getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed()
       ? KeyValueUtil.createLastOnRow(scan.getStartRow())
       : KeyValueUtil.createLastOnRow(scan.getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633836#comment-14633836
 ] 

Anoop Sam John commented on HBASE-12374:


Thanks Ram.  Will commit in some time.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we are changing the read path to use BB based cell then the DBEs should 
 also return BB based cells.  Currently they are byte[] array backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2015-07-20 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633792#comment-14633792
 ] 

Dave Latham commented on HBASE-13788:
-

Since all column qualifiers are legal for data, it feels like using in-band 
signaling for custom formatters is a mistake.  What about passing formatting 
directives as a separate argument?

 Shell commands do not support column qualifiers containing colon (:)
 ---

 Key: HBASE-13788
 URL: https://issues.apache.org/jira/browse/HBASE-13788
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
Reporter: Dave Latham
Assignee: Pankaj Kumar

 The shell interprets the colon within the qualifier as a delimiter to a 
 FORMATTER instead of part of the qualifier itself.
 Example from the mailing list:
 Hmph, I may have spoken too soon. I know I tested this at one point and
 it worked, but now I'm getting different results:
 On the new cluster, I created a duplicate test table:
 hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}
 Then I pull some data from the imported table:
 hbase(main):045:0> scan 'content', {LIMIT => 1,
 STARTROW => 'A:9223370612089311807:twtr:57013379'}
 ROW  COLUMN+CELL
 
 A:9223370612089311807:twtr:570133798827921408
 column=x:twitter:username, timestamp=1424775595345, value=BERITA 
 INFORMASI!
 Then put it:
 hbase(main):046:0> put
 'content3','A:9223370612089311807:twtr:570133798827921408',
 'x:twitter:username', 'BERITA  INFORMASI!'
 But then when I query it, I see that I've lost the column qualifier
 :username:
 hbase(main):046:0> scan 'content3'
 ROW  COLUMN+CELL
  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
  timestamp=1432745301788, value=BERITA  INFORMASI!
 Even though I'm missing one of the qualifiers, I can at least filter on
 columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14122) Client API for determining if server side supports cell level security

2015-07-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633782#comment-14633782
 ] 

Andrew Purtell commented on HBASE-14122:


I have a patch about done. Maybe next time (smile) 

 Client API for determining if server side supports cell level security
 --

 Key: HBASE-14122
 URL: https://issues.apache.org/jira/browse/HBASE-14122
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.3.0


 Add a client API for determining if the server side supports cell level 
 security. 
 Ask the master, assuming as we do in many other instances that the master and 
 regionservers all have a consistent view of site configuration.
 Return {{true}} if all features required for cell level security are present, 
 {{false}} otherwise, or throw {{UnsupportedOperationException}} if the master 
 does not have support for the RPC call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13212) Procedure V2 - master Create/Modify/Delete namespace

2015-07-20 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-13212:
---
Attachment: HBASE-13212.v0-master.patch

 Procedure V2 - master Create/Modify/Delete namespace
 

 Key: HBASE-13212
 URL: https://issues.apache.org/jira/browse/HBASE-13212
 Project: HBase
  Issue Type: Sub-task
  Components: master
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
  Labels: reliability
 Attachments: HBASE-13212.v0-master.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 master side, part of HBASE-12439
 starts up the procedure executor on the master
 and replaces the create/modify/delete namespace handlers with the procedure 
 version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12853:

 Priority: Major  (was: Minor)
Fix Version/s: 2.0.0

This issue has been unassigned, had no fix version targeted, and was listed at 
Minor priority. I don't find it surprising that there were no updates.

Personally, I think this will make a great feature addition that will help us 
tackle more workloads. Accordingly, I've changed it to Major and set a goal of 
2.0.

Please do not downplay the effort needed by whomever ends up implementing it by 
claiming it is trivial. The ASF is a 
[do-ocracy|http://www.apache.org/foundation/how-it-works.html#decision-making]; 
while all contributions are valuable, please don't criticize the prioritization 
of other volunteers when you have not prioritized the feature yourself.

 distributed write pattern to replace ad hoc 'salting'
 -

 Key: HBASE-12853
 URL: https://issues.apache.org/jira/browse/HBASE-12853
 Project: HBase
  Issue Type: New Feature
Reporter: Michael Segel 
 Fix For: 2.0.0


 In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
 that while 'salting' alleviated  regional hot spotting, it increased the 
 complexity required to utilize the data.  
 Through the use of coprocessors, it should be possible to offer a method 
 which distributes the data on write across the cluster and then manages 
 reading the data returning a sort ordered result set, abstracting the 
 underlying process. 
 On table creation, a flag is set to indicate that this is a parallel table. 
 On insert into the table, if the flag is set to true then a prefix is added 
 to the key, e.g. <region server #>- or <region server #>|| where the region 
 server # is an integer between 1 and the number of region servers defined.  
 On read (scan), for each region server defined a separate scan is created 
 adding the prefix. Since each scan will be in sort order, it's possible to 
 strip the prefix and return the lowest-value key from each of the subsets. 
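
 To make the pattern concrete, a minimal client-side sketch of the two halves described 
 above (illustrative only, not the proposed coprocessor; NUM_BUCKETS standing in for the 
 number of region servers and a hash of the row standing in for the region-server-number 
 assignment are both assumptions):
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;

 final class ParallelTableSketch {
   static final int NUM_BUCKETS = 8;   // assumed: one bucket per region server

   // On write: spread rows across buckets by prefixing the key.
   static Put prefixedPut(byte[] row, byte[] family, byte[] qualifier, byte[] value) {
     int bucket = (Bytes.hashCode(row) & Integer.MAX_VALUE) % NUM_BUCKETS;
     byte[] salted = Bytes.add(Bytes.toBytes(bucket + "-"), row);
     return new Put(salted).addColumn(family, qualifier, value);
   }

   // On read: one scan per bucket; the caller merges the already-sorted results
   // and strips the prefix from each returned row.
   static List<Scan> prefixedScans(byte[] startRow, byte[] stopRow) {
     List<Scan> scans = new ArrayList<Scan>();
     for (int bucket = 0; bucket < NUM_BUCKETS; bucket++) {
       byte[] prefix = Bytes.toBytes(bucket + "-");
       Scan scan = new Scan();
       scan.setStartRow(Bytes.add(prefix, startRow));
       scan.setStopRow(Bytes.add(prefix, stopRow));
       scans.add(scan);
     }
     return scans;
   }
 }
 {code}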



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633796#comment-14633796
 ] 

Ted Yu commented on HBASE-13992:


[~ted.m]:
The patch ending in .5 was not picked up by the QA bot.
Name your patch with a .patch or .txt extension.

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14098) Allow dropping caches behind compactions

2015-07-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634062#comment-14634062
 ] 

Elliott Clark edited comment on HBASE-14098 at 7/20/15 8:52 PM:


Patch with test fixes for stripe compaction.

I put it up for review here: https://reviews.facebook.net/D42681


was (Author: eclark):
Patch with test fixes for strip compaction.

I put it up for review here: https://reviews.facebook.net/D42681

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14098) Allow dropping caches behind compactions

2015-07-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14098:
--
Attachment: HBASE-14098-v3.patch

Patch with test fixes for strip compaction.

I put it up for review here: https://reviews.facebook.net/D42681

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-07-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12751:
--
Attachment: HBASE-12751-v13.patch

 Allow RowLock to be reader writer
 -

 Key: HBASE-12751
 URL: https://issues.apache.org/jira/browse/HBASE-12751
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
 HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
 HBASE-12751-v13.patch, HBASE-12751-v2.patch, HBASE-12751-v3.patch, 
 HBASE-12751-v4.patch, HBASE-12751-v5.patch, HBASE-12751-v6.patch, 
 HBASE-12751-v7.patch, HBASE-12751-v8.patch, HBASE-12751-v9.patch, 
 HBASE-12751.patch


 Right now every write operation grabs a row lock. This is to prevent values 
 from changing during a read modify write operation (increment or check and 
 put). However it limits parallelism in several different scenarios.
 If there are several puts to the same row but different columns or stores 
 then this is very limiting.
 If there are puts to the same column then mvcc number should ensure a 
 consistent ordering. So locking is not needed.
 However locking for check and put or increment is still needed.
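
 A minimal sketch of the read/write row-lock idea (illustrative only, not the attached 
 patch): plain puts to a row share the lock so they can proceed in parallel, while 
 read-modify-write operations (increment, checkAndPut) take the exclusive lock for that row.
 {code}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;

 final class RowLockSketch {
   private final ConcurrentHashMap<String, ReadWriteLock> locks =
       new ConcurrentHashMap<String, ReadWriteLock>();

   private ReadWriteLock lockFor(String rowKey) {
     ReadWriteLock lock = locks.get(rowKey);
     if (lock == null) {
       ReadWriteLock created = new ReentrantReadWriteLock();
       ReadWriteLock raced = locks.putIfAbsent(rowKey, created);
       lock = (raced != null) ? raced : created;
     }
     return lock;
   }

   // Plain put: shared ("read") side, so independent puts to the same row don't serialize.
   void doPut(String rowKey, Runnable applyMutation) {
     ReadWriteLock lock = lockFor(rowKey);
     lock.readLock().lock();
     try {
       applyMutation.run();   // MVCC still orders the individual mutations
     } finally {
       lock.readLock().unlock();
     }
   }

   // Increment / checkAndPut: exclusive ("write") side, since it must read-modify-write atomically.
   void doReadModifyWrite(String rowKey, Runnable readModifyWrite) {
     ReadWriteLock lock = lockFor(rowKey);
     lock.writeLock().lock();
     try {
       readModifyWrite.run();
     } finally {
       lock.writeLock().unlock();
     }
   }
 }
 {code}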



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634141#comment-14634141
 ] 

Hadoop QA commented on HBASE-13992:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746162/HBASE-13992.5.patch
  against master branch at commit 0f614a1c44e1887ca7177b66bb6208b6e69db7e1.
  ATTACHMENT ID: 12746162

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:red}-1 javac{color}.  The applied patch generated 49 javac compiler 
warnings (more than the master's current 24 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1878 checkstyle errors (more than the master's current 1871 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ xsi:schemaLocation=http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;
+new SparkConf().setAppName(JavaHBaseStreamingBulkPutExample  + 
tableName + : + port + : + tableName);
+configBroadcast: 
Broadcast[SerializableWritable[Configuration]],
+  private def getConf(configBroadcast: 
Broadcast[SerializableWritable[Configuration]]): Configuration = {
+   configBroadcast: 
Broadcast[SerializableWritable[Configuration]],
+ * or security issues. For instance, an Array[AnyRef] can hold any type T, 
but may lose primitive
+val sparkConf = new SparkConf().setAppName(HBaseBulkPutTimestampExample  
+ tableName +   + columnFamily)
+(Bytes.toBytes(6), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(1,
+(Bytes.toBytes(7), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(2,
+(Bytes.toBytes(8), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(3,

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14840//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14840//artifact/patchprocess/patchReleaseAuditWarnings.txt
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14840//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14840//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14840//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14840//console

This message is automatically generated.

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.5.patch, HBASE-13992.patch, 
 HBASE-13992.patch.3, HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-07-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634150#comment-14634150
 ] 

Lars Hofhansl commented on HBASE-12751:
---

What if we write without WAL? Does that matter?

 Allow RowLock to be reader writer
 -

 Key: HBASE-12751
 URL: https://issues.apache.org/jira/browse/HBASE-12751
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
 HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
 HBASE-12751-v13.patch, HBASE-12751-v2.patch, HBASE-12751-v3.patch, 
 HBASE-12751-v4.patch, HBASE-12751-v5.patch, HBASE-12751-v6.patch, 
 HBASE-12751-v7.patch, HBASE-12751-v8.patch, HBASE-12751-v9.patch, 
 HBASE-12751.patch


 Right now every write operation grabs a row lock. This is to prevent values 
 from changing during a read modify write operation (increment or check and 
 put). However it limits parallelism in several different scenarios.
 If there are several puts to the same row but different columns or stores 
 then this is very limiting.
 If there are puts to the same column then mvcc number should ensure a 
 consistent ordering. So locking is not needed.
 However locking for check and put or increment is still needed.
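For illustration, here is a minimal, self-contained sketch of the reader/writer row-lock idea described above: plain puts share the row lock (relying on mvcc/sequence ids for ordering), while increment and check-and-put take it exclusively. The class and method names are illustrative only and are not taken from the attached patches.
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RowReadWriteLocks {
  private final ConcurrentHashMap<String, ReadWriteLock> locks = new ConcurrentHashMap<>();

  private ReadWriteLock lockFor(byte[] row) {
    // A real implementation would key on the row bytes (or a hash stripe); a String
    // key is used here only to keep the sketch short.
    return locks.computeIfAbsent(new String(row), k -> new ReentrantReadWriteLock());
  }

  /** Plain put: shared lock, so several writers to the same row can proceed in parallel. */
  public void put(byte[] row, Runnable applyMutation) {
    ReadWriteLock rwl = lockFor(row);
    rwl.readLock().lock();
    try {
      applyMutation.run();
    } finally {
      rwl.readLock().unlock();
    }
  }

  /** Read-modify-write (increment / check-and-put): exclusive lock on the row. */
  public void readModifyWrite(byte[] row, Runnable readThenMutate) {
    ReadWriteLock rwl = lockFor(row);
    rwl.writeLock().lock();
    try {
      readThenMutate.run();
    } finally {
      rwl.writeLock().unlock();
    }
  }

  public static void main(String[] args) throws InterruptedException {
    RowReadWriteLocks locks = new RowReadWriteLocks();
    byte[] row = "row1".getBytes();
    Thread put = new Thread(() -> locks.put(row, () -> System.out.println("put under shared lock")));
    Thread rmw = new Thread(() -> locks.readModifyWrite(row,
        () -> System.out.println("increment under exclusive lock")));
    put.start(); rmw.start();
    put.join(); rmw.join();
  }
}
{code}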



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-07-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634153#comment-14634153
 ] 

Elliott Clark commented on HBASE-12751:
---

An edit that's skipping the WAL will still queue an empty edit to the WAL, just 
not wait on it to append or sync. (That's the same as what's there now, just in 
a different order.)

 Allow RowLock to be reader writer
 -

 Key: HBASE-12751
 URL: https://issues.apache.org/jira/browse/HBASE-12751
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
 HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
 HBASE-12751-v13.patch, HBASE-12751-v2.patch, HBASE-12751-v3.patch, 
 HBASE-12751-v4.patch, HBASE-12751-v5.patch, HBASE-12751-v6.patch, 
 HBASE-12751-v7.patch, HBASE-12751-v8.patch, HBASE-12751-v9.patch, 
 HBASE-12751.patch


 Right now every write operation grabs a row lock. This is to prevent values 
 from changing during a read modify write operation (increment or check and 
 put). However it limits parallelism in several different scenarios.
 If there are several puts to the same row but different columns or stores 
 then this is very limiting.
 If there are puts to the same column then mvcc number should ensure a 
 consistent ordering. So locking is not needed.
 However locking for check and put or increment is still needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14098) Allow dropping caches behind compactions

2015-07-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14098:
--
Attachment: HBASE-14098-v4.patch

Style fixes

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11276) Add back support for running ChaosMonkey as standalone tool

2015-07-20 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634077#comment-14634077
 ] 

Enis Soztutar commented on HBASE-11276:
---

bq. The table/cf is truly optional, but there would be a default value for them 
in the tool
Not necessarily. Not all monkeys do table actions. For example 
{{serverKillingMonkey}}. That is why having the table optional is important. In 
case a monkey does not use a table, it is fine to pass null or some non-existing 
table. 

bq. IMHO, if we require user to make sure the testing table/cf already exists 
before running monkeys, then table/cf shouldn't be optional, or if we keep it 
as is, we will need the createSchema method to create the table when it doesn't 
exist.
What is the point of creating an empty table and doing operations on it from 
CM? The CM runner should just assume the table will be created by the 
concurrently running test. It is fine if the table does not exist, since CM will 
fail the table action, but it will not affect correctness. 

 Add back support for running ChaosMonkey as standalone tool
 ---

 Key: HBASE-11276
 URL: https://issues.apache.org/jira/browse/HBASE-11276
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.96.0, 0.99.0
Reporter: Dima Spivak
Assignee: Yu Li
Priority: Minor
 Attachments: HBASE-11276.patch, HBASE-11276_v2.patch, 
 HBASE-11276_v3.patch


 [According to the ref 
 guide|http://hbase.apache.org/book/hbase.tests.html#integration.tests], it 
 was once possible to run ChaosMonkey as a standalone tool against a deployed 
 cluster. After 0.94, this is no longer possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14125) HBase Backup/Restore Phase 2: Cancel backup

2015-07-20 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-14125:
-

 Summary: HBase Backup/Restore Phase 2: Cancel backup
 Key: HBASE-14125
 URL: https://issues.apache.org/jira/browse/HBASE-14125
 Project: HBase
  Issue Type: New Feature
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


Cancel backup operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2015-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634145#comment-14634145
 ] 

Hudson commented on HBASE-12374:


FAILURE: Integrated in HBase-TRUNK # (See 
[https://builds.apache.org/job/HBase-TRUNK//])
HBASE-12374 Change DBEs to work with new BB based cell. (anoopsamjohn: rev 
0f614a1c44e1887ca7177b66bb6208b6e69db7e1)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/ByteBufferOutputStream.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeSeeker.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/OffheapKeyOnlyKeyValue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/StreamUtils.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestTagCompressionContext.java


 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-12374_v1.patch, HBASE-12374_v2.patch, 
 HBASE-12374_v3.patch


 Once we are changing the read path to use BB based cell then the DBEs should 
 also return BB based cells.  Currently they are byte[] array backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2015-07-20 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634067#comment-14634067
 ] 

Enis Soztutar commented on HBASE-14085:
---

bq. If anyone would find it useful for us to be able to do source-only releases 
before Tuesday I can rearrange what I have to get the incremental improvement 
for the source artifact into master.
We have been doing source + binary releases for convenience. For 1.0.2RC1, I 
think we can wait. 

 Correct LICENSE and NOTICE files in artifacts
 -

 Key: HBASE-14085
 URL: https://issues.apache.org/jira/browse/HBASE-14085
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker

 +Problems:
 * checked LICENSE/NOTICE on binary
 ** binary artifact LICENSE file has not been updated to include the 
 additional license terms for contained third party dependencies
 ** binary artifact NOTICE file does not include a copyright line
 ** binary artifact NOTICE file does not appear to propagate appropriate info 
 from the NOTICE files from bundled dependencies
 * checked NOTICE on source
 ** source artifact NOTICE file does not include a copyright line
 ** source NOTICE file includes notices for third party dependencies not 
 included in the artifact
 * checked NOTICE files shipped in maven jars
 ** copyright line only says 2015 when it's very likely the contents are under 
 copyright prior to this year
 * nit: NOTICE file on jars in maven say HBase - ${module} rather than 
 Apache HBase - ${module} as required 
 refs:
 http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
 http://www.apache.org/dev/licensing-howto.html#binary
 http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-07-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634072#comment-14634072
 ] 

Elliott Clark commented on HBASE-12751:
---

Attaching the rebased patch.

bq. How is order in memstore for sure matching the append order to WAL now?
All edits are queued in the WAL before they are put in the memstore. After they 
are queued, the first thing that's done is to assign a sequence number. The 
insert into the memstore waits on that number to be added to the WAL edit.
Once that number is available it's used by the memstore for ordering; our 
comparator on the map does that.
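For readers following the thread, below is a simplified, self-contained model of the sequencing described above (assumed behavior only, not the patch's actual classes): every edit, including one that skips the WAL, is queued to obtain a sequence id, and the memstore orders entries by that id rather than by arrival time.
{code}
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

public class WalOrderingModel {
  static final class Edit {
    final String row;
    final String value;
    final boolean skipWal;   // a skip-WAL edit is still queued; it just does not wait for sync
    long seqId = -1;
    Edit(String row, String value, boolean skipWal) {
      this.row = row;
      this.value = value;
      this.skipWal = skipWal;
    }
  }

  private final AtomicLong nextSeqId = new AtomicLong(0);
  // "Memstore" keyed so that, within a row, the newest sequence id sorts first,
  // mimicking how the real memstore uses the seqid assigned at WAL-queue time.
  private final ConcurrentSkipListMap<String, String> memstore = new ConcurrentSkipListMap<>();

  public void write(Edit edit) {
    // 1. Queue to the WAL (even for skip-WAL edits) and assign the sequence number.
    edit.seqId = nextSeqId.incrementAndGet();
    // 2. Only durable edits wait for the append/sync to complete.
    if (!edit.skipWal) {
      waitForSync(edit.seqId);
    }
    // 3. Insert into the memstore; ordering comes from the sequence id, not arrival time.
    String key = edit.row + "/" + String.format("%019d", Long.MAX_VALUE - edit.seqId);
    memstore.put(key, edit.value);
  }

  private void waitForSync(long seqId) {
    // Placeholder: a real WAL blocks here until the edit is durable on disk.
  }

  public static void main(String[] args) {
    WalOrderingModel model = new WalOrderingModel();
    model.write(new Edit("row1", "v1", false));
    model.write(new Edit("row1", "v2", true));   // skip-WAL, but still sequenced after v1
    System.out.println(model.memstore.firstEntry().getValue());  // prints v2 (newest wins)
  }
}
{code}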

 Allow RowLock to be reader writer
 -

 Key: HBASE-12751
 URL: https://issues.apache.org/jira/browse/HBASE-12751
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
 HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
 HBASE-12751-v13.patch, HBASE-12751-v2.patch, HBASE-12751-v3.patch, 
 HBASE-12751-v4.patch, HBASE-12751-v5.patch, HBASE-12751-v6.patch, 
 HBASE-12751-v7.patch, HBASE-12751-v8.patch, HBASE-12751-v9.patch, 
 HBASE-12751.patch


 Right now every write operation grabs a row lock. This is to prevent values 
 from changing during a read modify write operation (increment or check and 
 put). However it limits parallelism in several different scenarios.
 If there are several puts to the same row but different columns or stores 
 then this is very limiting.
 If there are puts to the same column then mvcc number should ensure a 
 consistent ordering. So locking is not needed.
 However locking for check and put or increment is still needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14098) Allow dropping caches behind compactions

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633970#comment-14633970
 ] 

Hadoop QA commented on HBASE-14098:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746144/HBASE-14098-v2.patch
  against master branch at commit 0f614a1c44e1887ca7177b66bb6208b6e69db7e1.
  ATTACHMENT ID: 12746144

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1880 checkstyle errors (more than the master's current 1871 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  .getScannersForStoreFiles(storeFilesToScan, cacheBlocks, usePread, 
isCompaction, false, matcher,
+  in = new FSDataInputStreamWrapper(fs, this.link, canUseDropBehind && 
cacheConf.shouldDropBehindCompaction());
+  in = new FSDataInputStreamWrapper(fs, referencePath, canUseDropBehind && 
cacheConf.shouldDropBehindCompaction());
+  in = new FSDataInputStreamWrapper(fs, this.getPath(), canUseDropBehind 
&& cacheConf.shouldDropBehindCompaction());
+  scanners = createFileScanners(readersToClose, smallestReadPoint, 
store.throttleCompaction(request.getSize()));
+  scanners = createFileScanners(request.getFiles(), smallestReadPoint, 
store.throttleCompaction(request.getSize()));
+  fd.maxMVCCReadpoint > 0, fd.maxTagsLength > 0, 
store.throttleCompaction(request.getSize()));

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.compactions.TestStripeCompactionPolicy
  org.apache.hadoop.hbase.regionserver.TestStripeCompactor

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14839//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14839//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14839//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14839//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14839//console

This message is automatically generated.

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-13992:

Attachment: HBASE-13992.5.patch

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.5.patch, HBASE-13992.patch, 
 HBASE-13992.patch.3, HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633978#comment-14633978
 ] 

Lars Hofhansl commented on HBASE-14099:
---

Are you sure this is 100% correct? First/LastKeyValue also have the proper 
timestamp and code to sort before/after _any_ KeyValue. Just comparing the row 
key does not do the same. It might be correct in these cases... But are you 
_absolutely_ sure?

What if the first Cell in a StoreFile is a delete marker?


 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, HBASE-14099_4.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed() ? KeyValueUtil
   .createFirstOnRow(scan.getStopRow()) : 
 KeyValueUtil.createFirstOnRow(scan
   .getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed() ? KeyValueUtil
   .createLastOnRow(scan.getStartRow()) : 
 KeyValueUtil.createLastOnRow(scan
   .getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.
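For illustration, here is a simplified, self-contained sketch of the row-only overlap check the description proposes, assuming unsigned lexicographic row comparison. It is not the actual StoreFile code and, per the question above, it deliberately ignores timestamps, types, and delete markers, and treats the stop row as inclusive.
{code}
// Simplified sketch: decide whether a scan's [startRow, stopRow] range can overlap a
// store file's [firstRow, lastRow] range by comparing row bytes directly, instead of
// materializing first-on-row / last-on-row KeyValues. Timestamps, types and delete
// markers are ignored, and the stop row is treated as inclusive (a real scan's stop
// row is exclusive), so this is only a model of the idea.
public class RowRangeCheck {

  // Unsigned lexicographic comparison of row bytes.
  static int compareRows(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int cmp = (a[i] & 0xff) - (b[i] & 0xff);
      if (cmp != 0) {
        return cmp;
      }
    }
    return a.length - b.length;
  }

  static boolean passesRowRangeFilter(byte[] scanStartRow, byte[] scanStopRow,
                                      byte[] fileFirstRow, byte[] fileLastRow) {
    boolean openStart = scanStartRow.length == 0;  // empty start row scans from the beginning
    boolean openEnd = scanStopRow.length == 0;     // empty stop row scans to the end
    boolean startsBeforeFileEnds = openStart || compareRows(scanStartRow, fileLastRow) <= 0;
    boolean stopsAfterFileStarts = openEnd || compareRows(scanStopRow, fileFirstRow) >= 0;
    return startsBeforeFileEnds && stopsAfterFileStarts;
  }

  public static void main(String[] args) {
    byte[] first = "row-b".getBytes();
    byte[] last = "row-m".getBytes();
    System.out.println(passesRowRangeFilter("row-a".getBytes(), "row-c".getBytes(), first, last)); // true
    System.out.println(passesRowRangeFilter("row-x".getBytes(), "row-z".getBytes(), first, last)); // false
  }
}
{code}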



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633980#comment-14633980
 ] 

Ted Malaska commented on HBASE-13992:
-

[~ted_yu] I uploaded a new patch.  No code changes.  Just a new name.



 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.5.patch, HBASE-13992.patch, 
 HBASE-13992.patch.3, HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633990#comment-14633990
 ] 

Hudson commented on HBASE-14099:


FAILURE: Integrated in HBase-TRUNK #6665 (See 
[https://builds.apache.org/job/HBase-TRUNK/6665/])
HBASE-14099 StoreFile.passesKeyRangeFilter need not create Cells from the 
(ramkrishna: rev 7e4cd5982071540fdb75698a0278aac391ccc540)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java


 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, HBASE-14099_4.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed() ? KeyValueUtil
   .createFirstOnRow(scan.getStopRow()) : 
 KeyValueUtil.createFirstOnRow(scan
   .getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed() ? KeyValueUtil
   .createLastOnRow(scan.getStartRow()) : 
 KeyValueUtil.createLastOnRow(scan
   .getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14124) Failed backup is not handled properly in incremental mode

2015-07-20 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-14124:
-

 Summary: Failed backup is not handled properly in incremental mode
 Key: HBASE-14124
 URL: https://issues.apache.org/jira/browse/HBASE-14124
 Project: HBase
  Issue Type: Bug
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


BackupHandler failedBackup method does not clean failed incremental backup 
artefacts on HDFS (and in HBase).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2015-07-20 Thread Ben Shuai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633957#comment-14633957
 ] 

Ben Shuai commented on HBASE-13788:
---

By the way, what Dave suggested is the same thing as what I suggested, except 
that it comes from hbase-site.xml (or one can use the java -D option) rather than 
from the command line of get or scan inside the hbase shell, if I did not 
misunderstand what Dave suggested.

 Shell commands do not support column qualifiers containing colon (:)
 ---

 Key: HBASE-13788
 URL: https://issues.apache.org/jira/browse/HBASE-13788
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
Reporter: Dave Latham
Assignee: Pankaj Kumar

 The shell interprets the colon within the qualifier as a delimiter to a 
 FORMATTER instead of part of the qualifier itself.
 Example from the mailing list:
 Hmph, I may have spoken too soon. I know I tested this at one point and
 it worked, but now I'm getting different results:
 On the new cluster, I created a duplicate test table:
 hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}
 Then I pull some data from the imported table:
 hbase(main):045:0> scan 'content', {LIMIT=>1,
 STARTROW=>'A:9223370612089311807:twtr:57013379'}
 ROW  COLUMN+CELL
 
 A:9223370612089311807:twtr:570133798827921408
 column=x:twitter:username, timestamp=1424775595345, value=BERITA &
 INFORMASI!
 Then put it:
 hbase(main):046:0> put
 'content3','A:9223370612089311807:twtr:570133798827921408',
 'x:twitter:username', 'BERITA & INFORMASI!'
 But then when I query it, I see that I've lost the column qualifier
 :username:
 hbase(main):046:0> scan 'content3'
 ROW  COLUMN+CELL
  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
  timestamp=1432745301788, value=BERITA & INFORMASI!
 Even though I'm missing one of the qualifiers, I can at least filter on
 columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2015-07-20 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633993#comment-14633993
 ] 

Dave Latham commented on HBASE-13788:
-

Not sure it's quite the same.  My suggestion is to not have a delimiter at all.  
My ruby syntax isn't good, but something like:
{noformat}
hbase> scan 't1', {COLUMNS => ['cf:qualifier1', 'cf:qualifier2', 
'cf:qualifier3'], FORMATTERS => ['cf:qualifier2' => 'toInt', 'cf:qualifier3' => 
'toString']}
{noformat}

Ben's suggestion sounds reasonable to me as well, so long as the default 
switches from ':' to none (presumably in 2.0 as it would be incompatible 
otherwise).

 Shell commands do not support column qualifiers containing colon (:)
 ---

 Key: HBASE-13788
 URL: https://issues.apache.org/jira/browse/HBASE-13788
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
Reporter: Dave Latham
Assignee: Pankaj Kumar

 The shell interprets the colon within the qualifier as a delimiter to a 
 FORMATTER instead of part of the qualifier itself.
 Example from the mailing list:
 Hmph, I may have spoken too soon. I know I tested this at one point and
 it worked, but now I'm getting different results:
 On the new cluster, I created a duplicate test table:
 hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}
 Then I pull some data from the imported table:
 hbase(main):045:0> scan 'content', {LIMIT=>1,
 STARTROW=>'A:9223370612089311807:twtr:57013379'}
 ROW  COLUMN+CELL
 
 A:9223370612089311807:twtr:570133798827921408
 column=x:twitter:username, timestamp=1424775595345, value=BERITA &
 INFORMASI!
 Then put it:
 hbase(main):046:0> put
 'content3','A:9223370612089311807:twtr:570133798827921408',
 'x:twitter:username', 'BERITA & INFORMASI!'
 But then when I query it, I see that I've lost the column qualifier
 :username:
 hbase(main):046:0> scan 'content3'
 ROW  COLUMN+CELL
  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
  timestamp=1432745301788, value=BERITA & INFORMASI!
 Even though I'm missing one of the qualifiers, I can at least filter on
 columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633977#comment-14633977
 ] 

Ted Malaska commented on HBASE-13992:
-

[~lhofhansl] So maybe I read your comment wrong.

Patch 5 returns RDD[(ImmutableBytesWritable, Result)] from hbaseBulkGet

What would you like it to return?  Maybe I read it wrong but doesn't 
TableInputFormat return (ImmutableBytesWritable, Result)?

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.patch, HBASE-13992.patch.3, 
 HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633981#comment-14633981
 ] 

Lars Hofhansl commented on HBASE-13992:
---

No, that's right. Hmm. Why does HBaseRDD.collect() return something different? 
(I think that part confused me)

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.5.patch, HBASE-13992.patch, 
 HBASE-13992.patch.3, HBASE-13992.patch.4, HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate and let me know what I can do to make this happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14123) HBase Backup/Restore Phase 2

2015-07-20 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-14123:
-

 Summary: HBase Backup/Restore Phase 2
 Key: HBASE-14123
 URL: https://issues.apache.org/jira/browse/HBASE-14123
 Project: HBase
  Issue Type: Umbrella
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14099) StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start and stop Row

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633684#comment-14633684
 ] 

Hadoop QA commented on HBASE-14099:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746100/HBASE-14099_3.patch
  against master branch at commit 88038cf473a17a6b059902e080f1bc10d338b7c9.
  ATTACHMENT ID: 12746100

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFastFail

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14837//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14837//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14837//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14837//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14837//console

This message is automatically generated.

 StoreFile.passesKeyRangeFilter need not create Cells from the Scan's start 
 and stop Row
 ---

 Key: HBASE-14099
 URL: https://issues.apache.org/jira/browse/HBASE-14099
 Project: HBase
  Issue Type: Improvement
  Components: Performance, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-14099.patch, HBASE-14099_1.patch, 
 HBASE-14099_2.patch, HBASE-14099_3.patch, storefile.png


 During profiling saw that the code here in passesKeyRangeFilter in Storefile
 {code}
   KeyValue smallestScanKeyValue = scan.isReversed() ? KeyValueUtil
   .createFirstOnRow(scan.getStopRow()) : 
 KeyValueUtil.createFirstOnRow(scan
   .getStartRow());
   KeyValue largestScanKeyValue = scan.isReversed() ? KeyValueUtil
   .createLastOnRow(scan.getStartRow()) : 
 KeyValueUtil.createLastOnRow(scan
   .getStopRow());
 {code}
 This row need not be copied now considering that we have 
 CellComparator.compareRows(Cell, byte[]). 
 We have already refactored the firstKeyKv and lastKeyKV as part of other 
 JIRAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-20 Thread Michael Segel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14633793#comment-14633793
 ] 

Michael Segel  commented on HBASE-12853:


Nothing new? 

Seriously?

This is a trivial feature.


 distributed write pattern to replace ad hoc 'salting'
 -

 Key: HBASE-12853
 URL: https://issues.apache.org/jira/browse/HBASE-12853
 Project: HBase
  Issue Type: New Feature
Reporter: Michael Segel 
Priority: Minor

 In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
 that while 'salting' alleviated regional hot spotting, it increased the 
 complexity required to utilize the data.  
 Through the use of coprocessors, it should be possible to offer a method 
 which distributes the data on write across the cluster and then manages 
 reading the data, returning a sort-ordered result set and abstracting the 
 underlying process. 
 On table creation, a flag is set to indicate that this is a parallel table. 
 On insert into the table, if the flag is set to true then a prefix is added 
 to the key, e.g. region server #- or region server #||, where the region 
 server # is an integer between 1 and the number of region servers defined.  
 On read (scan), for each region server defined, a separate scan is created 
 adding the prefix. Since each scan will be in sort order, it's possible to 
 strip the prefix and return the lowest value key from each of the subsets. 
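For illustration, a minimal, self-contained sketch of the pattern outlined above: writes are spread by prepending one of N prefixes to the key, and a read issues one sorted scan per prefix, strips the prefixes, and merges by repeatedly taking the lowest remaining key. The class and method names are illustrative only, not a proposed API.
{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.TreeMap;

// Illustrative sketch: N sorted maps stand in for the N per-prefix key ranges that
// round-robin writes produce; reading issues one sorted "scan" per prefix, strips the
// prefix, and merges by always taking the lowest current key.
public class PrefixedTableSketch {

  /** One bucket's sorted scan, tracking its current prefix-stripped key. */
  private static final class Cursor {
    final Iterator<String> it;
    String currentKey;
    Cursor(Iterator<String> it) { this.it = it; advance(); }
    void advance() {
      currentKey = it.hasNext() ? stripPrefix(it.next()) : null;
    }
  }

  private final List<TreeMap<String, String>> buckets = new ArrayList<>();
  private int nextBucket = 0;

  public PrefixedTableSketch(int numBuckets) {
    for (int i = 0; i < numBuckets; i++) {
      buckets.add(new TreeMap<>());
    }
  }

  /** Write: spread rows across buckets by prepending a bucket prefix to the key. */
  public void put(String rowKey, String value) {
    int b = nextBucket++ % buckets.size();
    buckets.get(b).put(b + "-" + rowKey, value);
  }

  // Assumes the bucket prefix ends at the first '-'; real keys would need a safer scheme.
  private static String stripPrefix(String prefixedKey) {
    return prefixedKey.substring(prefixedKey.indexOf('-') + 1);
  }

  /** Read: one sorted scan per bucket, merged by repeatedly taking the lowest current key. */
  public List<String> scanAll() {
    PriorityQueue<Cursor> heap =
        new PriorityQueue<>(Comparator.comparing((Cursor c) -> c.currentKey));
    for (TreeMap<String, String> bucket : buckets) {
      Cursor c = new Cursor(bucket.keySet().iterator());
      if (c.currentKey != null) {
        heap.add(c);
      }
    }
    List<String> ordered = new ArrayList<>();
    while (!heap.isEmpty()) {
      Cursor c = heap.poll();
      ordered.add(c.currentKey);
      c.advance();
      if (c.currentKey != null) {
        heap.add(c);
      }
    }
    return ordered;
  }

  public static void main(String[] args) {
    PrefixedTableSketch table = new PrefixedTableSketch(3);
    for (String k : new String[] {"e", "a", "d", "b", "c"}) {
      table.put(k, "v-" + k);
    }
    System.out.println(table.scanAll()); // [a, b, c, d, e]
  }
}
{code}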



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14098) Allow dropping caches behind compactions

2015-07-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14098:
--
Attachment: HBASE-14098-v5.patch

 Allow dropping caches behind compactions
 

 Key: HBASE-14098
 URL: https://issues.apache.org/jira/browse/HBASE-14098
 Project: HBase
  Issue Type: Bug
  Components: Compaction, hadoop2, HFile
Affects Versions: 2.0.0, 1.3.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
 HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
 HBASE-14098.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13992) Integrate SparkOnHBase into HBase

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634388#comment-14634388
 ] 

Hadoop QA commented on HBASE-13992:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12746201/HBASE-13992.6.patch
  against master branch at commit 7ce318dd3be9df0ee1c025b4792ded0161aa2c9c.
  ATTACHMENT ID: 12746201

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:red}-1 javac{color}.  The applied patch generated 44 javac compiler 
warnings (more than the master's current 24 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ * or security issues. For instance, an Array[AnyRef] can hold any 
type T, but may lose primitive
+  putRecord._2.foreach((putValue) => put.addColumn(putValue._1, 
putValue._2, timeStamp, putValue._3))
+ val sparkConf = new SparkConf().setAppName("HBaseBulkPutExample " + 
tableName + " " + columnFamily)
+ (Bytes.toBytes(1), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(1,
+ (Bytes.toBytes(2), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(2,
+ (Bytes.toBytes(3), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(3,
+ (Bytes.toBytes(4), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(4,
+ (Bytes.toBytes(5), Array((Bytes.toBytes(columnFamily), 
Bytes.toBytes(1), Bytes.toBytes(5

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestImportTSVWithOperationAttributes
  org.apache.hadoop.hbase.mapreduce.TestRowCounter

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT.testMultipleTimestampsInSingleDelete(EndToEndCoveredIndexingIT.java:428)
at 
org.apache.phoenix.hbase.index.balancer.IndexLoadBalancerIT.testRandomAssignmentDuringIndexTableEnable(IndexLoadBalancerIT.java:265)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14845//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14845//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14845//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14845//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14845//console

This message is automatically generated.

 Integrate SparkOnHBase into HBase
 -

 Key: HBASE-13992
 URL: https://issues.apache.org/jira/browse/HBASE-13992
 Project: HBase
  Issue Type: New Feature
  Components: spark
Reporter: Ted Malaska
Assignee: Ted Malaska
 Fix For: 2.0.0

 Attachments: HBASE-13992.5.patch, HBASE-13992.6.patch, 
 HBASE-13992.patch, HBASE-13992.patch.3, HBASE-13992.patch.4, 
 HBASE-13992.patch.5


 This Jira is to ask if SparkOnHBase can find a home in side HBase core.
 Here is the github: 
 https://github.com/cloudera-labs/SparkOnHBase
 I am the core author of this project and the license is Apache 2.0
 A blog explaining this project is here
 http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
 A spark Streaming example is here
 http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
 A real customer using this in produce is blogged here
 http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
 Please debate 

[jira] [Commented] (HBASE-14119) Show meaningful error messages instead of stack traces in hbase shell commands. Fixing few commands in this jira.

2015-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634394#comment-14634394
 ] 

Hudson commented on HBASE-14119:


FAILURE: Integrated in HBase-TRUNK #6667 (See 
[https://builds.apache.org/job/HBase-TRUNK/6667/])
HBASE-14119 Show error message instead of stack traces in hbase shell commands. 
(Apekshit) (matteo.bertozzi: rev 7ce318dd3be9df0ee1c025b4792ded0161aa2c9c)
* hbase-shell/src/main/ruby/shell/commands.rb


 Show meaningful error messages instead of stack traces in hbase shell 
 commands. Fixing few commands in this jira.
 -

 Key: HBASE-14119
 URL: https://issues.apache.org/jira/browse/HBASE-14119
 Project: HBase
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2

 Attachments: HBASE-14119-branch-1.patch, HBASE-14119.patch


 This isn't really a functional bug, just more about erroring out cleanly.
 In the future, everyone should check and catch exceptions. Meaningful error 
 messages should be shown instead of stack traces. For debugging purposes, 
 'hbase shell -d' can be used which outputs a detailed stack trace.
 * the shell commands assign, move, unassign and merge_region can throw the 
 following error if given an invalid argument:
 {noformat}
 hbase(main):032:0 unassign 'adsfdsafdsa'
 ERROR: org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
   at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
 Here is some help for this command:
 Unassign a region. Unassign will close region in current location and then
 reopen it again.  Pass 'true' to force the unassignment ('force' will clear
 all in-memory state in master before the reassign. If results in
 double assignment use hbck -fix to resolve. To be used by experts).
 Use with caution.  For expert use only.  Examples:
   hbase unassign 'REGIONNAME'
   hbase unassign 'REGIONNAME', true
 hbase(main):033:0 
 {noformat}
 * drop_namespace, describe_namespace throw stack trace too.
 {noformat}
 hbase(main):002:0 drop_namespace SDf
 ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.remove(TableNamespaceManager.java:175)
   at 
 org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2119)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:430)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44279)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Drop the named namespace. The namespace must be empty.
 {noformat}
 * fix error message in close_region
 {noformat}
 hbase(main):007:0 close_region sdf
 ERROR: sdf
 {noformat}
 * delete_snapshot throws exception too.
 {noformat}
 ERROR: org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: 
 Snapshot 'sdf' doesn't exist on the filesystem
   at 
 org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:270)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:452)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44261)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Delete a specified snapshot. Examples:
   hbase delete_snapshot 'snapshotName',
 {noformat}
 other commands, when 

[jira] [Commented] (HBASE-14081) (outdated) references to SVN/trunk in documentation

2015-07-20 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634393#comment-14634393
 ] 

Gabor Liptak commented on HBASE-14081:
--

[~ndimiduk] Could you comment on the SVN usage? Thanks

 (outdated) references to SVN/trunk in documentation
 ---

 Key: HBASE-14081
 URL: https://issues.apache.org/jira/browse/HBASE-14081
 Project: HBase
  Issue Type: Bug
Reporter: Gabor Liptak
Priority: Minor

 Looking at
 https://svn.apache.org/repos/asf/hbase/tags/
 SVN no longer seems to be updated.
 Is http://hbase.apache.org/ being built from Git? 
 https://issues.apache.org/jira/browse/INFRA-7768 is also being discussed.
 Can those references be updated to master (or removed)?
 Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14119) Show meaningful error messages instead of stack traces in hbase shell commands. Fixing few commands in this jira.

2015-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634395#comment-14634395
 ] 

Hadoop QA commented on HBASE-14119:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12746196/HBASE-14119-branch-1.patch
  against branch-1 branch at commit 0f614a1c44e1887ca7177b66bb6208b6e69db7e1.
  ATTACHMENT ID: 12746196

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14844//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14844//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14844//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14844//console

This message is automatically generated.

 Show meaningful error messages instead of stack traces in hbase shell 
 commands. Fixing few commands in this jira.
 -

 Key: HBASE-14119
 URL: https://issues.apache.org/jira/browse/HBASE-14119
 Project: HBase
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2

 Attachments: HBASE-14119-branch-1.patch, HBASE-14119.patch


 This isn't really a functional bug, just more about erroring out cleanly.
 In the future, everyone should check and catch exceptions. Meaningful error 
 messages should be shown instead of stack traces. For debugging purposes, 
 'hbase shell -d' can be used which outputs a detailed stack trace.
 * the shell commands assign, move, unassign and merge_region can throw the 
 following error if given an invalid argument:
 {noformat}
 hbase(main):032:0 unassign 'adsfdsafdsa'
 ERROR: org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
   at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
 Here is some help for this command:
 Unassign a region. Unassign will close region in current location and then
 reopen it again.  Pass 'true' to force the unassignment ('force' will clear
 all in-memory state in master before the reassign. If results in
 double assignment use hbck -fix to resolve. To be used by experts).
 Use with caution.  For expert use only.  Examples:
   hbase unassign 'REGIONNAME'
   hbase unassign 'REGIONNAME', true
 hbase(main):033:0 
 {noformat}
 * drop_namespace, describe_namespace throw stack trace too.
 {noformat}
 hbase(main):002:0 drop_namespace SDf
 ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
   at 
 

[jira] [Commented] (HBASE-14119) Show meaningful error messages instead of stack traces in hbase shell commands. Fixing few commands in this jira.

2015-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634429#comment-14634429
 ] 

Hudson commented on HBASE-14119:


FAILURE: Integrated in HBase-1.2-IT #60 (See 
[https://builds.apache.org/job/HBase-1.2-IT/60/])
HBASE-14119 Show error message instead of stack traces in hbase shell commands. 
(Apekshit) (matteo.bertozzi: rev 6e558816cdfde23fac672897617c8c5fd6258b29)
* hbase-shell/src/main/ruby/shell/commands.rb


 Show meaningful error messages instead of stack traces in hbase shell 
 commands. Fixing few commands in this jira.
 -

 Key: HBASE-14119
 URL: https://issues.apache.org/jira/browse/HBASE-14119
 Project: HBase
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2

 Attachments: HBASE-14119-branch-1.patch, HBASE-14119.patch


 This isn't really a functional bug, just more about erroring out cleanly.
 In the future, everyone should check and catch exceptions. Meaningful error 
 messages should be shown instead of stack traces. For debugging purposes, 
 'hbase shell -d' can be used which outputs a detailed stack trace.
 * the shell commands assign, move, unassign and merge_region can throw the 
 following error if given an invalid argument:
 {noformat}
 hbase(main):032:0 unassign 'adsfdsafdsa'
 ERROR: org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.UnknownRegionException: adsfdsafdsa
   at org.apache.hadoop.hbase.master.HMaster.unassign(HMaster.java:1562)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
   at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
 Here is some help for this command:
 Unassign a region. Unassign will close region in current location and then
 reopen it again.  Pass 'true' to force the unassignment ('force' will clear
 all in-memory state in master before the reassign. If results in
 double assignment use hbck -fix to resolve. To be used by experts).
 Use with caution.  For expert use only.  Examples:
   hbase> unassign 'REGIONNAME'
   hbase> unassign 'REGIONNAME', true
 hbase(main):033:0> 
 {noformat}
 * drop_namespace, describe_namespace throw stack traces too.
 {noformat}
 hbase(main):002:0> drop_namespace SDf
 ERROR: org.apache.hadoop.hbase.NamespaceNotFoundException: SDf
   at 
 org.apache.hadoop.hbase.master.TableNamespaceManager.remove(TableNamespaceManager.java:175)
   at 
 org.apache.hadoop.hbase.master.HMaster.deleteNamespace(HMaster.java:2119)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteNamespace(MasterRpcServices.java:430)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44279)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Drop the named namespace. The namespace must be empty.
 {noformat}
 * fix error message in close_region
 {noformat}
 hbase(main):007:0> close_region sdf
 ERROR: sdf
 {noformat}
 * delete_snapshot throws exception too.
 {noformat}
 ERROR: org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: 
 Snapshot 'sdf' doesn't exist on the filesystem
   at 
 org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:270)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:452)
   at 
 org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44261)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
   at java.lang.Thread.run(Thread.java:745)
 Here is some help for this command:
 Delete a specified snapshot. Examples:
   hbase> delete_snapshot 'snapshotName',
 {noformat}
 other commands, when given 

[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-07-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634174#comment-14634174
 ] 

Andrew Purtell commented on HBASE-6721:
---

bq.  Let me suggest that we commit what we have with a cursory review to a 
branch and then make progress there.

+1
Ping me for assistance when you're ready to commit to a branch [~toffer] 

 RegionServer Group based Assignment
 ---

 Key: HBASE-6721
 URL: https://issues.apache.org/jira/browse/HBASE-6721
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu
 Attachments: 6721-master-webUI.patch, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
 HBASE-6721_10.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
 HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
 HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
 HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
 HBASE-6721_94_7.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, 
 HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, HBASE-6721_trunk2.patch


 In multi-tenant deployments of HBase, it is likely that a RegionServer will 
 be serving out regions from a number of different tables owned by various 
 client applications. Being able to group a subset of running RegionServers 
 and assign specific tables to it, provides a client application a level of 
 isolation and resource allocation.
 The proposal essentially is to have an AssignmentManager which is aware of 
 RegionServer groups and assigns tables to region servers based on groupings. 
 Load balancing will occur on a per group basis as well. 
 This is essentially a simplification of the approach taken in HBASE-4120. See 
 attached document.
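
To make the grouping idea concrete, here is a toy sketch of group-constrained
placement. All names are hypothetical; this is not the HBase API, and the
actual patch integrates the grouping with the AssignmentManager and the load
balancer rather than a standalone class:
{noformat}
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical illustration: each table belongs to a named group, each group
// owns a set of RegionServers, and a table's regions may only be placed on
// servers from its own group.
public class GroupAwarePlacement {
  private final Map<String, List<String>> serversByGroup; // group -> servers
  private final Map<String, String> groupByTable;         // table -> group
  private final Random random = new Random();

  public GroupAwarePlacement(Map<String, List<String>> serversByGroup,
                             Map<String, String> groupByTable) {
    this.serversByGroup = serversByGroup;
    this.groupByTable = groupByTable;
  }

  /** Pick a server for a region of the given table, staying inside its group. */
  public String pickServer(String table) {
    String group = groupByTable.get(table);
    if (group == null) {
      group = "default"; // tables without an explicit group fall back to a default group
    }
    List<String> candidates = serversByGroup.get(group);
    if (candidates == null || candidates.isEmpty()) {
      throw new IllegalStateException("No RegionServers registered for group " + group);
    }
    return candidates.get(random.nextInt(candidates.size()));
  }
}
{noformat}
Per-group load balancing follows the same shape: the balancer only considers the
servers of one group at a time when evening out region counts.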



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14097) Log link to client scan troubleshooting section when scanner exceptions happen.

2015-07-20 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634235#comment-14634235
 ] 

Matteo Bertozzi commented on HBASE-14097:
-

+1

 Log link to client scan troubleshooting section when scanner exceptions 
 happen.
 ---

 Key: HBASE-14097
 URL: https://issues.apache.org/jira/browse/HBASE-14097
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Attachments: HBASE-14097.patch


 As per description.
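
The improvement is small but useful: when a scanner call fails on the client,
the log line should also point the operator at the troubleshooting section of
the reference guide. A hedged sketch of the idea follows; the exact wording and
book anchor are whatever the attached patch settles on:
{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ScannerErrorLogging {
  private static final Log LOG = LogFactory.getLog(ScannerErrorLogging.class);

  // Illustrative link; the patch may reference a different anchor.
  private static final String DOC_HINT =
      "See https://hbase.apache.org/book.html#trouble.client for client scanner troubleshooting.";

  static void handleScannerFailure(Exception e) {
    // Keep the original exception, but add a pointer to the docs.
    LOG.error("Scanner call failed. " + DOC_HINT, e);
  }
}
{noformat}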



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13981) Fix ImportTsv spelling and usage issues

2015-07-20 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634344#comment-14634344
 ] 

Gabor Liptak commented on HBASE-13981:
--

Would this refactor be done in a followup Jira?

 Fix ImportTsv spelling and usage issues
 ---

 Key: HBASE-13981
 URL: https://issues.apache.org/jira/browse/HBASE-13981
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Gabor Liptak
  Labels: beginner
 Fix For: 2.0.0, 1.3.0

 Attachments: HBASE-13981.1.patch, HBASE-13981.2.patch, 
 HBASE-13981.3.patch, HBASE-13981.4.patch


 The {{ImportTsv}} tool has various spelling and formatting issues. Fix those.
 In code:
 {noformat}
   public final static String ATTRIBUTE_SEPERATOR_CONF_KEY = "attributes.seperator";
 {noformat}
 It should be separator.
 In usage text:
 {noformat}
 input data. Another special columnHBASE_TS_KEY designates that this column 
 should be
 {noformat}
 Space missing.
 {noformat}
 Record with invalid timestamps (blank, non-numeric) will be treated as bad 
 record.
 {noformat}
 Records ... as bad records - plural missing twice.
 {noformat}
 HBASE_ATTRIBUTES_KEY can be used to specify Operation Attributes per record.
  Should be specified as key=value where -1 is used 
  as the seperator.  Note that more than one OperationAttributes can be 
 specified.
 {noformat}
 - Remove line wraps and indentation. 
 - Fix separator.
 - Fix the wrong separator being output; it is not -1 (wrong constant used in the 
 code).
 - General wording/style could be better (e.g. the last sentence uses 
 OperationAttributes without a space).
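
One possible way to stage the constant rename without breaking jobs that already
set the misspelled key, shown purely as a sketch (the attached patches may handle
compatibility differently, and the class and field names here are illustrative):
{noformat}
public final class ImportTsvKeys {
  /**
   * Correctly spelled constant name. The config string itself is left
   * unchanged so existing job configurations keep working.
   */
  public static final String ATTRIBUTE_SEPARATOR_CONF_KEY = "attributes.seperator";

  /** Misspelled legacy constant name, kept only as a deprecated alias. */
  @Deprecated
  public static final String ATTRIBUTE_SEPERATOR_CONF_KEY = ATTRIBUTE_SEPARATOR_CONF_KEY;

  private ImportTsvKeys() {
  }
}
{noformat}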



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14126) I'm using play framework, creating a sample Hbase project. And I keep getting this error: tried to access method com.google.common.base.Stopwatch.<init>()V from class

2015-07-20 Thread Lesley Cheung (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14634357#comment-14634357
 ] 

Lesley Cheung commented on HBASE-14126:
---

Hi, 

I tried to add a guava dependency override in build.sbt, like this: 
dependencyOverrides += "com.google.guava" % "guava" % "12.0.1" intransitive(), 
and I also tried other guava versions lower than version 17, but it didn't work. 
Do you have any other ideas?

Thank you so much!

 I'm using play framework, creating a sample Hbase project. And I keep getting 
 this error: tried to access method com.google.common.base.Stopwatch.<init>()V 
 from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
 -

 Key: HBASE-14126
 URL: https://issues.apache.org/jira/browse/HBASE-14126
 Project: HBase
  Issue Type: Task
 Environment: Ubuntu; Play Framework 2.4.2; Hbase 1.0
Reporter: Lesley Cheung
Assignee: Lesley Cheung

 The simple HBase project works in Maven, but when I use Play Framework it 
 keeps showing this error. I modified the lib dependencies many times, but I 
 just can't eliminate it. Someone please help me! 
  
 java.lang.IllegalAccessError: tried to access method 
 com.google.common.base.Stopwatch.<init>()V from class 
 org.apache.hadoop.hbase.zookeeper.MetaTableLocator
   at 
 org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:434)
   at 
 org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:60)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1123)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1110)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1262)
   at 
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1126)
   at 
 org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:369)
   at 
 org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:320)
   at 
 org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
   at 
 org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1496)
   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1119)
   at utils.BigQueue.run(BigQueue.java:68)
   at 
 akka.actor.LightArrayRevolverScheduler$$anon$2$$anon$1.run(Scheduler.scala:242)
   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
   at 
 akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
   at 
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
   at 
 scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
   at 
 scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
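
For what it's worth, this IllegalAccessError almost always means a newer Guava
wins on the runtime classpath: MetaTableLocator in HBase 1.0 calls the Stopwatch
constructor, which Guava 17+ made non-public, so any 17.x or 18.x jar pulled in
transitively (by Play or another dependency) triggers exactly this error. A quick
diagnostic sketch (not a fix) to see which jar actually supplies Stopwatch at
runtime:
{noformat}
import com.google.common.base.Stopwatch;

public class GuavaCheck {
  public static void main(String[] args) {
    // Prints the jar that supplies Stopwatch on the runtime classpath. If this
    // is a Guava 17+ jar, the dependencyOverrides setting is not taking effect.
    System.out.println(
        Stopwatch.class.getProtectionDomain().getCodeSource().getLocation());
  }
}
{noformat}
If the printed path is a 17.x or 18.x jar, the usual next step is to pin Guava in
libraryDependencies or exclude the newer copy from whichever dependency drags it in.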



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

