[jira] [Commented] (HBASE-10454) Tags presence file info can be wrong in HFiles when PrefixTree encoding is used

2014-02-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890479#comment-13890479
 ] 

ramkrishna.s.vasudevan commented on HBASE-10454:


+1 on patch.

 Tags presence file info can be wrong in HFiles when PrefixTree encoding is 
 used
 ---

 Key: HBASE-10454
 URL: https://issues.apache.org/jira/browse/HBASE-10454
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Fix For: 0.98.0

 Attachments: HBASE-10454.patch, HBASE-10454_V2.patch


 We always encode tags in the case of PrefixTree now, so the decoding code 
 path does not check what is in FileInfo.  So there are no functional issues 
 now.
 If we do HBASE-10453, this change will be very important to ensure backward 
 compatibility (BC) for old files.
 We use the file info MAX_TAGS_LEN to know the presence of tags in a file.  In 
 the case of prefix tree we always have tags in files even if all KVs have 0 
 tags.  So we should add this file info to prefix-tree-encoded HFiles; right 
 now it gets missed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10463) Filter on columns containing numerics yield wrong results

2014-02-04 Thread Deepa Vasanthkumar (JIRA)
Deepa Vasanthkumar created HBASE-10463:
--

 Summary: Filter on columns containing numerics yield wrong results
 Key: HBASE-10463
 URL: https://issues.apache.org/jira/browse/HBASE-10463
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.8
Reporter: Deepa Vasanthkumar


Used SingleColumnValueFilter with CompareFilter.CompareOp.GREATER_OR_EQUAL for 
filtering the scan result. 
However, for columns which have a numeric value the scan result is not 
correct, because of lexicographic comparison.

Does HBase support numeric value filters (for equal, greater or equal, ...) 
for columns? If not, can we add it?




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10447:
---

Attachment: HBASE-10447_trunk_1.patch

This should be the simplest of the patches.  The constructor changed in this 
patch is the one that is called in compaction and flush.  In the 0.96 case it 
does not add the observers, so the same should be the case in 0.98 and trunk.  
Do we need the reset to happen in compaction? I don't think so.  
[~apurtell], [~lhofhansl], [~anoop.hbase]
Please review this patch; if we are OK with it we could commit it.


 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In the case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemstoreScanner.  
 But this scanner also registers for any updates on the reader in the HStore.  
 Is this needed?  
 If this happens, then any update on the reader may nullify the current heap 
 and the entire scanner stack is reset, but this time with the other scanners 
 for all the files that satisfy the last top key.  So the flush that happens 
 on the memstore also holds the storefile scanners in the heap that was 
 recreated, but originally the intention was to create a scanner on the 
 memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10447:
---

Status: Patch Available  (was: Open)

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In the case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemstoreScanner.  
 But this scanner also registers for any updates on the reader in the HStore.  
 Is this needed?  
 If this happens, then any update on the reader may nullify the current heap 
 and the entire scanner stack is reset, but this time with the other scanners 
 for all the files that satisfy the last top key.  So the flush that happens 
 on the memstore also holds the storefile scanners in the heap that was 
 recreated, but originally the intention was to create a scanner on the 
 memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10447:
---

Fix Version/s: (was: 0.96.2)
Affects Version/s: (was: 0.96.1.1)
   Status: Open  (was: Patch Available)

Removing 0.96 from the fix and affects versions.

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In the case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemstoreScanner.  
 But this scanner also registers for any updates on the reader in the HStore.  
 Is this needed?  
 If this happens, then any update on the reader may nullify the current heap 
 and the entire scanner stack is reset, but this time with the other scanners 
 for all the files that satisfy the last top key.  So the flush that happens 
 on the memstore also holds the storefile scanners in the heap that was 
 recreated, but originally the intention was to create a scanner on the 
 memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10463) Filter on columns containing numerics yield wrong results

2014-02-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890515#comment-13890515
 ] 

Anoop Sam John commented on HBASE-10463:


You can use the below constructor in SCVF:
{code}
public SingleColumnValueFilter(final byte [] family, final byte [] qualifier,
  final CompareOp compareOp, final WritableByteArrayComparable comparator) {
{code}
Instead of the BinaryComparator you can make a comparator for the numeric 
values and use it.
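To make the effect concrete, here is a self-contained sketch in plain Java (not the HBase comparator API; the class and method names are hypothetical) of why byte-by-byte lexicographic comparison, as BinaryComparator does it, misorders numeric string values, and what a numeric comparator would do instead:

```java
import java.nio.charset.StandardCharsets;

public class NumericCompareSketch {
    // Lexicographic byte-by-byte comparison, the BinaryComparator behavior.
    static int compareBytes(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // A numeric comparator parses the cell value before comparing.
    static int compareNumeric(byte[] a, byte[] b) {
        long x = Long.parseLong(new String(a, StandardCharsets.UTF_8));
        long y = Long.parseLong(new String(b, StandardCharsets.UTF_8));
        return Long.compare(x, y);
    }

    public static void main(String[] args) {
        byte[] nine = "9".getBytes(StandardCharsets.UTF_8);
        byte[] ten = "10".getBytes(StandardCharsets.UTF_8);
        // Lexicographically "10" sorts before "9", so GREATER_OR_EQUAL misfires.
        System.out.println(compareBytes(ten, nine) < 0);   // true
        System.out.println(compareNumeric(ten, nine) > 0); // true
    }
}
```

Note that if the values had instead been written with a fixed-width binary encoding (e.g. 8-byte big-endian longs), lexicographic byte order would already match numeric order for non-negative values.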

 Filter on columns containing numerics yield wrong results
 -

 Key: HBASE-10463
 URL: https://issues.apache.org/jira/browse/HBASE-10463
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.8
Reporter: Deepa Vasanthkumar
   Original Estimate: 168h
  Remaining Estimate: 168h

 Used SingleColumnValueFilter with CompareFilter.CompareOp.GREATER_OR_EQUAL 
 for filtering the scan result. 
 However, for columns which have a numeric value the scan result is not 
 correct, because of lexicographic comparison.
 Does HBase support numeric value filters (for equal, greater or equal, ...) 
 for columns? If not, can we add it?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10449) Wrong execution pool configuration in HConnectionManager

2014-02-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890522#comment-13890522
 ] 

Nicolas Liochon commented on HBASE-10449:
-

Note that we're not exactly changing a default. 
The code wanted to do:
{panel}Create up to 'max' (default: 256) threads. Expire them if they are not 
used for 10 seconds, except for 'core' (default: 0) of them. If there are more 
than 'max' tasks, queue them.{panel} 

Actually it was doing:
{panel}Create a single thread, and queue all the tasks for this thread.{panel}

So the patch actually implements what was supposed to be implemented (or at 
least tries to :-) ).

Moreover, it's a regression from HBASE-9917, so 0.96.0 really uses 256 
threads. It's a *0.96.1* issue only. But yes, it does have an impact on 
performance, and this impact can be good or bad. That's why I would like it to 
be in the 0.98 RC, and also why I think it's simpler to have the same defaults 
on all versions.

Lastly, and unrelated: we didn't have a limit on the number of threads before 
0.96. I'm wondering whether there is an impact if a server hangs. The client 
may end up with all its connections stuck to this server until it times out.
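The single-thread pitfall described here can be reproduced with a plain ThreadPoolExecutor (a generic sketch, not the actual HConnectionManager code): with an unbounded work queue, the executor only creates new threads when the queue is full, which never happens, so the pool never grows past its core size regardless of 'max':

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSketch {
    public static void main(String[] args) throws Exception {
        // core=1, max=256, 10s keep-alive -- but the unbounded queue means
        // threads beyond the core are only created when the queue is full,
        // i.e. never.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 256, 10, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 50; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        // Despite 50 pending tasks, only the single core thread exists.
        System.out.println(pool.getPoolSize()); // 1
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

To actually get "grow to max, then queue" semantics one needs either a SynchronousQueue or a core size equal to max; which variant HBASE-10449 chose is in the attached patch, not shown here.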

 Wrong execution pool configuration in HConnectionManager
 

 Key: HBASE-10449
 URL: https://issues.apache.org/jira/browse/HBASE-10449
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Critical
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10449.v1.patch


 There is a confusion in the configuration of the pool. The attached patch 
 fixes this. This may change the client performances, as we were using a 
 single thread.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10455) cleanup InterruptedException management

2014-02-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890528#comment-13890528
 ] 

Nicolas Liochon commented on HBASE-10455:
-

bq. TestZKSecretWatcher may need fixing after this, if the precommit failure is 
not spurious
I've run the test 30 times locally; it works.

bq. Is this the kind of thing we can catch with FindBugs?
I don't know how to configure FindBugs for this. It would need a specific rule 
today. A long time ago I added some rules to PMD; it was not difficult, but 
not trivial either (a few hours of work).

 cleanup InterruptedException management
 ---

 Key: HBASE-10455
 URL: https://issues.apache.org/jira/browse/HBASE-10455
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver
Affects Versions: 0.98.0, 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10455.v1.patch


 4 changes in this code:
 1) When caught and rethrown as an IOException, we now always rethrow an 
 InterruptedIOException.
 2) When we were both throwing an exception AND resetting the interrupt 
 status, we now only throw the exception.
 3) When we were trying to reset the status with Thread.interrupted() (which 
 does not do that), we now do it for real with 
 Thread.currentThread().interrupt().
 4) Sometimes we were rethrowing something other than InterruptedIOException 
 even though the contract would have allowed it.  I've changed this as well.
 This patch does not mean that we're fine when we're interrupted, but we're 
 globally cleaner at least.  I will then create other patches specific to 
 some parts.
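Points 1 and 3 above can be sketched in a generic, self-contained way (a hypothetical helper, not the patched HBase code): convert InterruptedException into InterruptedIOException so callers can distinguish an interrupt from a real I/O failure, and restore the interrupt flag with Thread.currentThread().interrupt() rather than Thread.interrupted(), which *clears and returns* the flag instead of setting it:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class InterruptSketch {
    // Point 1: rethrow as InterruptedIOException, not a bare IOException.
    static void sleepChecked(long millis) throws IOException {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            // Point 3: restore the flag for real. Thread.interrupted()
            // would CLEAR it instead of setting it.
            Thread.currentThread().interrupt();
            throw (InterruptedIOException) new InterruptedIOException(
                "interrupted while sleeping").initCause(e);
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // simulate a pending interrupt
        try {
            sleepChecked(1000);
        } catch (IOException e) {
            System.out.println(e instanceof InterruptedIOException); // true
        }
        System.out.println(Thread.interrupted()); // true: status was restored
    }
}
```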



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10454) Tags presence file info can be wrong in HFiles when PrefixTree encoding is used

2014-02-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890538#comment-13890538
 ] 

Anoop Sam John commented on HBASE-10454:


bq. Well, it wouldn't hurt to indicate in the FileInfo that tags are present. 
That part is fine.
[~apurtell], since you said the above, I take it you are fine with the V2 
patch. Will commit later today.

 Tags presence file info can be wrong in HFiles when PrefixTree encoding is 
 used
 ---

 Key: HBASE-10454
 URL: https://issues.apache.org/jira/browse/HBASE-10454
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Fix For: 0.98.0

 Attachments: HBASE-10454.patch, HBASE-10454_V2.patch


 We always encode tags in the case of PrefixTree now, so the decoding code 
 path does not check what is in FileInfo.  So there are no functional issues 
 now.
 If we do HBASE-10453, this change will be very important to ensure backward 
 compatibility (BC) for old files.
 We use the file info MAX_TAGS_LEN to know the presence of tags in a file.  In 
 the case of prefix tree we always have tags in files even if all KVs have 0 
 tags.  So we should add this file info to prefix-tree-encoded HFiles; right 
 now it gets missed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10413) Tablesplit.getLength returns 0

2014-02-04 Thread Lukas Nalezenec (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890550#comment-13890550
 ] 

Lukas Nalezenec commented on HBASE-10413:
-

Hi,
I know it is hacky. This is my first HBase commit; I was not sure how to do 
it, so I asked 3 people and then published a first draft as soon as possible. 
Everybody was fine with the solution :( .

The hacky solution is good enough for us - I have already deployed it 
yesterday.  I can't spend much more time on this; I need to close it by 
tomorrow.

How about this solution? I am not sure if it is the best way - it does not 
work with Scan ranges.

ToDos:
We need to filter regions by table.
It would be nice if we could filter size by column families.


https://github.com/apache/hbase/pull/8/files#diff-46ff60f1e27e3d77131acb7873050990R68


{code}
// sizeMap declaration assumed from the surrounding patch context.
Map<byte[], Long> sizeMap = new TreeMap<byte[], Long>(Bytes.BYTES_COMPARATOR);

HBaseAdmin admin = new HBaseAdmin(configuration);

ClusterStatus clusterStatus = admin.getClusterStatus();
Collection<ServerName> servers = clusterStatus.getServers();

for (ServerName serverName : servers) {
  ServerLoad serverLoad = clusterStatus.getLoad(serverName);

  for (Map.Entry<byte[], RegionLoad> regionEntry :
      serverLoad.getRegionsLoad().entrySet()) {
    byte[] regionId = regionEntry.getKey();
    RegionLoad regionLoad = regionEntry.getValue();

    // Region size in bytes = memstore + store files.
    long regionSize = 1024L * 1024L * (regionLoad.getMemStoreSizeMB()
        + regionLoad.getStorefileSizeMB());

    sizeMap.put(regionId, regionSize);
  }
}
{code}

 Tablesplit.getLength returns 0
 --

 Key: HBASE-10413
 URL: https://issues.apache.org/jira/browse/HBASE-10413
 Project: HBase
  Issue Type: Bug
  Components: Client, mapreduce
Affects Versions: 0.96.1.1
Reporter: Lukas Nalezenec
Assignee: Lukas Nalezenec

 InputSplits should be sorted by length, but TableSplit does not contain a 
 real getLength implementation:
   @Override
   public long getLength() {
     // Not clear how to obtain this... seems to be used only for sorting 
     // splits
     return 0;
   }
 This is causing us problems with scheduling - we have jobs that are supposed 
 to finish in a limited time, but they often get stuck in the last mapper 
 working on a large region.
 Can we implement this method? 
 What is the best way?
 We were thinking about estimating the size from the size of the files on 
 HDFS.
 We would like to get a Scanner from the TableSplit, use startRow, stopRow and 
 the column families to get the corresponding region, then compute the HDFS 
 size for the given region and column family. 
 Update:
 This ticket was about a production issue - I talked with the guy who worked 
 on this and he said our production issue was probably not directly caused by 
 getLength() returning 0. 
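The scheduling motivation above is that the framework sorts splits by length so the largest regions start first; with getLength() always 0, the order is arbitrary. A generic sketch of that sort in plain Java (a hypothetical Split stand-in, not the HBase TableSplit API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SplitSortSketch {
    // Hypothetical stand-in for an InputSplit with a real getLength().
    static class Split {
        final String region;
        final long length;
        Split(String region, long length) {
            this.region = region;
            this.length = length;
        }
    }

    // Sort splits by descending length so the largest regions are
    // scheduled first, avoiding a long tail on the last mapper.
    static List<Split> sortByLength(List<Split> splits) {
        List<Split> sorted = new ArrayList<>(splits);
        sorted.sort(Comparator.comparingLong((Split s) -> s.length).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<Split> splits = Arrays.asList(
            new Split("small", 64L << 20),    // 64 MB
            new Split("huge", 4L << 30),      // 4 GB
            new Split("medium", 512L << 20)); // 512 MB
        System.out.println(sortByLength(splits).get(0).region); // huge
    }
}
```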



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10319:


   Resolution: Fixed
Fix Version/s: 0.94.18
   0.96.2
   0.98.0
   Status: Resolved  (was: Patch Available)

committed to 0.94, 0.96, 0.98 and trunk

 HLog should roll periodically to allow DN decommission to eventually complete.
 --

 Key: HBASE-10319
 URL: https://issues.apache.org/jira/browse/HBASE-10319
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch


 We encountered a situation where we had an essentially read-only table and 
 attempted to do a clean HDFS DN decommission.  DNs cannot decommission if 
 there are open blocks currently being written on them.  Because the HBase 
 HLog file was open and had some data (the hlog header), the DN could not 
 decommission itself.  Since no new data is ever written, the existing 
 periodic check is not activated.
 After discussing with [~atm], it seems that although an HDFS semantics change 
 would be ideal (e.g. HBase doesn't have to be aware of HDFS decommission and 
 the client would roll over), this would take much more effort than having 
 HBase periodically force a log roll.  This would enable the HDFS DN 
 decommission to complete.
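For operators hitting this, the periodic roll is governed by the region server's log-roll period; a sketch for hbase-site.xml (the property name matches what the patch's TestLogRollPeriod exercises, but verify it against your release):

```xml
<!-- hbase-site.xml: roll each region server's HLog every 30 minutes,
     even when no new data is written (assumed default: 3600000 ms). -->
<property>
  <name>hbase.regionserver.logroll.period</name>
  <value>1800000</value>
</property>
```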



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10461) table.close() in TableEventHandler#reOpenAllRegions() should be enclosed in finally block

2014-02-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890552#comment-13890552
 ] 

ramkrishna.s.vasudevan commented on HBASE-10461:


+1. Belated.

 table.close() in TableEventHandler#reOpenAllRegions() should be enclosed in 
 finally block
 -

 Key: HBASE-10461
 URL: https://issues.apache.org/jira/browse/HBASE-10461
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Fix For: 0.98.0, 0.99.0

 Attachments: 10461-v1.txt


 If table.getRegionLocations() throws an exception, table.close() would be 
 skipped, leaking the underlying resource.
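The fix is the standard try/finally pattern (or try-with-resources on Java 7+); a self-contained sketch with a stand-in resource (the names here are hypothetical, not the HBase HTable API):

```java
public class CloseInFinallySketch {
    // Stand-in for HTable: a resource that must always be closed.
    static class FakeTable implements AutoCloseable {
        boolean closed = false;
        void getRegionLocations() { throw new RuntimeException("boom"); }
        @Override public void close() { closed = true; }
    }

    static boolean reopenAllRegions(FakeTable table) {
        try {
            table.getRegionLocations(); // may throw
            return true;
        } catch (RuntimeException e) {
            return false;
        } finally {
            table.close(); // runs whether or not the call above threw
        }
    }

    public static void main(String[] args) {
        FakeTable t = new FakeTable();
        reopenAllRegions(t);
        System.out.println(t.closed); // true: close() ran despite the exception
    }
}
```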



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10413) Tablesplit.getLength returns 0

2014-02-04 Thread Lukas Nalezenec (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890559#comment-13890559
 ] 

Lukas Nalezenec commented on HBASE-10413:
-

New version with per-table region filtering:
https://github.com/apache/hbase/pull/8/files#diff-46ff60f1e27e3d77131acb7873050990R76

 Tablesplit.getLength returns 0
 --

 Key: HBASE-10413
 URL: https://issues.apache.org/jira/browse/HBASE-10413
 Project: HBase
  Issue Type: Bug
  Components: Client, mapreduce
Affects Versions: 0.96.1.1
Reporter: Lukas Nalezenec
Assignee: Lukas Nalezenec

 InputSplits should be sorted by length, but TableSplit does not contain a 
 real getLength implementation:
   @Override
   public long getLength() {
     // Not clear how to obtain this... seems to be used only for sorting 
     // splits
     return 0;
   }
 This is causing us problems with scheduling - we have jobs that are supposed 
 to finish in a limited time, but they often get stuck in the last mapper 
 working on a large region.
 Can we implement this method? 
 What is the best way?
 We were thinking about estimating the size from the size of the files on 
 HDFS.
 We would like to get a Scanner from the TableSplit, use startRow, stopRow and 
 the column families to get the corresponding region, then compute the HDFS 
 size for the given region and column family. 
 Update:
 This ticket was about a production issue - I talked with the guy who worked 
 on this and he said our production issue was probably not directly caused by 
 getLength() returning 0. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890563#comment-13890563
 ] 

Hadoop QA commented on HBASE-10447:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12626849/HBASE-10447_trunk_1.patch
  against trunk revision .
  ATTACHMENT ID: 12626849

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8589//console

This message is automatically generated.

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In the case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemstoreScanner.  
 But this scanner also registers for any updates on the reader in the HStore.  
 Is this needed?  
 If this happens, then any update on the reader may nullify the current heap 
 and the entire scanner stack is reset, but this time with the other scanners 
 for all the files that satisfy the last top key.  So the flush that happens 
 on the memstore also holds the storefile scanners in the heap that was 
 recreated, but originally the intention was to create a scanner on the 
 memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890591#comment-13890591
 ] 

Hudson commented on HBASE-10319:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #8 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/8/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564249)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java


 HLog should roll periodically to allow DN decommission to eventually complete.
 --

 Key: HBASE-10319
 URL: https://issues.apache.org/jira/browse/HBASE-10319
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch


 We encountered a situation where we had an essentially read-only table and 
 attempted to do a clean HDFS DN decommission.  DNs cannot decommission if 
 there are open blocks currently being written on them.  Because the HBase 
 HLog file was open and had some data (the hlog header), the DN could not 
 decommission itself.  Since no new data is ever written, the existing 
 periodic check is not activated.
 After discussing with [~atm], it seems that although an HDFS semantics change 
 would be ideal (e.g. HBase doesn't have to be aware of HDFS decommission and 
 the client would roll over), this would take much more effort than having 
 HBase periodically force a log roll.  This would enable the HDFS DN 
 decommission to complete.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10463) Filter on columns containing numerics yield wrong results

2014-02-04 Thread Deepa Vasanthkumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890597#comment-13890597
 ] 

Deepa Vasanthkumar commented on HBASE-10463:


Okies...let me try that.

 Filter on columns containing numerics yield wrong results
 -

 Key: HBASE-10463
 URL: https://issues.apache.org/jira/browse/HBASE-10463
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.8
Reporter: Deepa Vasanthkumar
   Original Estimate: 168h
  Remaining Estimate: 168h

 Used SingleColumnValueFilter with CompareFilter.CompareOp.GREATER_OR_EQUAL 
 for filtering the scan result. 
 However, for columns which have a numeric value the scan result is not 
 correct, because of lexicographic comparison.
 Does HBase support numeric value filters (for equal, greater or equal, ...) 
 for columns? If not, can we add it?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890598#comment-13890598
 ] 

Hudson commented on HBASE-10319:


SUCCESS: Integrated in HBase-0.94-security #398 (See 
[https://builds.apache.org/job/HBase-0.94-security/398/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564249)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java


 HLog should roll periodically to allow DN decommission to eventually complete.
 --

 Key: HBASE-10319
 URL: https://issues.apache.org/jira/browse/HBASE-10319
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch


 We encountered a situation where we had an essentially read-only table and 
 attempted to do a clean HDFS DN decommission.  DNs cannot decommission if 
 there are open blocks currently being written on them.  Because the HBase 
 HLog file was open and had some data (the hlog header), the DN could not 
 decommission itself.  Since no new data is ever written, the existing 
 periodic check is not activated.
 After discussing with [~atm], it seems that although an HDFS semantics change 
 would be ideal (e.g. HBase doesn't have to be aware of HDFS decommission and 
 the client would roll over), this would take much more effort than having 
 HBase periodically force a log roll.  This would enable the HDFS DN 
 decommission to complete.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10463) Filter on columns containing numerics yield wrong results

2014-02-04 Thread Jayesh Janardhanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890601#comment-13890601
 ] 

Jayesh Janardhanan commented on HBASE-10463:


I am working on a similar issue. Will submit a patch shortly.

 Filter on columns containing numerics yield wrong results
 -

 Key: HBASE-10463
 URL: https://issues.apache.org/jira/browse/HBASE-10463
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.8
Reporter: Deepa Vasanthkumar
   Original Estimate: 168h
  Remaining Estimate: 168h

 Used SingleColumnValueFilter with CompareFilter.CompareOp.GREATER_OR_EQUAL 
 for filtering the scan result. 
 However, for columns which have a numeric value the scan result is not 
 correct, because of lexicographic comparison.
 Does HBase support numeric value filters (for equal, greater or equal, ...) 
 for columns? If not, can we add it?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890620#comment-13890620
 ] 

Hudson commented on HBASE-10319:


SUCCESS: Integrated in hbase-0.96 #279 (See 
[https://builds.apache.org/job/hbase-0.96/279/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564242)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java


 HLog should roll periodically to allow DN decommission to eventually complete.
 --

 Key: HBASE-10319
 URL: https://issues.apache.org/jira/browse/HBASE-10319
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch


 We encountered a situation where we had an essentially read-only table and 
 attempted to do a clean HDFS DN decommission.  DNs cannot decommission if 
 there are open blocks currently being written on them.  Because the hbase 
 HLog file was open and had some data (the HLog header), the DN could not 
 decommission itself.  Since no new data is ever written, the existing 
 periodic check is not activated.
 After discussing with [~atm], it seems that although an hdfs semantics change 
 would be ideal (e.g. hbase doesn't have to be aware of hdfs decommission and 
 the client would roll over) this would take much more effort than having 
 hbase periodically force a log roll.  This would enable the hdfs DN 
 decommission to complete.





[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890622#comment-13890622
 ] 

Hudson commented on HBASE-10319:


SUCCESS: Integrated in hbase-0.96-hadoop2 #192 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/192/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564242)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java







[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890623#comment-13890623
 ] 

Hudson commented on HBASE-10319:


SUCCESS: Integrated in HBase-TRUNK #4879 (See 
[https://builds.apache.org/job/HBase-TRUNK/4879/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564237)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java







[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890644#comment-13890644
 ] 

Hudson commented on HBASE-10319:


SUCCESS: Integrated in HBase-0.94 #1271 (See 
[https://builds.apache.org/job/HBase-0.94/1271/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564249)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java







[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890641#comment-13890641
 ] 

Hudson commented on HBASE-10319:


SUCCESS: Integrated in HBase-0.98 #125 (See 
[https://builds.apache.org/job/HBase-0.98/125/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564239)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java







[jira] [Commented] (HBASE-10461) table.close() in TableEventHandler#reOpenAllRegions() should be enclosed in finally block

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890643#comment-13890643
 ] 

Hudson commented on HBASE-10461:


SUCCESS: Integrated in HBase-0.98 #125 (See 
[https://builds.apache.org/job/HBase-0.98/125/])
HBASE-10461 table.close() in TableEventHandler#reOpenAllRegions() should be 
enclosed in finally block (tedyu: rev 1564183)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java


 table.close() in TableEventHandler#reOpenAllRegions() should be enclosed in 
 finally block
 -

 Key: HBASE-10461
 URL: https://issues.apache.org/jira/browse/HBASE-10461
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Fix For: 0.98.0, 0.99.0

 Attachments: 10461-v1.txt


 If table.getRegionLocations() throws an exception, table.close() would be 
 skipped, leaking the underlying resource.





[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890655#comment-13890655
 ] 

Hudson commented on HBASE-10319:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #117 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/117/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564239)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java







[jira] [Updated] (HBASE-10455) cleanup InterruptedException management

2014-02-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10455:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and 0.98, thanks for the review!

 cleanup InterruptedException management
 ---

 Key: HBASE-10455
 URL: https://issues.apache.org/jira/browse/HBASE-10455
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver
Affects Versions: 0.98.0, 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10455.v1.patch


 4 changes in this code:
 1) When the exception is caught and rethrown as an IOException, we now always 
 rethrow an InterruptedIOException.
 2) When we were both throwing an exception AND resetting the interrupt 
 status, we now only throw the exception.
 3) When we were trying to reset the status with Thread.interrupted() (which 
 does not do that), we now do it for real with 
 Thread.currentThread().interrupt().
 4) Sometimes we were rethrowing something other than InterruptedIOException 
 where the contract would have allowed it. I've changed this as well.
 This patch does not mean that we're fine when we're interrupted, but we're 
 globally cleaner at least. I will then create other patches specific to some 
 parts.





[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890723#comment-13890723
 ] 

Hudson commented on HBASE-10319:


FAILURE: Integrated in HBase-0.94-JDK7 #37 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/37/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564249)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java







[jira] [Commented] (HBASE-10455) cleanup InterruptedException management

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890722#comment-13890722
 ] 

Hudson commented on HBASE-10455:


SUCCESS: Integrated in HBase-TRUNK #4880 (See 
[https://builds.apache.org/job/HBase-TRUNK/4880/])
HBASE-10455 cleanup InterruptedException management (nkeywal: rev 1564241)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcCallback.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableLockManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java







[jira] [Commented] (HBASE-10455) cleanup InterruptedException management

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890781#comment-13890781
 ] 

Hudson commented on HBASE-10455:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #118 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/118/])
HBASE-10455 cleanup InterruptedException management (nkeywal: rev 1564248)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableUtil.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcCallback.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableLockManager.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java







[jira] [Commented] (HBASE-10455) cleanup InterruptedException management

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890796#comment-13890796
 ] 

Hudson commented on HBASE-10455:


FAILURE: Integrated in HBase-0.98 #126 (See 
[https://builds.apache.org/job/HBase-0.98/126/])
HBASE-10455 cleanup InterruptedException management (nkeywal: rev 1564248)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableUtil.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcCallback.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableLockManager.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java







[jira] [Commented] (HBASE-10454) Tags presence file info can be wrong in HFiles when PrefixTree encoding is used

2014-02-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890813#comment-13890813
 ] 

Andrew Purtell commented on HBASE-10454:


+1 v2

 Tags presence file info can be wrong in HFiles when PrefixTree encoding is 
 used
 ---

 Key: HBASE-10454
 URL: https://issues.apache.org/jira/browse/HBASE-10454
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Fix For: 0.98.0

 Attachments: HBASE-10454.patch, HBASE-10454_V2.patch


 We always encode tags in the case of Prefix Tree now, so the decoding code 
 path does not check what is in FileInfo. So there are no functional issues 
 now.
 If we do HBASE-10453 this change will be very important to ensure backward 
 compatibility for old files.
 We use the file info MAX_TAGS_LEN to know about the presence of tags in a 
 file. In the case of Prefix Tree we always have tags in files even if all 
 KVs have 0 tags.  So we had better add this file info into Prefix Tree 
 encoded HFiles; currently it gets missed.
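The proposed bookkeeping can be sketched abstractly: the writer tracks the largest tags length seen and persists it under the MAX_TAGS_LEN file-info key on close (for Prefix Tree the key must be written even when every KV has zero tags), and the reader treats the key's presence as the tags signal. Everything below is a hypothetical stand-in, not the HFile writer API; only the MAX_TAGS_LEN name comes from the issue:

```java
import java.util.HashMap;
import java.util.Map;

public class TagsFileInfo {
    static final String MAX_TAGS_LEN = "MAX_TAGS_LEN";  // key name from the issue

    // Writer side: record the largest tags length seen, and persist it into
    // the file-info map on close -- written unconditionally for Prefix Tree,
    // which always encodes tags, even when the maximum is 0.
    static Map<String, Integer> closeWriter(int[] tagLengthsSeen) {
        int max = 0;
        for (int len : tagLengthsSeen) max = Math.max(max, len);
        Map<String, Integer> fileInfo = new HashMap<>();
        fileInfo.put(MAX_TAGS_LEN, max);
        return fileInfo;
    }

    // Reader side: the key's presence tells the decoder that tags were encoded.
    static boolean fileHasTags(Map<String, Integer> fileInfo) {
        return fileInfo.containsKey(MAX_TAGS_LEN);
    }
}
```

Without the writer-side put, a Prefix Tree file would carry encoded tags that the file info never advertises, which is the mismatch the issue describes.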





[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890815#comment-13890815
 ] 

Andrew Purtell commented on HBASE-10447:


v1 patch lgtm

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of flush we create a memstore flusher which in turn creates a 
 StoreScanner backed by a singleton MemstoreScanner.
 But this scanner also registers for any updates to the reader in the HStore. 
 Is this needed?
 If so, any update on the reader may nullify the current heap, and the entire 
 scanner stack is reset, this time with the other scanners for all the files 
 that satisfy the last top key.  So the flush that happens on the memstore 
 also holds the storefile scanners in the recreated heap, although the 
 original intention was to create a scanner on the memstore alone.





[jira] [Commented] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Ding Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890844#comment-13890844
 ] 

Ding Yuan commented on HBASE-10452:
---

Thanks Ram! Attaching a patch against trunk. I did not fix case 7 since it 
requires using HBCK to clear the node. I have little expertise in the HBase 
code base, and any attempt of mine there is likely to do more harm than good. 
Again, any comment is much appreciated!

 Potential bugs in exception handlers
 

 Key: HBASE-10452
 URL: https://issues.apache.org/jira/browse/HBASE-10452
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver, util
Affects Versions: 0.96.1
Reporter: Ding Yuan
 Attachments: HBase-10452-trunk.patch


 Hi HBase developers,
 We are a group of researchers studying software reliability. Recently we did 
 a study and found that the majority of the most severe failures in HBase are 
 caused by bugs in exception-handling logic -- it is hard to anticipate all 
 the possible real-world error scenarios. Therefore we built a simple checking 
 tool that automatically detects bug patterns that have caused some very 
 severe real-world failures. I am reporting some of the results here. Any 
 feedback is much appreciated!
 Ding
 =
 Case 1:
   Line: 134, File: 
 org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java
 {noformat}
  protected void releaseTableLock() {
    if (this.tableLock != null) {
      try {
        this.tableLock.release();
      } catch (IOException ex) {
        LOG.warn("Could not release the table lock", ex);
        // TODO: if we get here, and not abort RS, this lock will never be released
      }
    }
  }
 {noformat}
 The lock is not released if the exception occurs, causing potential deadlock 
 or starvation.
 Similar code pattern can be found at:
   Line: 135, File: org/apache/hadoop/hbase/regionserver/SplitRequest.java
 ==
 =
 Case 2:
   Line: 252, File: 
 org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
 {noformat}
  try {
    Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
    fEnd.setAccessible(true);
    end = fEnd.getLong(this.reader);
  } catch (Exception e) { /* reflection fail. keep going */ }
 {noformat}
 The caught Exception seems to be too general.
 While reflection-related errors might be harmless, the try block can throw
 other exceptions including SecurityException, IllegalAccessException, 
 etc. Currently
 all those exceptions are ignored. Maybe
 the safe way is to ignore the specific reflection-related errors while 
 logging and
 handling other types of unexpected exceptions.
 ==
 =
 Case 3:
   Line: 148, File: org/apache/hadoop/hbase/HBaseConfiguration.java
 {noformat}
  try {
    if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
      isShowConf = true;
    }
  } catch (Exception e) {
  }
 {noformat}
 Similar to the previous case, the exception handling is too general. While 
 ClassNotFound error might be the normal case and ignored, Class.forName can 
 also throw other exceptions (e.g., LinkageError) under some unexpected and 
 rare error cases. If that happens, the error will be lost. So maybe change it 
 to below:
 {noformat}
  try {
    if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
      isShowConf = true;
    }
  } catch (ExceptionInInitializerError e) {
    LOG.warn(..);
    // handle initializer error (must precede LinkageError, its supertype)
  } catch (LinkageError e) {
    LOG.warn(..);
    // handle linkage error
  } catch (ClassNotFoundException e) {
    LOG.debug(..);
    // ignore
  }
 {noformat}
 ==
 =
 Case 4:
   Line: 163, File: org/apache/hadoop/hbase/client/Get.java
 {noformat}
  public Get setTimeStamp(long timestamp) {
    try {
      tr = new TimeRange(timestamp, timestamp + 1);
    } catch (IOException e) {
      // Will never happen
    }
    return this;
  }
 {noformat}
 Even if the IOException never happens right now, is it possible to happen in 
 the future due to code change?
 At least there should be a log message. The current behavior is dangerous 
 since if the exception ever happens
 in any unexpected scenario, it will be silently swallowed.
 Similar code pattern can be found at:
   Line: 300, File: org/apache/hadoop/hbase/client/Scan.java
 ==
 =
 Case 5:
   Line: 207, File: org/apache/hadoop/hbase/util/JVM.java
 {noformat}
  if (input != null) {
    try {
      input.close();
    } catch (IOException ignored) {
    }
  }
 {noformat}
 Any exception encountered in close is completely ignored, not even logged.
 In particular, the same exception scenario was handled differently in other 
 methods in the same file (Line: 154, same file).

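A common remedy for Case 1 and Case 5 is to move the release/close into a finally block so it runs whether or not the guarded code throws. Below is a minimal, self-contained sketch of that idiom; the Resource type and useAndAlwaysClose helper are made up for illustration and are not the actual HBase TableLock API.

```java
import java.io.Closeable;
import java.io.IOException;

public class FinallyReleaseExample {
  // Hypothetical resource; stands in for a table lock or an input stream.
  static final class Resource implements Closeable {
    boolean closed;
    @Override public void close() { closed = true; }
  }

  // Close in finally so the resource is released even when the work throws.
  static boolean useAndAlwaysClose(Resource r, boolean failDuringWork) {
    try {
      if (failDuringWork) {
        throw new IOException("work failed");
      }
      return true;
    } catch (IOException e) {
      // Log and report failure, but do NOT skip the release below.
      return false;
    } finally {
      r.close();
    }
  }

  public static void main(String[] args) {
    Resource ok = new Resource();
    Resource bad = new Resource();
    useAndAlwaysClose(ok, false);
    useAndAlwaysClose(bad, true);
    // Both resources are closed regardless of the outcome.
    System.out.println(ok.closed && bad.closed);  // prints "true"
  }
}
```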
[jira] [Updated] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Ding Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Yuan updated HBASE-10452:
--

Attachment: HBase-10452-trunk.patch


[jira] [Updated] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10447:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and 0.98.  Thanks all for the reviews.

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10454) Tags presence file info can be wrong in HFiles when PrefixTree encoding is used

2014-02-04 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10454:
---

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to 0.98 and Trunk.  Thanks for the reviews

 Tags presence file info can be wrong in HFiles when PrefixTree encoding is 
 used
 ---

 Key: HBASE-10454
 URL: https://issues.apache.org/jira/browse/HBASE-10454
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10454.patch, HBASE-10454_V2.patch


 We always encode tags in the case of Prefix tree now, so the decoding code 
 path does not check what is in FileInfo. So functionally there are no issues 
 now.
 If we do HBASE-10453 this change will be very important to ensure backward 
 compatibility for old files.
 We use the file info MAX_TAGS_LEN to know the presence of tags in a file. In 
 the case of prefix tree we always have tags in files even if all kvs have 0 
 tags. So we should add this file info into prefix tree encoded HFiles; right 
 now it gets missed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10452:
---

Status: Patch Available  (was: Open)


[jira] [Commented] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890916#comment-13890916
 ] 

Ted Yu commented on HBASE-10452:


nit:
{code}
+  throw new RuntimeException("TimeRange failed, likely integer overflow. Preventing bad things to propagate.", e);
{code}
Please wrap long line - 100 characters limit.



[jira] [Commented] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890939#comment-13890939
 ] 

Hadoop QA commented on HBASE-10452:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12626898/HBase-10452-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12626898

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.
Here is snippet of errors:
{code}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java:[258,12]
 cannot find symbol
[ERROR] symbol  : class ReflectiveOperationException
[ERROR] location: class 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java:[277,12]
 cannot find symbol
[ERROR] symbol  : class ReflectiveOperationException
--
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-server: Compilation failure
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
--
Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
failure
at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729)
at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:128)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more{code}

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8590//console

This message is automatically generated.
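The compile failure above is because ReflectiveOperationException was introduced in Java 7, while the hadoop 1.0 profile builds with an older source level. A sketch of a Java-6-compatible alternative that catches the specific reflection exceptions instead of the Java-7-only supertype; the Holder class, its "end" field, and the fallback parameter are made up for illustration.

```java
import java.lang.reflect.Field;

public class ReflectionCompatExample {
  // Hypothetical class with a private field we read reflectively.
  static final class Holder {
    private final long end = 42L;
  }

  // Java-6-compatible: catch the specific reflection exceptions rather than
  // the Java-7-only ReflectiveOperationException supertype.
  static long readEndField(Object target, long fallback) {
    try {
      Field fEnd = target.getClass().getDeclaredField("end");
      fEnd.setAccessible(true);
      return fEnd.getLong(target);
    } catch (NoSuchFieldException e) {
      return fallback;  // field absent in this version of the class
    } catch (IllegalAccessException e) {
      return fallback;  // access denied; keep going with the fallback
    } catch (SecurityException e) {
      return fallback;  // a SecurityManager refused setAccessible
    }
  }

  public static void main(String[] args) {
    System.out.println(readEndField(new Holder(), -1L));  // prints "42"
  }
}
```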


[jira] [Commented] (HBASE-10455) cleanup InterruptedException management

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890946#comment-13890946
 ] 

Hudson commented on HBASE-10455:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #78 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/78/])
HBASE-10455 cleanup InterruptedException management (nkeywal: rev 1564241)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcCallback.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableLockManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java


 cleanup InterruptedException management
 ---

 Key: HBASE-10455
 URL: https://issues.apache.org/jira/browse/HBASE-10455
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver
Affects Versions: 0.98.0, 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10455.v1.patch


 4 changes in this code:
 1) When caught and rethrown as an IOException, we always rethrow an 
 InterruptedIOException.
 2) Where we were both throwing an exception AND resetting the interrupt 
 status, we now only throw the exception.
 3) Where we were trying to reset the status with Thread.interrupted() (which 
 does not do that), we now do it for real with 
 Thread.currentThread().interrupt().
 4) Sometimes we were rethrowing something other than an 
 InterruptedIOException when the contract would have allowed it. I've changed 
 this as well.
 This patch does not mean that we're fine when we're interrupted, but we're 
 globally cleaner at least. I will then create other patches specific to 
 some parts.
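Point 3 is worth a concrete illustration: Thread.interrupted() returns and CLEARS the interrupt flag, while Thread.currentThread().interrupt() re-sets it so callers further up the stack can still observe the interruption. A minimal sketch of the restore-the-flag idiom; the sleepQuietly helper is hypothetical, not actual HBase code.

```java
public class InterruptStatusExample {
  // Swallow the InterruptedException but preserve the thread's interrupt
  // flag for callers further up the stack.
  static boolean sleepQuietly(long millis) {
    try {
      Thread.sleep(millis);
      return true;
    } catch (InterruptedException e) {
      // Thread.interrupted() would CLEAR the flag; this restores it.
      Thread.currentThread().interrupt();
      return false;
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt();   // simulate an interruption
    boolean completed = sleepQuietly(10); // sleep throws immediately
    // The flag survives the catch block, so callers can still observe it.
    System.out.println(!completed && Thread.interrupted());  // prints "true"
  }
}
```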



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10461) table.close() in TableEventHandler#reOpenAllRegions() should be enclosed in finally block

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890945#comment-13890945
 ] 

Hudson commented on HBASE-10461:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #78 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/78/])
HBASE-10461 table.close() in TableEventHandler#reOpenAllRegions() should be 
enclosed in finally block (tedyu: rev 1564184)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java


 table.close() in TableEventHandler#reOpenAllRegions() should be enclosed in 
 finally block
 -

 Key: HBASE-10461
 URL: https://issues.apache.org/jira/browse/HBASE-10461
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Fix For: 0.98.0, 0.99.0

 Attachments: 10461-v1.txt


 If table.getRegionLocations() throws exception, table.close() would be 
 skipped, leaking the underlying resource
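The pattern behind the fix is to guarantee close() runs even when the lookup throws. A sketch using a hypothetical FakeTable stand-in (not the real HTable API); try-with-resources is equivalent to wrapping close() in a finally block, which is what the patch does.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Collections;
import java.util.List;

public class CloseInFinallyExample {
  // Hypothetical stand-in for HTable: getRegionLocations() may throw.
  static final class FakeTable implements Closeable {
    boolean closed;
    final boolean failing;
    FakeTable(boolean failing) { this.failing = failing; }
    List<String> getRegionLocations() throws IOException {
      if (failing) throw new IOException("meta scan failed");
      return Collections.singletonList("region-1");
    }
    @Override public void close() { closed = true; }
  }

  // try-with-resources closes the table even when the lookup throws, so the
  // underlying resource is never leaked.
  static int countRegions(FakeTable table) {
    try (FakeTable t = table) {
      return t.getRegionLocations().size();
    } catch (IOException e) {
      return -1;  // lookup failed, but the table was closed regardless
    }
  }

  public static void main(String[] args) {
    FakeTable failing = new FakeTable(true);
    int n = countRegions(failing);
    System.out.println(n == -1 && failing.closed);  // prints "true"
  }
}
```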



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890944#comment-13890944
 ] 

Hudson commented on HBASE-10319:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #78 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/78/])
HBASE-10319 HLog should roll periodically to allow DN decommission to 
eventually complete (mbertozzi: rev 1564237)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java


 HLog should roll periodically to allow DN decommission to eventually complete.
 --

 Key: HBASE-10319
 URL: https://issues.apache.org/jira/browse/HBASE-10319
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch


 We encountered a situation where we had an essentially read-only table and 
 attempted to do a clean HDFS DN decommission. DNs cannot decommission if 
 there are open blocks currently being written on them. Because the HBase 
 HLog file was open and had some data (the HLog header), the DN could not 
 decommission itself. Since no new data is ever written, the existing 
 periodic check is never activated.
 After discussing with [~atm], it seems that although an HDFS semantics 
 change would be ideal (e.g. HBase doesn't have to be aware of HDFS 
 decommission and the client would roll over), this would take much more 
 effort than having HBase periodically force a log roll. This would enable 
 the HDFS DN decommission to complete.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10277) refactor AsyncProcess

2014-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-10277:
-

Attachment: HBASE-10277.05.patch

Minor feedback from RB, rebase the patch.
Wrt moving error management for streaming mode out of AsyncProcess, for example 
into HTable, I think it might hurt performance because HTable will have to 
manage all these objects. Let me try to think if this can be avoided. 

Patch still ready -for +1- er, I mean for review :)

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.01.patch, HBASE-10277.02.patch, 
 HBASE-10277.03.patch, HBASE-10277.04.patch, HBASE-10277.05.patch, 
 HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; callback for success/failure is called based on original index 
 of an action in submitted list, but with only one callback supplied to AP in 
 ctor it's not clear to which submit call the index belongs, if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call part.
 Per-submit part can more conveniently track stuff like initialActions, 
 mapping of indexes and retry information, that is currently passed around the 
 method calls.
 -I am not sure yet, but maybe sending of the original index to server in 
 ClientProtos.MultiAction can also be avoided.- Cannot be avoided because 
 the API to server doesn't have one-to-one correspondence between requests and 
 responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)





[jira] [Commented] (HBASE-10463) Filter on columns containing numerics yield wrong results

2014-02-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890960#comment-13890960
 ] 

Nick Dimiduk commented on HBASE-10463:
--

FYI, 0.96+ includes the OrderedBytes encoding format. Values encoded using 
those methods are intended to order according to their natural order. Thus, the 
lexicographical byte-based comparators should work with them directly.
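To see why raw lexicographic comparison fails on numeric values and how an order-preserving encoding (the idea behind OrderedBytes) fixes it, here is a minimal sketch. It is not the actual OrderedBytes format, just a fixed-width big-endian encoding with the sign bit flipped:

```java
import java.nio.ByteBuffer;

class OrderPreservingEncoding {
    // Naive encoding: numbers as ASCII strings. "10" sorts before "9".
    static byte[] ascii(long n) { return Long.toString(n).getBytes(); }

    // Order-preserving sketch: fixed-width big-endian with the sign bit
    // flipped so negative values sort before positive ones.
    static byte[] orderPreserving(long n) {
        return ByteBuffer.allocate(8).putLong(n ^ Long.MIN_VALUE).array();
    }

    // Unsigned lexicographic byte comparison, as HBase comparators do.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```

With the ASCII encoding, compare(ascii(10), ascii(9)) is negative ("10" sorts before "9"), while the order-preserving encoding compares 10 above 9 and negatives below positives, so the existing byte comparators give the natural numeric order.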

 Filter on columns containing numerics yield wrong results
 -

 Key: HBASE-10463
 URL: https://issues.apache.org/jira/browse/HBASE-10463
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.8
Reporter: Deepa Vasanthkumar
   Original Estimate: 168h
  Remaining Estimate: 168h

 Used SingleColumnValueFilter with CompareFilter.CompareOp.GREATER_OR_EQUAL 
 for filtering the scan result. 
 However, for columns that hold numeric values the scan result is not correct, 
 because of the lexicographic comparison.
 Does HBase support numeric value filters (for equal, greater or equal, ...) for 
 columns? If not, can we add it?





[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread bharath v (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890989#comment-13890989
 ] 

bharath v commented on HBASE-10457:
---

Sorry, I confused the showFiles and showStats options. Attached a new version 
(v1) of the patch with the proposed changes.

 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only if 
 the user provides the -file option. This might sometimes mislead the user. This 
 patch prints the corrupt file information even without the -file option.





[jira] [Updated] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread bharath v (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bharath v updated HBASE-10457:
--

Attachment: HBASE-10457-trunk-v1.patch

 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only if 
 the user provides the -file option. This might sometimes mislead the user. This 
 patch prints the corrupt file information even without the -file option.





[jira] [Updated] (HBASE-10104) test-patch.sh doesn't need to test compilation against hadoop 1.0

2014-02-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10104:
---

Resolution: Later
Status: Resolved  (was: Patch Available)

 test-patch.sh doesn't need to test compilation against hadoop 1.0
 -

 Key: HBASE-10104
 URL: https://issues.apache.org/jira/browse/HBASE-10104
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 10104-v1.txt


 test-patch.sh performs compilation check against hadoop 1.0 and 1.1
 The compilation against hadoop 1.0 can be skipped.





[jira] [Updated] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10337:


Attachment: 10337.v1.patch

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.94.9, 0.99.0, 0.96.1.1
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Attachments: 10337.v1.patch


 I've got a stuck thread on HTable.get() that can't be interrupted. It looks like 
 it's designed to be interruptible but can't be interrupted in practice due 
 to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line 
 981: it catches InterruptedException, then goes right back to waiting due to 
 the while loop.
 It looks like future versions of the client (0.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Updated] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10337:


Fix Version/s: 0.99.0
   0.98.0
   Status: Patch Available  (was: Open)

Hum. While I think the code is globally correct, I'm interested in getting 
reviews on this. Mixing interrupts with synchronous I/O and asynchronous I/O is 
not trivial.

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.1.1, 0.94.9, 0.98.0, 0.99.0
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch


 I've got a stuck thread on HTable.get() that can't be interrupted. It looks like 
 it's designed to be interruptible but can't be interrupted in practice due 
 to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line 
 981: it catches InterruptedException, then goes right back to waiting due to 
 the while loop.
 It looks like future versions of the client (0.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13890999#comment-13890999
 ] 

Hudson commented on HBASE-10447:


SUCCESS: Integrated in HBase-TRUNK #4881 (See 
[https://builds.apache.org/job/HBase-TRUNK/4881/])
HBASE-10447-Memstore flusher scans storefiles also when the scanner heap gets 
resetKVHeap(Ram) (ramkrishna: rev 1564377)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of flush we create a memstore flusher which in turn creates a  
 StoreScanner backed by a Single ton MemstoreScanner.  
 But this scanner also registers for any updates in the reader in the HStore.  
 Is this needed?  
 If this happens then any update on the reader may nullify the current heap 
 and the entire Scanner Stack is reset, but this time with the other scanners 
 for all the files that satisfies the last top key.  So the flush that happens 
 on the memstore holds the storefile scanners also in the heap that was 
 recreated but originally the intention was to create a scanner on the 
 memstore alone.





[jira] [Commented] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13891002#comment-13891002
 ] 

Ted Yu commented on HBASE-10337:


{code}
+  if (curRetries >= maxRetries || ExceptionUtil.isInterrupt(ioe)) {
 throw ioe;
{code}
Can InterruptedIOException be thrown in case of interrupt?
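The pattern under discussion can be sketched as follows (illustrative names only, not the actual HBase client code): instead of catching InterruptedException and looping back into wait(), restore the interrupt status and surface it as an InterruptedIOException so the caller's get() actually returns:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

// Illustrative sketch: make a blocking wait abort on thread interrupt
// rather than swallowing the interrupt in a while loop.
class InterruptibleCall {
    static void waitForResult(Object callDone) throws IOException {
        synchronized (callDone) {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    callDone.wait(1000);
                    return; // result arrived (simplified)
                } catch (InterruptedException ie) {
                    // Preserve the interrupt status and abort the call
                    // instead of going back to wait().
                    Thread.currentThread().interrupt();
                    throw (IOException) new InterruptedIOException(
                        "Interrupted while waiting for call").initCause(ie);
                }
            }
            throw new InterruptedIOException("Thread already interrupted");
        }
    }
}
```

Since InterruptedIOException is a subclass of IOException, the existing `throw ioe` path propagates it without a signature change.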

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.94.9, 0.99.0, 0.96.1.1
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch


 I've got a stuck thread on HTable.get() that can't be interrupted. It looks like 
 it's designed to be interruptible but can't be interrupted in practice due 
 to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line 
 981: it catches InterruptedException, then goes right back to waiting due to 
 the while loop.
 It looks like future versions of the client (0.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Updated] (HBASE-10389) Add namespace help info in table related shell commands

2014-02-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10389:
-

Status: Open  (was: Patch Available)

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.1, 0.96.0
Reporter: Jerry He
Assignee: Jerry He
 Attachments: HBASE-10389-trunk.patch


 Currently, in the help info of the table-related shell commands, we don't mention 
 or show the namespace as part of the table name.  
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
 CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
 You can also keep around a reference to the created table:
   hbase> t1 = create 't1', 'f1'
 Which gives you a reference to the table named 't1', on which you can then
 call methods.
 {code}
 We should document the usage of namespace in these commands.
 For example:
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 #namespace=default and table qualifier=bar
 create 'bar', 'fam'





[jira] [Updated] (HBASE-10389) Add namespace help info in table related shell commands

2014-02-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10389:
-

Attachment: (was: HBASE-10389-trunk.patch)

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.0, 0.96.1
Reporter: Jerry He
Assignee: Jerry He

 Currently, in the help info of the table-related shell commands, we don't mention 
 or show the namespace as part of the table name.  
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
 CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
 You can also keep around a reference to the created table:
   hbase> t1 = create 't1', 'f1'
 Which gives you a reference to the table named 't1', on which you can then
 call methods.
 {code}
 We should document the usage of namespace in these commands.
 For example:
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 #namespace=default and table qualifier=bar
 create 'bar', 'fam'





[jira] [Updated] (HBASE-10389) Add namespace help info in table related shell commands

2014-02-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10389:
-

Attachment: HBASE-10389-trunk.patch

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.0, 0.96.1
Reporter: Jerry He
Assignee: Jerry He
 Attachments: HBASE-10389-trunk.patch


 Currently, in the help info of the table-related shell commands, we don't mention 
 or show the namespace as part of the table name.  
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
 CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
 You can also keep around a reference to the created table:
   hbase> t1 = create 't1', 'f1'
 Which gives you a reference to the table named 't1', on which you can then
 call methods.
 {code}
 We should document the usage of namespace in these commands.
 For example:
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 #namespace=default and table qualifier=bar
 create 'bar', 'fam'





[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13891021#comment-13891021
 ] 

Matteo Bertozzi commented on HBASE-10457:
-

+1 looks good to me 
Thanks for the patch!

 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only if 
 the user provides the -file option. This might sometimes mislead the user. This 
 patch prints the corrupt file information even without the -file option.





[jira] [Updated] (HBASE-10389) Add namespace help info in table related shell commands

2014-02-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10389:
-

Status: Patch Available  (was: Open)

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.1, 0.96.0
Reporter: Jerry He
Assignee: Jerry He
 Attachments: HBASE-10389-trunk.patch


 Currently, in the help info of the table-related shell commands, we don't mention 
 or show the namespace as part of the table name.  
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
 CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
 You can also keep around a reference to the created table:
   hbase> t1 = create 't1', 'f1'
 Which gives you a reference to the table named 't1', on which you can then
 call methods.
 {code}
 We should document the usage of namespace in these commands.
 For example:
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 #namespace=default and table qualifier=bar
 create 'bar', 'fam'





[jira] [Commented] (HBASE-10389) Add namespace help info in table related shell commands

2014-02-04 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13891017#comment-13891017
 ] 

Jerry He commented on HBASE-10389:
--

Re-formatted the patch, and re-attached it.

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.0, 0.96.1
Reporter: Jerry He
Assignee: Jerry He
 Attachments: HBASE-10389-trunk.patch


 Currently, in the help info of the table-related shell commands, we don't mention 
 or show the namespace as part of the table name.  
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
 CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
 You can also keep around a reference to the created table:
   hbase> t1 = create 't1', 'f1'
 Which gives you a reference to the table named 't1', on which you can then
 call methods.
 {code}
 We should document the usage of namespace in these commands.
 For example:
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 #namespace=default and table qualifier=bar
 create 'bar', 'fam'





[jira] [Updated] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10457:


Fix Version/s: 0.94.18
   0.96.2
   0.98.0

 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only if 
 the user provides the -file option. This might sometimes mislead the user. This 
 patch prints the corrupt file information even without the -file option.





[jira] [Updated] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10457:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only if 
 the user provides the -file option. This might sometimes mislead the user. This 
 patch prints the corrupt file information even without the -file option.





[jira] [Updated] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Ding Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Yuan updated HBASE-10452:
--

Attachment: HBase-10452-trunk-v2.patch

 Potential bugs in exception handlers
 

 Key: HBASE-10452
 URL: https://issues.apache.org/jira/browse/HBASE-10452
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver, util
Affects Versions: 0.96.1
Reporter: Ding Yuan
 Attachments: HBase-10452-trunk-v2.patch, HBase-10452-trunk.patch


 Hi HBase developers,
 We are a group of researchers working on software reliability. Recently we did a 
 study and found that the majority of the most severe failures in HBase are caused 
 by bugs in exception-handling logic -- it is hard to anticipate all the 
 possible real-world error scenarios. Therefore we built a simple checking 
 tool that automatically detects some bug patterns that have caused some very 
 severe real-world failures. I am reporting some of the results here. Any 
 feedback is much appreciated!
 Ding
 =
 Case 1:
   Line: 134, File: 
 org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java
 {noformat}
   protected void releaseTableLock() {
     if (this.tableLock != null) {
       try {
         this.tableLock.release();
       } catch (IOException ex) {
         LOG.warn("Could not release the table lock", ex);
         // TODO: if we get here and do not abort the RS, this lock will
         // never be released
       }
     }
   }
 {noformat}
 The lock is not released if the exception occurs, causing potential deadlock 
 or starvation.
 Similar code pattern can be found at:
   Line: 135, File: org/apache/hadoop/hbase/regionserver/SplitRequest.java
 ==
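One way to address Case 1 can be sketched as follows. This is an illustrative sketch only; the Lock interface and the escalation choice are hypothetical, not the HBase API (the real fix depends on RegionServer abort semantics):

```java
// Illustrative sketch: escalate a failed lock release instead of
// swallowing it, so the failure surfaces to the caller rather than
// leaving the lock held forever.
class TableLockHolder {
    interface Lock { void release() throws java.io.IOException; }

    private final Lock tableLock;

    TableLockHolder(Lock lock) { this.tableLock = lock; }

    void releaseTableLock() {
        if (tableLock == null) return;
        try {
            tableLock.release();
        } catch (java.io.IOException ex) {
            // A lock that is never released can starve or deadlock other
            // operations, so make the failure loud.
            throw new IllegalStateException("Could not release table lock", ex);
        }
    }
}
```

The caller (or an abort handler) then decides whether to retry or shut the process down, instead of the failure disappearing into a log line.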
 =
 Case 2:
   Line: 252, File: 
 org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
 {noformat}
 try {
   Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
   fEnd.setAccessible(true);
   end = fEnd.getLong(this.reader);
 } catch(Exception e) { /* reflection fail. keep going */ }
 {noformat}
 The caught Exception seems to be too general.
 While reflection-related errors might be harmless, the try block can throw
 other exceptions including SecurityException, IllegalAccessException, 
 etc. Currently
 all those exceptions are ignored. Maybe
 the safe way is to ignore the specific reflection-related errors while 
 logging and
 handling other types of unexpected exceptions.
 ==
 =
 Case 3:
   Line: 148, File: org/apache/hadoop/hbase/HBaseConfiguration.java
 {noformat}
 try {
   if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
 isShowConf = true;
   }
 } catch (Exception e) {
 }
 {noformat}
 Similar to the previous case, the exception handling is too general. While a 
 ClassNotFound error might be the normal case and can be ignored, Class.forName can 
 also throw other exceptions (e.g., LinkageError) under some unexpected and 
 rare error cases. If that happens, the error will be lost. So maybe change it 
 to the below:
 {noformat}
 try {
   if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
 isShowConf = true;
   }
 } catch (ExceptionInInitializerError e) {
   LOG.warn(..);
   // handle initializer error
 } catch (LinkageError e) {
   LOG.warn(..);
   // handle linkage error
 } catch (ClassNotFoundException e) {
  LOG.debug(..);
  // ignore
 }
 {noformat}
 ==
 =
 Case 4:
   Line: 163, File: org/apache/hadoop/hbase/client/Get.java
 {noformat}
   public Get setTimeStamp(long timestamp) {
 try {
   tr = new TimeRange(timestamp, timestamp+1);
 } catch(IOException e) {
   // Will never happen
 }
 return this;
   }
 {noformat}
 Even if the IOException never happens right now, could it happen in the
 future due to a code change?
 At least there should be a log message. The current behavior is dangerous:
 if the exception ever happens in some unexpected scenario, it will be
 silently swallowed.
 Similar code pattern can be found at:
   Line: 300, File: org/apache/hadoop/hbase/client/Scan.java
 =================================================================
 Case 5:
   Line: 207, File: org/apache/hadoop/hbase/util/JVM.java
 {noformat}
if (input != null){
 try {
   input.close();
 } catch (IOException ignored) {
 }
   }
 {noformat}
 Any exception encountered in close is completely ignored, not even logged.
 In particular, the same exception scenario was handled differently in other 
 methods in the same file:
 Line: 154, same file
 {noformat}
if (in != null){
  

[jira] [Commented] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Ding Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891109#comment-13891109
 ] 

Ding Yuan commented on HBASE-10452:
---

Thanks Ted! Fixed both the line wrapping and the compilation error on java 6.  

 Potential bugs in exception handlers
 

 Key: HBASE-10452
 URL: https://issues.apache.org/jira/browse/HBASE-10452
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver, util
Affects Versions: 0.96.1
Reporter: Ding Yuan
 Attachments: HBase-10452-trunk-v2.patch, HBase-10452-trunk.patch


 Hi HBase developers,
 We are a group of researchers on software reliability. Recently we did a 
 study and found that majority of the most severe failures in HBase are caused 
 by bugs in exception handling logic -- that it is hard to anticipate all the 
 possible real-world error scenarios. Therefore we built a simple checking 
 tool that automatically detects some bug patterns that have caused some very 
 severe real-world failures. I am reporting some of the results here. Any 
 feedback is much appreciated!
 Ding
 =================================================================
 Case 1:
   Line: 134, File: 
 org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java
 {noformat}
   protected void releaseTableLock() {
 if (this.tableLock != null) {
   try {
 this.tableLock.release();
   } catch (IOException ex) {
 LOG.warn("Could not release the table lock", ex);
 //TODO: if we get here, and not abort RS, this lock will never be 
 released
   }
 }
 {noformat}
 The lock is not released if the exception occurs, causing potential deadlock 
 or starvation.
 Similar code pattern can be found at:
   Line: 135, File: org/apache/hadoop/hbase/regionserver/SplitRequest.java
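 A possible shape for the fix, sketched with a stand-in TableLock interface
 (not the real HBase class): report the failure to the caller so it can
 escalate, e.g. abort the region server, rather than silently leaving the
 lock held:

```java
import java.io.IOException;

public class LockReleaseSketch {
    // Stand-in for the real table lock; only release() matters here.
    interface TableLock {
        void release() throws IOException;
    }

    // Return false on failure so the caller can escalate (e.g. abort the
    // region server) instead of leaving the lock held forever.
    static boolean releaseTableLock(TableLock tableLock) {
        if (tableLock == null) {
            return true; // nothing to release
        }
        try {
            tableLock.release();
            return true;
        } catch (IOException ex) {
            System.err.println("Could not release the table lock: " + ex);
            return false;
        }
    }

    public static void main(String[] args) {
        boolean ok = releaseTableLock(() -> {
            throw new IOException("lock service unavailable");
        });
        System.out.println(ok ? "released" : "release failed, escalate");
    }
}
```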
 =================================================================
 Case 2:
   Line: 252, File: 
 org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
 {noformat}
 try {
   Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
   fEnd.setAccessible(true);
   end = fEnd.getLong(this.reader);
 } catch(Exception e) { /* reflection fail. keep going */ }
 {noformat}
 Catching Exception here seems too general. While reflection-related failures
 might be harmless, the try block can also throw other exceptions, including
 SecurityException, IllegalAccessException, etc., and currently all of them
 are ignored. A safer approach may be to ignore (while logging) only the
 specific reflection-related exceptions and handle other unexpected
 exceptions explicitly.
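 A sketch of the narrower handler, using a hypothetical stand-in Reader class
 instead of the real SequenceFile.Reader: only the expected reflection
 failures are logged and swallowed, while anything else propagates:

```java
import java.lang.reflect.Field;

public class ReflectionSketch {
    // Stand-in for SequenceFile.Reader with a private 'end' field.
    static class Reader {
        private long end = 42L;
    }

    static long readEnd(Reader reader, long fallback) {
        try {
            Field fEnd = Reader.class.getDeclaredField("end");
            fEnd.setAccessible(true);
            return fEnd.getLong(reader);
        } catch (NoSuchFieldException | IllegalAccessException e) {
            // Expected reflection failures: log and keep going with a
            // fallback. SecurityException and anything else still propagate.
            System.err.println("reflection failed, keeping going: " + e);
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(readEnd(new Reader(), -1L)); // prints 42
    }
}
```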
 =================================================================
 Case 3:
   Line: 148, File: org/apache/hadoop/hbase/HBaseConfiguration.java
 {noformat}
 try {
   if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
 isShowConf = true;
   }
 } catch (Exception e) {
 }
 {noformat}
 Similar to the previous case, the exception handling is too general. While
 ClassNotFoundException may be the normal case and safely ignored,
 Class.forName can also throw other throwables (e.g., LinkageError) in rare,
 unexpected cases. If that happens, the error will be lost. So maybe change
 it to something like:
 {noformat}
 try {
   if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
     isShowConf = true;
   }
 } catch (ExceptionInInitializerError e) { // must precede its superclass LinkageError
   LOG.warn(..);
   // handle initializer error
 } catch (LinkageError e) {
   LOG.warn(..);
   // handle linkage error
 } catch (ClassNotFoundException e) {
   LOG.debug(..);
   // ignore
 }
 {noformat}
 =================================================================
 Case 4:
   Line: 163, File: org/apache/hadoop/hbase/client/Get.java
 {noformat}
   public Get setTimeStamp(long timestamp) {
 try {
   tr = new TimeRange(timestamp, timestamp+1);
 } catch(IOException e) {
   // Will never happen
 }
 return this;
   }
 {noformat}
 Even if the IOException never happens right now, could it happen in the
 future due to a code change?
 At least there should be a log message. The current behavior is dangerous:
 if the exception ever happens in some unexpected scenario, it will be
 silently swallowed.
 Similar code pattern can be found at:
   Line: 300, File: org/apache/hadoop/hbase/client/Scan.java
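 A minimal sketch of the suggested logging, with a stand-in TimeRange class
 (the real constructor's checks may differ): the exception is still treated
 as impossible for (ts, ts + 1), but if a future change breaks that
 assumption the failure at least leaves a trace:

```java
import java.io.IOException;

public class GetSketch {
    // Stand-in for TimeRange; throws if the range is inverted.
    static class TimeRange {
        final long min, max;
        TimeRange(long min, long max) throws IOException {
            if (max < min) throw new IOException("max < min");
            this.min = min;
            this.max = max;
        }
    }

    TimeRange tr;

    GetSketch setTimeStamp(long timestamp) {
        try {
            tr = new TimeRange(timestamp, timestamp + 1);
        } catch (IOException e) {
            // "Will never happen" today, but log it so a future code change
            // that breaks the assumption is not silently swallowed.
            System.err.println("unexpected failure building TimeRange: " + e);
        }
        return this;
    }

    public static void main(String[] args) {
        GetSketch g = new GetSketch().setTimeStamp(100L);
        System.out.println(g.tr.min + ".." + g.tr.max); // prints 100..101
    }
}
```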
 =================================================================
 Case 5:
   Line: 207, File: org/apache/hadoop/hbase/util/JVM.java
 {noformat}
if (input != null){
 try {
   input.close();
 } catch (IOException ignored) {
 }
   }
 {noformat}
 Any exception encountered in close is completely ignored, not even logged.
 In particular, the same exception scenario was handled differently in other 

[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-02-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891116#comment-13891116
 ] 

Hadoop QA commented on HBASE-10277:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626923/HBASE-10277.05.patch
  against trunk revision .
  ATTACHMENT ID: 12626923

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  // This action failed before creating ars. Add it to retained 
but do not add to submit list.
+ Retrying. Server is  + server.getServerName() + , 
tableName= + tableName, t);
+ Throwable error, long backOffTime, boolean 
willRetry, String startTime){

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8591//console

This message is automatically generated.

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.01.patch, HBASE-10277.02.patch, 
 HBASE-10277.03.patch, HBASE-10277.04.patch, HBASE-10277.05.patch, 
 HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; callback for success/failure is called based on original index 
 of an action in submitted list, but with only one callback supplied to AP in 
 ctor it's not clear to which submit call the index belongs, if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call 

[jira] [Updated] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

2014-02-04 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10319:
--

Fix Version/s: (was: 0.94.18)
   0.94.17

 HLog should roll periodically to allow DN decommission to eventually complete.
 --

 Key: HBASE-10319
 URL: https://issues.apache.org/jira/browse/HBASE-10319
 Project: HBase
  Issue Type: Bug
Reporter: Jonathan Hsieh
Assignee: Matteo Bertozzi
 Fix For: 0.98.0, 0.96.2, 0.94.17

 Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch


 We encountered a situation where we had an essentially read-only table and
 attempted to do a clean HDFS DN decommission. A DN cannot decommission if
 it currently holds open blocks that are being written to. Because the HBase
 HLog file was open and contained some data (the HLog header), the DN could
 not decommission itself. Since no new data is ever written, the existing
 periodic check is never activated.
 After discussing with [~atm], it seems that although an HDFS semantics change
 would be ideal (e.g. HBase wouldn't have to be aware of HDFS decommission and
 the client would roll over), it would take much more effort than having HBase
 periodically force a log roll. This would let the HDFS DN decommission
 complete.
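 The age-based check the patch points toward could look roughly like the
 sketch below; the names (ROLL_PERIOD_MS, shouldRoll, lastRollTimeMs) are
 made up for illustration and are not the real HBase API or configuration
 keys:

```java
public class PeriodicRollSketch {
    // Hypothetical roll period; not a real HBase configuration key.
    static final long ROLL_PERIOD_MS = 3_600_000L; // one hour

    long lastRollTimeMs;

    // Roll purely on age: even an idle WAL eventually closes its open
    // block, letting an HDFS DataNode decommission complete.
    boolean shouldRoll(long nowMs) {
        return nowMs - lastRollTimeMs > ROLL_PERIOD_MS;
    }

    public static void main(String[] args) {
        PeriodicRollSketch wal = new PeriodicRollSketch();
        wal.lastRollTimeMs = 0L;
        System.out.println(wal.shouldRoll(3_600_001L)); // prints true
        System.out.println(wal.shouldRoll(1_000L));     // prints false
    }
}
```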



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891126#comment-13891126
 ] 

Lars Hofhansl commented on HBASE-10447:
---

Probably needs to go into 0.96. [~stack]

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 During a flush we create a memstore flusher, which in turn creates a
 StoreScanner backed by a singleton MemstoreScanner.
 But this scanner also registers for any updates to the reader in the HStore.
 Is this needed?
 If it does, any update on the reader may nullify the current heap and reset
 the entire scanner stack, but this time with the other scanners for all the
 files that satisfy the last top key. So the flush that happens on the
 memstore also holds the storefile scanners in the recreated heap, although
 the original intention was to create a scanner on the memstore alone.





[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891141#comment-13891141
 ] 

Hudson commented on HBASE-10447:


SUCCESS: Integrated in HBase-0.98 #127 (See 
[https://builds.apache.org/job/HBase-0.98/127/])
HBASE-10447-Memstore flusher scans storefiles also when the scanner heap gets 
resetKVHeap(Ram) (ramkrishna: rev 1564378)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 During a flush we create a memstore flusher, which in turn creates a
 StoreScanner backed by a singleton MemstoreScanner.
 But this scanner also registers for any updates to the reader in the HStore.
 Is this needed?
 If it does, any update on the reader may nullify the current heap and reset
 the entire scanner stack, but this time with the other scanners for all the
 files that satisfy the last top key. So the flush that happens on the
 memstore also holds the storefile scanners in the recreated heap, although
 the original intention was to create a scanner on the memstore alone.





[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891147#comment-13891147
 ] 

Hadoop QA commented on HBASE-10457:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12626928/HBASE-10457-trunk-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12626928

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8592//console

This message is automatically generated.

 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently SnapshotInfo tool prints the corrupted snapshot information only if 
 the user provides -file options. This might mislead the user sometimes. This 
 patch prints the corrupt files information even without the -file option.





[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-02-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891152#comment-13891152
 ] 

Nicolas Liochon commented on HBASE-10277:
-

I published the review. My comments are mainly cosmetic; we discussed all the
main points previously. I'm +1 (with my comments taken into account,
obviously ;-)). Thanks, Sergey.

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.01.patch, HBASE-10277.02.patch, 
 HBASE-10277.03.patch, HBASE-10277.04.patch, HBASE-10277.05.patch, 
 HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; callback for success/failure is called based on original index 
 of an action in submitted list, but with only one callback supplied to AP in 
 ctor it's not clear to which submit call the index belongs, if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call part.
 Per-submit part can more conveniently track stuff like initialActions, 
 mapping of indexes and retry information, that is currently passed around the 
 method calls.
 -I am not sure yet, but maybe sending of the original index to server in 
 ClientProtos.MultiAction can also be avoided.- Cannot be avoided because 
 the API to server doesn't have one-to-one correspondence between requests and 
 responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)





[jira] [Commented] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891157#comment-13891157
 ] 

Nicolas Liochon commented on HBASE-10337:
-

bq. Can InterruptedIOException be thrown in case of interrupt ?
Yes, that's the intent.

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.94.9, 0.99.0, 0.96.1.1
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch


 I've got a stuck thread on HTable.get() that can't be interrupted; it looks
 like it's designed to be interruptible but can't be interrupted in practice
 due to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line
 981: it catches InterruptedException then goes right back to waiting due to
 the while loop.
 It looks like future versions of the client (.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.
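 The fix direction can be sketched as follows (a heavily simplified stand-in
 for the real HBaseClient.call() loop, not its actual code): on
 InterruptedException, restore the interrupt flag and throw
 InterruptedIOException rather than looping back into wait():

```java
import java.io.InterruptedIOException;
import java.util.concurrent.atomic.AtomicReference;

public class InterruptibleCallSketch {
    // Simplified stand-in for the wait loop in HBaseClient.call().
    static String waitForResult(Object lock, AtomicReference<String> result)
            throws InterruptedIOException {
        synchronized (lock) {
            while (result.get() == null) {
                try {
                    lock.wait(1000);
                } catch (InterruptedException e) {
                    // Restore the flag and surface the interruption instead
                    // of going straight back to waiting.
                    Thread.currentThread().interrupt();
                    throw new InterruptedIOException("call interrupted");
                }
            }
            return result.get();
        }
    }

    public static void main(String[] args) throws Exception {
        Object lock = new Object();
        System.out.println(waitForResult(lock, new AtomicReference<>("done")));

        Thread.currentThread().interrupt(); // pending interrupt
        try {
            waitForResult(lock, new AtomicReference<>());
        } catch (InterruptedIOException e) {
            System.out.println("interrupted as expected");
        }
    }
}
```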





[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891177#comment-13891177
 ] 

Hudson commented on HBASE-10447:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #119 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/119/])
HBASE-10447-Memstore flusher scans storefiles also when the scanner heap gets 
resetKVHeap(Ram) (ramkrishna: rev 1564378)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch, 
 HBASE-10447_trunk_1.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 During a flush we create a memstore flusher, which in turn creates a
 StoreScanner backed by a singleton MemstoreScanner.
 But this scanner also registers for any updates to the reader in the HStore.
 Is this needed?
 If it does, any update on the reader may nullify the current heap and reset
 the entire scanner stack, but this time with the other scanners for all the
 files that satisfy the last top key. So the flush that happens on the
 memstore also holds the storefile scanners in the recreated heap, although
 the original intention was to create a scanner on the memstore alone.





[jira] [Commented] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891183#comment-13891183
 ] 

Hadoop QA commented on HBASE-10337:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626931/10337.v1.patch
  against trunk revision .
  ATTACHMENT ID: 12626931

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestCheckTestClasses

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8593//console

This message is automatically generated.

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.94.9, 0.99.0, 0.96.1.1
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch


 I've got a stuck thread on HTable.get() that can't be interrupted; it looks
 like it's designed to be interruptible but can't be interrupted in practice
 due to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line
 981: it catches InterruptedException then goes right back to waiting due to
 the while loop.
 It looks like future versions of the client (.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Updated] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10337:


Attachment: 10337.v2.patch

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.94.9, 0.99.0, 0.96.1.1
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch, 10337.v2.patch


 I've got a stuck thread on HTable.get() that can't be interrupted; it looks
 like it's designed to be interruptible but can't be interrupted in practice
 due to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line
 981: it catches InterruptedException then goes right back to waiting due to
 the while loop.
 It looks like future versions of the client (.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Updated] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10337:


Status: Open  (was: Patch Available)

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.1.1, 0.94.9, 0.98.0, 0.99.0
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch, 10337.v2.patch


 I've got a stuck thread on HTable.get() that can't be interrupted; it looks
 like it's designed to be interruptible but can't be interrupted in practice
 due to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line
 981: it catches InterruptedException then goes right back to waiting due to
 the while loop.
 It looks like future versions of the client (.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Commented] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891190#comment-13891190
 ] 

Nicolas Liochon commented on HBASE-10337:
-

v2 == with the test category

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.94.9, 0.99.0, 0.96.1.1
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch, 10337.v2.patch


 I've got a stuck thread on HTable.get() that can't be interrupted. It looks 
 like it's designed to be interruptible but can't be interrupted in practice 
 due to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line 
 981; it catches InterruptedException and then goes right back to waiting due 
 to the while loop.
 It looks like future versions of the client (.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Updated] (HBASE-10337) HTable.get() uninteruptible

2014-02-04 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10337:


Status: Patch Available  (was: Open)

 HTable.get() uninteruptible
 ---

 Key: HBASE-10337
 URL: https://issues.apache.org/jira/browse/HBASE-10337
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.96.1.1, 0.94.9, 0.98.0, 0.99.0
Reporter: Jonathan Leech
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.99.0

 Attachments: 10337.v1.patch, 10337.v2.patch


 I've got a stuck thread on HTable.get() that can't be interrupted. It looks 
 like it's designed to be interruptible but can't be interrupted in practice 
 due to a while loop.
 The offending code is in org.apache.hadoop.hbase.ipc.HBaseClient.call() line 
 981; it catches InterruptedException and then goes right back to waiting due 
 to the while loop.
 It looks like future versions of the client (.95+) are significantly 
 different and might not have this problem... Not sure about release schedules 
 etc. or if this version is still getting patched.





[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891194#comment-13891194
 ] 

Hudson commented on HBASE-10457:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #9 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/9/])
HBASE-10457 Print corrupted file information in SnapshotInfo tool without -file 
option (Bharath Vissapragada) (mbertozzi: rev 1564429)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only 
 if the user provides the -file option. This might sometimes mislead the user. 
 This patch prints the corrupt-file information even without the -file option.





[jira] [Commented] (HBASE-10454) Tags presence file info can be wrong in HFiles when PrefixTree encoding is used

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891211#comment-13891211
 ] 

Hudson commented on HBASE-10454:


SUCCESS: Integrated in HBase-TRUNK #4882 (See 
[https://builds.apache.org/job/HBase-TRUNK/4882/])
HBASE-10454 Tags presence file info can be wrong in HFiles when PrefixTree 
encoding is used. (anoopsamjohn: rev 1564384)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java


 Tags presence file info can be wrong in HFiles when PrefixTree encoding is 
 used
 ---

 Key: HBASE-10454
 URL: https://issues.apache.org/jira/browse/HBASE-10454
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10454.patch, HBASE-10454_V2.patch


 We always encode tags when PrefixTree is used now, so the decoding code path 
 does not check what is in FileInfo; functionally there is no issue at the 
 moment.
 If we do HBASE-10453, this change will be very important to ensure backward 
 compatibility (BC) for old files.
 We use the file info MAX_TAGS_LEN to detect the presence of tags in a file. In 
 case of prefix tree we always have tags in files even if all KVs have 0 tags, 
 so we should add this file info into prefix tree encoded HFiles as well. 
 Currently it gets missed.
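The idea can be sketched with a minimal, self-contained example. This is not the actual HFileWriterV3 code; the method and parameter names are hypothetical. It only illustrates the decision the issue describes: record the MAX_TAGS_LEN file-info entry whenever PrefixTree encoding is in use, even if no KV carried a tag.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the fix direction, not the committed patch.
class TagsFileInfoSketch {
  static final String MAX_TAGS_LEN = "MAX_TAGS_LEN";

  static Map<String, Integer> buildFileInfo(boolean prefixTreeEncoding, int maxTagsLen) {
    Map<String, Integer> fileInfo = new HashMap<>();
    // Old behavior: the entry was recorded only when some KV actually had tags.
    // Fixed behavior: also record it whenever PrefixTree encoding is used,
    // because that encoder always stores tag lengths in the file.
    if (maxTagsLen > 0 || prefixTreeEncoding) {
      fileInfo.put(MAX_TAGS_LEN, maxTagsLen);
    }
    return fileInfo;
  }
}
```

With this, readers of an old PrefixTree-encoded file can rely on the file info instead of assuming tags are present.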





[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891259#comment-13891259
 ] 

Hudson commented on HBASE-10457:


SUCCESS: Integrated in HBase-0.94-security #399 (See 
[https://builds.apache.org/job/HBase-0.94-security/399/])
HBASE-10457 Print corrupted file information in SnapshotInfo tool without -file 
option (Bharath Vissapragada) (mbertozzi: rev 1564429)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only 
 if the user provides the -file option. This might sometimes mislead the user. 
 This patch prints the corrupt-file information even without the -file option.





[jira] [Created] (HBASE-10464) Race condition during RS shutdown that could cause data loss

2014-02-04 Thread Yunfan Zhong (JIRA)
Yunfan Zhong created HBASE-10464:


 Summary: Race condition during RS shutdown that could cause data 
loss
 Key: HBASE-10464
 URL: https://issues.apache.org/jira/browse/HBASE-10464
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89-fb
Reporter: Yunfan Zhong
Priority: Critical
 Fix For: 0.89-fb


Bug scenario (T* are timestamps, say T1 < T2 < ... < Tn):
1. Master assigns a region to RS at T1
2. RS works on opening the region during T1 to T3
3. In the mean time of opening the region, RS starts to shut down at T2, and 
dfs client is closed at T5.
4. Regions owned by the RS get closed as a step of RS shutdown except that the 
newly opened region is online during T3 to T5 and holds some mutations in 
memory after possible last flush T4.
5. Since master thinks RS has a clean shutdown, there is no log splitting. The 
HLog was moved to old logs directory naturally.
6. Mutations in memory between T4 to T5 (if T4 does not exist, T3 to T5) are 
not flushed. They only exist in WAL if it is turned on.

Fix is to prevent region opening from succeeding when the RS is shutting down.
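The fix direction can be sketched like this. The names are hypothetical and this is not the attached D1120497.diff; it only illustrates the guard described above: once the region server begins shutting down, a region open must not complete, so a "clean" shutdown can never leave a just-opened region holding unflushed mutations.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the fix idea; class and method names are made up.
class RegionOpenGuard {
  private final AtomicBoolean stopping = new AtomicBoolean(false);

  void startShutdown() {
    stopping.set(true);
  }

  /** Returns false (open aborted) if the server is already stopping. */
  boolean finishRegionOpen(String regionName) {
    if (stopping.get()) {
      // Abort the open: the master will reassign the region elsewhere and
      // its WAL will be split, so no edits are lost.
      return false;
    }
    // ... mark the region online here ...
    return true;
  }
}
```

The essential point is that the stopping flag is checked at the last step of opening, closing the T2-to-T3 window in the scenario above.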





[jira] [Updated] (HBASE-10464) Race condition during RS shutdown that could cause data loss

2014-02-04 Thread Yunfan Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunfan Zhong updated HBASE-10464:
-

Status: Patch Available  (was: Open)

 Race condition during RS shutdown that could cause data loss
 

 Key: HBASE-10464
 URL: https://issues.apache.org/jira/browse/HBASE-10464
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89-fb
Reporter: Yunfan Zhong
Priority: Critical
 Fix For: 0.89-fb


 Bug scenario (T* are timestamps, say T1 < T2 < ... < Tn):
 1. Master assigns a region to RS at T1
 2. RS works on opening the region during T1 to T3
 3. In the mean time of opening the region, RS starts to shut down at T2, and 
 dfs client is closed at T5.
 4. Regions owned by the RS get closed as a step of RS shutdown except that 
 the newly opened region is online during T3 to T5 and holds some mutations in 
 memory after possible last flush T4.
 5. Since master thinks RS has a clean shutdown, there is no log splitting. 
 The HLog was moved to old logs directory naturally.
 6. Mutations in memory between T4 to T5 (if T4 does not exist, T3 to T5) are 
 not flushed. They only exist in WAL if it is turned on.
 Fix is to prevent region opening from succeeding when the RS is shutting down.





[jira] [Updated] (HBASE-10464) Race condition during RS shutdown that could cause data loss

2014-02-04 Thread Yunfan Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunfan Zhong updated HBASE-10464:
-

Status: Open  (was: Patch Available)

 Race condition during RS shutdown that could cause data loss
 

 Key: HBASE-10464
 URL: https://issues.apache.org/jira/browse/HBASE-10464
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89-fb
Reporter: Yunfan Zhong
Priority: Critical
 Fix For: 0.89-fb


 Bug scenario (T* are timestamps, say T1 < T2 < ... < Tn):
 1. Master assigns a region to RS at T1
 2. RS works on opening the region during T1 to T3
 3. In the mean time of opening the region, RS starts to shut down at T2, and 
 dfs client is closed at T5.
 4. Regions owned by the RS get closed as a step of RS shutdown except that 
 the newly opened region is online during T3 to T5 and holds some mutations in 
 memory after possible last flush T4.
 5. Since master thinks RS has a clean shutdown, there is no log splitting. 
 The HLog was moved to old logs directory naturally.
 6. Mutations in memory between T4 to T5 (if T4 does not exist, T3 to T5) are 
 not flushed. They only exist in WAL if it is turned on.
 Fix is to prevent region opening from succeeding when the RS is shutting down.





[jira] [Updated] (HBASE-10464) Race condition during RS shutdown that could cause data loss

2014-02-04 Thread Yunfan Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunfan Zhong updated HBASE-10464:
-

Attachment: D1120497.diff

 Race condition during RS shutdown that could cause data loss
 

 Key: HBASE-10464
 URL: https://issues.apache.org/jira/browse/HBASE-10464
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89-fb
Reporter: Yunfan Zhong
Priority: Critical
 Fix For: 0.89-fb

 Attachments: D1120497.diff


 Bug scenario (T* are timestamps, say T1 < T2 < ... < Tn):
 1. Master assigns a region to RS at T1
 2. RS works on opening the region during T1 to T3
 3. In the mean time of opening the region, RS starts to shut down at T2, and 
 dfs client is closed at T5.
 4. Regions owned by the RS get closed as a step of RS shutdown except that 
 the newly opened region is online during T3 to T5 and holds some mutations in 
 memory after possible last flush T4.
 5. Since master thinks RS has a clean shutdown, there is no log splitting. 
 The HLog was moved to old logs directory naturally.
 6. Mutations in memory between T4 to T5 (if T4 does not exist, T3 to T5) are 
 not flushed. They only exist in WAL if it is turned on.
 Fix is to prevent region opening from succeeding when the RS is shutting down.





[jira] [Updated] (HBASE-10464) Race condition during RS shutdown that could cause data loss

2014-02-04 Thread Yunfan Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunfan Zhong updated HBASE-10464:
-

Status: Patch Available  (was: Open)

 Race condition during RS shutdown that could cause data loss
 

 Key: HBASE-10464
 URL: https://issues.apache.org/jira/browse/HBASE-10464
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89-fb
Reporter: Yunfan Zhong
Priority: Critical
 Fix For: 0.89-fb

 Attachments: D1120497.diff


 Bug scenario (T* are timestamps, say T1 < T2 < ... < Tn):
 1. Master assigns a region to RS at T1
 2. RS works on opening the region during T1 to T3
 3. In the mean time of opening the region, RS starts to shut down at T2, and 
 dfs client is closed at T5.
 4. Regions owned by the RS get closed as a step of RS shutdown except that 
 the newly opened region is online during T3 to T5 and holds some mutations in 
 memory after possible last flush T4.
 5. Since master thinks RS has a clean shutdown, there is no log splitting. 
 The HLog was moved to old logs directory naturally.
 6. Mutations in memory between T4 to T5 (if T4 does not exist, T3 to T5) are 
 not flushed. They only exist in WAL if it is turned on.
 Fix is to prevent region opening from succeeding when the RS is shutting down.





[jira] [Updated] (HBASE-10464) Race condition during RS shutdown that could cause data loss

2014-02-04 Thread Yunfan Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunfan Zhong updated HBASE-10464:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Race condition during RS shutdown that could cause data loss
 

 Key: HBASE-10464
 URL: https://issues.apache.org/jira/browse/HBASE-10464
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89-fb
Reporter: Yunfan Zhong
Priority: Critical
 Fix For: 0.89-fb

 Attachments: D1120497.diff


 Bug scenario (T* are timestamps, say T1 < T2 < ... < Tn):
 1. Master assigns a region to RS at T1
 2. RS works on opening the region during T1 to T3
 3. In the mean time of opening the region, RS starts to shut down at T2, and 
 dfs client is closed at T5.
 4. Regions owned by the RS get closed as a step of RS shutdown except that 
 the newly opened region is online during T3 to T5 and holds some mutations in 
 memory after possible last flush T4.
 5. Since master thinks RS has a clean shutdown, there is no log splitting. 
 The HLog was moved to old logs directory naturally.
 6. Mutations in memory between T4 to T5 (if T4 does not exist, T3 to T5) are 
 not flushed. They only exist in WAL if it is turned on.
 Fix is to prevent region opening from succeeding when the RS is shutting down.





[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891277#comment-13891277
 ] 

Hudson commented on HBASE-10457:


SUCCESS: Integrated in HBase-0.94 #1272 (See 
[https://builds.apache.org/job/HBase-0.94/1272/])
HBASE-10457 Print corrupted file information in SnapshotInfo tool without -file 
option (Bharath Vissapragada) (mbertozzi: rev 1564429)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only 
 if the user provides the -file option. This might sometimes mislead the user. 
 This patch prints the corrupt-file information even without the -file option.





[jira] [Commented] (HBASE-10389) Add namespace help info in table related shell commands

2014-02-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891279#comment-13891279
 ] 

Hadoop QA commented on HBASE-10389:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12626937/HBASE-10389-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12626937

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8594//console

This message is automatically generated.

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.0, 0.96.1
Reporter: Jerry He
Assignee: Jerry He
 Attachments: HBASE-10389-trunk.patch


 Currently, in the help info of table-related shell commands, we don't mention 
 or show the namespace as part of the table name.
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into 

[jira] [Moved] (HBASE-10465) TestZKPermissionsWatcher.testPermissionsWatcher fails sometimes

2014-02-04 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang moved HDFS-5883 to HBASE-10465:
---

Key: HBASE-10465  (was: HDFS-5883)
Project: HBase  (was: Hadoop HDFS)

 TestZKPermissionsWatcher.testPermissionsWatcher fails sometimes
 ---

 Key: HBASE-10465
 URL: https://issues.apache.org/jira/browse/HBASE-10465
 Project: HBase
  Issue Type: Test
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Trivial

 It looks like sleeping 100 ms is not enough for the permission change to 
 propagate to other watchers. Will increase the sleeping time a little.
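A sturdier alternative to a fixed sleep in such a test is to poll the condition until a deadline. This helper is an assumption for illustration only; the actual patch simply lengthens the sleep.

```java
import java.util.function.BooleanSupplier;

// Hypothetical test helper: wait until a condition holds or a timeout expires,
// instead of sleeping a fixed 100 ms and hoping propagation finished.
class WaitFor {
  static boolean waitFor(BooleanSupplier condition, long timeoutMs, long intervalMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      try {
        Thread.sleep(intervalMs);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();   // give up, but keep the flag set
        return false;
      }
    }
    return condition.getAsBoolean();          // one last check at the deadline
  }
}
```

In the test this would replace the sleep with something like `waitFor(() -> watcher.sawPermissionChange(), 5000, 50)`, which finishes as soon as the change propagates and only hits the full timeout on failure.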





[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891283#comment-13891283
 ] 

Hudson commented on HBASE-10457:


SUCCESS: Integrated in hbase-0.96 #280 (See 
[https://builds.apache.org/job/hbase-0.96/280/])
HBASE-10457 Print corrupted file information in SnapshotInfo tool without -file 
option (Bharath Vissapragada) (mbertozzi: rev 1564428)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently the SnapshotInfo tool prints the corrupted snapshot information only 
 if the user provides the -file option. This might sometimes mislead the user. 
 This patch prints the corrupt-file information even without the -file option.





[jira] [Created] (HBASE-10466) Wrong calculation of total memstore size in HRegion which could cause data loss

2014-02-04 Thread Yunfan Zhong (JIRA)
Yunfan Zhong created HBASE-10466:


 Summary: Wrong calculation of total memstore size in HRegion which 
could cause data loss
 Key: HBASE-10466
 URL: https://issues.apache.org/jira/browse/HBASE-10466
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.89-fb
Reporter: Yunfan Zhong
Priority: Critical
 Fix For: 0.89-fb


When there are failed flushes, the data to be flushed is kept in each 
MemStore's snapshot, and the next flush attempt will continue on the snapshot 
first. However, the counter of total memstore size in HRegion is always reduced 
by the sum of the current memstore sizes after the flush succeeds. This 
calculation is definitely wrong if the previous flush failed.
When the server is shutting down, there are two flushes. During the missing-KV 
issue period, the first flush successfully saved the data in the snapshot, but 
the size counter was reduced to 0 or even less. This prevented the second flush, 
since HRegion.internalFlushcache() returns directly when the total memstore size 
is not greater than 0. As a result, data in the memstores was lost.
This could cause data loss up to the size limit of each memstore. For example, 
a region had 516.3M of data in its memstore (the size limit is 516M) because 
flushes had been failing for more than one hour. After the region was closed, 
these KVs were missing from the HFiles but existed in the HLog.
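The accounting fix can be illustrated with a self-contained sketch. The field and method names here are hypothetical, not the HRegion internals: the point is that after a flush, the region-wide counter must be reduced by the bytes the flush actually wrote (the retried snapshot), so a previously failed flush cannot drive the counter to zero while data still sits in memory.

```java
// Hypothetical model of the counter; not the actual HRegion code.
class MemstoreAccounting {
  long totalMemstoreSize;   // region-wide counter kept by HRegion
  long snapshotSize;        // bytes parked by an earlier failed flush
  long activeSize;          // bytes in the active memstore

  /** Retry of the failed flush: writes out the snapshot only. */
  void flushSnapshot() {
    long flushed = snapshotSize;
    snapshotSize = 0;
    totalMemstoreSize -= flushed;   // fix: deduct exactly what was flushed
  }

  /** internalFlushcache()-style guard: a flush runs only if the counter is positive. */
  boolean shouldFlush() {
    return totalMemstoreSize > 0;
  }
}
```

In the bug, the deduction used the current memstore sizes instead of the flushed snapshot size, so after the first shutdown flush the counter hit 0 (or went negative) and the second flush was skipped, losing the still-unflushed active data.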





[jira] [Updated] (HBASE-10465) TestZKPermissionsWatcher.testPermissionsWatcher fails sometimes

2014-02-04 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10465:


Attachment: hbase-10465.patch

 TestZKPermissionsWatcher.testPermissionsWatcher fails sometimes
 ---

 Key: HBASE-10465
 URL: https://issues.apache.org/jira/browse/HBASE-10465
 Project: HBase
  Issue Type: Test
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Trivial
 Attachments: hbase-10465.patch


 It looks like sleeping 100 ms is not enough for the permission change to 
 propagate to other watchers. Will increase the sleeping time a little.





[jira] [Updated] (HBASE-10465) TestZKPermissionsWatcher.testPermissionsWatcher fails sometimes

2014-02-04 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10465:


Status: Patch Available  (was: Open)

 TestZKPermissionsWatcher.testPermissionsWatcher fails sometimes
 ---

 Key: HBASE-10465
 URL: https://issues.apache.org/jira/browse/HBASE-10465
 Project: HBase
  Issue Type: Test
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Trivial
 Attachments: hbase-10465.patch


 It looks like sleeping 100 ms is not enough for the permission change to 
 propagate to other watchers. Will increase the sleeping time a little.





[jira] [Commented] (HBASE-10389) Add namespace help info in table related shell commands

2014-02-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891313#comment-13891313
 ] 

Jonathan Hsieh commented on HBASE-10389:


In many spots: are we allowed to use a regex on the namespace part of the table 
name as well, or only on the table part?

{code}
diff --git a/hbase-shell/src/main/ruby/shell/commands/disable_all.rb 
b/hbase-shell/src/main/ruby/shell/commands/disable_all.rb
index 0e7c30e..1793a42 100644
--- a/hbase-shell/src/main/ruby/shell/commands/disable_all.rb
+++ b/hbase-shell/src/main/ruby/shell/commands/disable_all.rb
@@ -25,6 +25,8 @@ module Shell
 Disable all of tables matching the given regex:
 
 hbase> disable_all 't.*'
+hbase> disable_all 'ns:t.*'
+hbase> disable_all 'ns:.*'
 EOF
{code}

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.0, 0.96.1
Reporter: Jerry He
Assignee: Jerry He
 Attachments: HBASE-10389-trunk.patch


 Currently, in the help info of table-related shell commands, we don't mention 
 or show the namespace as part of the table name.
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
 CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
 You can also keep around a reference to the created table:
   hbase> t1 = create 't1', 'f1'
 Which gives you a reference to the table named 't1', on which you can then
 call methods.
 {code}
 We should document the usage of namespace in these commands.
 For example:
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 #namespace=default and table qualifier=bar
 create 'bar', 'fam'





[jira] [Commented] (HBASE-10452) Potential bugs in exception handlers

2014-02-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891326#comment-13891326
 ] 

Hadoop QA commented on HBASE-10452:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12626957/HBase-10452-trunk-v2.patch
  against trunk revision .
  ATTACHMENT ID: 12626957

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8595//console

This message is automatically generated.

 Potential bugs in exception handlers
 

 Key: HBASE-10452
 URL: https://issues.apache.org/jira/browse/HBASE-10452
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver, util
Affects Versions: 0.96.1
Reporter: Ding Yuan
 Attachments: HBase-10452-trunk-v2.patch, HBase-10452-trunk.patch


 Hi HBase developers,
 We are a group of researchers working on software reliability. Recently we did 
 a study and found that the majority of the most severe failures in HBase are 
 caused by bugs in exception-handling logic -- it is hard to anticipate all the 
 possible real-world error scenarios. Therefore we built a simple checking 
 tool that automatically detects some bug patterns that have caused very 
 severe real-world failures. I am reporting some of the results here. Any 
 feedback is much appreciated!
 Ding
 =
 Case 1:
   Line: 134, File: 
 org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java
 {noformat}
  protected void releaseTableLock() {
    if (this.tableLock != null) {
      try {
        this.tableLock.release();
      } catch (IOException ex) {
        LOG.warn("Could not release the table lock", ex);
        // TODO: if we get here and do not abort the RS, this lock will never
        // be released
      }
    }
  }
 {noformat}
 The lock is not released if the exception occurs, causing potential deadlock 
 or starvation.
 Similar code pattern can be found at:
   Line: 135, File: org/apache/hadoop/hbase/regionserver/SplitRequest.java
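 One way to avoid silently leaking the lock in this pattern is to retry the 
 release a bounded number of times and escalate (e.g. abort the region server) 
 if it still fails. The sketch below is illustrative only -- {{TableLock}} and 
 {{releaseWithRetry}} are stand-ins, not the actual HBase types -- and shows 
 the shape of such a fix, not a committed one:
 {code}
import java.io.IOException;

public class LockReleaseSketch {
  // Stand-in for the real table lock; only the release path matters here.
  interface TableLock {
    void release() throws IOException;
  }

  // Returns true if the lock was released within maxAttempts tries.
  static boolean releaseWithRetry(TableLock lock, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        lock.release();
        return true;
      } catch (IOException ex) {
        System.err.println("release attempt " + attempt + " failed: " + ex.getMessage());
      }
    }
    // Caller must escalate here (e.g. abort the RS) so the lock is not
    // leaked forever, which is exactly the hazard the TODO describes.
    return false;
  }

  public static void main(String[] args) {
    // A lock whose release fails once with a transient error, then succeeds.
    TableLock flaky = new TableLock() {
      int calls = 0;
      public void release() throws IOException {
        if (calls++ == 0) throw new IOException("transient");
      }
    };
    System.out.println(releaseWithRetry(flaky, 3) ? "released" : "leaked");
  }
}
 {code}
 A bounded retry alone does not fix the bug; the point is that the failure 
 must be surfaced to a caller that can escalate, rather than swallowed in the 
 handler.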
 ==
 =
 Case 2:
   

[jira] [Commented] (HBASE-10457) Print corrupted file information in SnapshotInfo tool without -file option

2014-02-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891325#comment-13891325
 ] 

Hudson commented on HBASE-10457:


FAILURE: Integrated in HBase-0.94-JDK7 #38 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/38/])
HBASE-10457 Print corrupted file information in SnapshotInfo tool without -file 
option (Bharath Vissapragada) (mbertozzi: rev 1564429)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


 Print corrupted file information in SnapshotInfo tool without -file option
 --

 Key: HBASE-10457
 URL: https://issues.apache.org/jira/browse/HBASE-10457
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: bharath v
Assignee: bharath v
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.94.18

 Attachments: HBASE-10457-trunk-v0.patch, HBASE-10457-trunk-v1.patch


 Currently SnapshotInfo tool prints the corrupted snapshot information only if 
 the user provides -file options. This might mislead the user sometimes. This 
 patch prints the corrupt files information even without the -file option.





[jira] [Commented] (HBASE-10460) Return value of Scan#setSmall() should be void

2014-02-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891338#comment-13891338
 ] 

Jonathan Hsieh commented on HBASE-10460:


I think this should be in 0.98. Here are my two best reasons:

* This patch is a simple best-effort fix (as [~lhofhansl] mentioned), and the 
current implementation seems to go out of its way to break things. I don't see 
how this one is an API cleanup (by Java convention, setters return void); it 
seems to be a gratuitous compatibility-breaking change with an easy, low-risk 
fix.
* This method is in a class marked InterfaceAudience.Public and 
InterfaceStability.Stable. [1] If it were marked InterfaceStability.Evolving 
I'd be ok with the "won't do" resolution, but because it is Stable we should 
keep the method signature and allow this patch into 0.98.

[1] 
https://github.com/apache/hbase/blob/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L82

 Return value of Scan#setSmall() should be void
 --

 Key: HBASE-10460
 URL: https://issues.apache.org/jira/browse/HBASE-10460
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0

 Attachments: 10460.txt


 Aleksandr Shulman reported the following incompatibility between 0.96 and 
 0.98 under the '[VOTE] The 1st HBase 0.98.0 release candidate is available 
 for download' thread:
 {code}
 Exception from testScanMeta:  - java.lang.NoSuchMethodError:
 org.apache.hadoop.hbase.client.Scan.setSmall(Z)V
 {code}
 Return value of Scan#setSmall() should be void
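 The {{NoSuchMethodError}} arises because the JVM links calls by the full 
 method descriptor, which includes the return type: code compiled against 
 {{void setSmall(boolean)}} resolves {{setSmall(Z)V}} at runtime, and that 
 descriptor no longer exists once the setter returns the class itself. A 
 minimal sketch of the difference (class names are illustrative, not the real 
 Scan):
 {code}
public class DescriptorDemo {
  // 0.96-style setter: runtime descriptor setSmall(Z)V
  public static class ScanV096 {
    boolean small;
    public void setSmall(boolean s) { small = s; }
  }

  // Fluent setter: the descriptor now names the class instead of V, so
  // binaries linked against the void version fail with NoSuchMethodError.
  public static class ScanV098 {
    boolean small;
    public ScanV098 setSmall(boolean s) { small = s; return this; }
  }

  public static void main(String[] args) throws Exception {
    // Reflection makes the descriptor difference visible: same name, same
    // parameter types, different return types.
    System.out.println(ScanV096.class.getMethod("setSmall", boolean.class).getReturnType()); // prints "void"
    System.out.println(ScanV098.class.getMethod("setSmall", boolean.class).getReturnType());
  }
}
 {code}
 Source compatibility is unaffected (callers recompile fine either way); only 
 binary compatibility breaks, which is why the error surfaces as a linkage 
 failure rather than a compile error.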





[jira] [Reopened] (HBASE-10460) Return value of Scan#setSmall() should be void

2014-02-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh reopened HBASE-10460:



 Return value of Scan#setSmall() should be void
 --

 Key: HBASE-10460
 URL: https://issues.apache.org/jira/browse/HBASE-10460
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0

 Attachments: 10460.txt


 Aleksandr Shulman reported the following incompatibility between 0.96 and 
 0.98 under the '[VOTE] The 1st HBase 0.98.0 release candidate is available 
 for download' thread:
 {code}
 Exception from testScanMeta:  - java.lang.NoSuchMethodError:
 org.apache.hadoop.hbase.client.Scan.setSmall(Z)V
 {code}
 Return value of Scan#setSmall() should be void





[jira] [Updated] (HBASE-10460) Return value of Scan#setSmall() should be void

2014-02-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-10460:
---

Affects Version/s: 0.98.0
Fix Version/s: 0.98.0

 Return value of Scan#setSmall() should be void
 --

 Key: HBASE-10460
 URL: https://issues.apache.org/jira/browse/HBASE-10460
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0

 Attachments: 10460.txt


 Aleksandr Shulman reported the following incompatibility between 0.96 and 
 0.98 under the '[VOTE] The 1st HBase 0.98.0 release candidate is available 
 for download' thread:
 {code}
 Exception from testScanMeta:  - java.lang.NoSuchMethodError:
 org.apache.hadoop.hbase.client.Scan.setSmall(Z)V
 {code}
 Return value of Scan#setSmall() should be void





[jira] [Updated] (HBASE-10459) Broken link F.1. HBase Videos

2014-02-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-10459:
---

Assignee: Richard Shaw

 Broken link F.1. HBase Videos
 -

 Key: HBASE-10459
 URL: https://issues.apache.org/jira/browse/HBASE-10459
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Richard Shaw
Assignee: Richard Shaw
Priority: Trivial
  Labels: documentation
 Attachments: book_HBASE_10459.patch

   Original Estimate: 1m
  Remaining Estimate: 1m

 Broken link to the first introduction-to-HBase video [0].
 The second introduction video works, so I suspect a redirect at the other end 
 is broken, or it is being changed, in which case the second may stop working 
 as well.
 Have supplied a patch.
 [0]https://hbase.apache.org/book.html#other.info.videos





[jira] [Updated] (HBASE-10459) Broken link F.1. HBase Videos

2014-02-04 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-10459:
---

 Tags:   (was: broken link)
   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committing to trunk. Thanks for the patch Richard!

 Broken link F.1. HBase Videos
 -

 Key: HBASE-10459
 URL: https://issues.apache.org/jira/browse/HBASE-10459
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Richard Shaw
Assignee: Richard Shaw
Priority: Trivial
  Labels: documentation
 Fix For: 0.99.0

 Attachments: book_HBASE_10459.patch

   Original Estimate: 1m
  Remaining Estimate: 1m

 Broken link to the first introduction-to-HBase video [0].
 The second introduction video works, so I suspect a redirect at the other end 
 is broken, or it is being changed, in which case the second may stop working 
 as well.
 Have supplied a patch.
 [0]https://hbase.apache.org/book.html#other.info.videos





[jira] [Updated] (HBASE-10277) refactor AsyncProcess

2014-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-10277:
-

Attachment: HBASE-10277.06.patch

Addresses Nicolas' feedback from RB and one TODO. Now that AP is shared in 
HConnection, I added an option to keep the pool either in AP or in each 
individual request.

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.01.patch, HBASE-10277.02.patch, 
 HBASE-10277.03.patch, HBASE-10277.04.patch, HBASE-10277.05.patch, 
 HBASE-10277.06.patch, HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; callback for success/failure is called based on original index 
 of an action in submitted list, but with only one callback supplied to AP in 
 ctor it's not clear to which submit call the index belongs, if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call part.
 Per-submit part can more conveniently track stuff like initialActions, 
 mapping of indexes and retry information, that is currently passed around the 
 method calls.
 -I am not sure yet, but maybe sending of the original index to server in 
 ClientProtos.MultiAction can also be avoided.- Cannot be avoided because 
 the API to server doesn't have one-to-one correspondence between requests and 
 responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)
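 The separation argued for above can be sketched generically: a shared object 
 carries only the global load-tracking state, while each submit call returns 
 its own context holding per-call error and index bookkeeping, so concurrent 
 submits cannot clobber each other's {{hasError}} or index mapping. Names 
 below are illustrative, not the actual AsyncProcess API:
 {code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncProcessSketch {
  // Shared across all submit calls: global throttling state only.
  static class GlobalTracker {
    final AtomicInteger outstanding = new AtomicInteger();
  }

  // One instance per submit call: its own actions, failures, index mapping.
  static class SubmitContext {
    final List<String> actions;
    final List<Integer> failedIndexes = new ArrayList<>();
    SubmitContext(List<String> actions) { this.actions = actions; }
    boolean hasError() { return !failedIndexes.isEmpty(); }
  }

  static SubmitContext submit(GlobalTracker tracker, List<String> actions) {
    tracker.outstanding.addAndGet(actions.size());
    return new SubmitContext(actions);
  }

  public static void main(String[] args) {
    GlobalTracker tracker = new GlobalTracker();
    SubmitContext a = submit(tracker, List.of("put1", "put2"));
    SubmitContext b = submit(tracker, List.of("put3"));
    b.failedIndexes.add(0);           // a failure in b ...
    System.out.println(a.hasError()); // ... does not leak into a: false
    System.out.println(b.hasError()); // true
    System.out.println(tracker.outstanding.get()); // 3
  }
}
 {code}
 With this split, callback indexes are unambiguous because they live in the 
 context of the submit call that produced them, rather than in one field 
 shared by every outstanding call.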





[jira] [Commented] (HBASE-10460) Return value of Scan#setSmall() should be void

2014-02-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891415#comment-13891415
 ] 

Andrew Purtell commented on HBASE-10460:


Like I said above I closed this because it was predicated on an RC criteria 
that didn't exist. I have no objection to the change itself if people think it 
is a good idea. 

 Return value of Scan#setSmall() should be void
 --

 Key: HBASE-10460
 URL: https://issues.apache.org/jira/browse/HBASE-10460
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.0, 0.99.0

 Attachments: 10460.txt


 Aleksandr Shulman reported the following incompatibility between 0.96 and 
 0.98 under the '[VOTE] The 1st HBase 0.98.0 release candidate is available 
 for download' thread:
 {code}
 Exception from testScanMeta:  - java.lang.NoSuchMethodError:
 org.apache.hadoop.hbase.client.Scan.setSmall(Z)V
 {code}
 Return value of Scan#setSmall() should be void




