[jira] [Resolved] (HBASE-12438) Add -Dsurefire.rerunFailingTestsCount=2 to patch build runs so flakies get rerun

2014-11-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-12438.
---
   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed

 Add -Dsurefire.rerunFailingTestsCount=2 to patch build runs so flakies get 
 rerun
 -----------------------------------------------------------------------------

 Key: HBASE-12438
 URL: https://issues.apache.org/jira/browse/HBASE-12438
 Project: HBase
  Issue Type: Task
  Components: test
Reporter: stack
Assignee: stack
 Fix For: 2.0.0

 Attachments: 12438.txt


 Tripped over this config today:
  -Dsurefire.rerunFailingTestsCount=2
 I made a test fail, then pass, and I got this output:
 {code}
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Flakes: 1
 {code}
 Notice the 'Flakes' addition on the far right.
 Let me enable this on hadoopqa builds. Hopefully this will help make it so new 
 contribs are not frightened off by flakies, thinking their patch is the cause.
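 For local experimentation, a minimal sketch of a test that trips the rerun 
 logic (the class and counter below are made up for illustration, and this 
 assumes surefire reruns the failed test in the same JVM so the static counter 
 persists across attempts):
 {code}
 import java.util.concurrent.atomic.AtomicInteger;
 import org.junit.Assert;
 import org.junit.Test;

 // Hypothetical test: fails on the first attempt and passes on the rerun,
 // which surefire (with -Dsurefire.rerunFailingTestsCount=2) reports as a
 // "flake" rather than a failure.
 public class TestFlaky {
   private static final AtomicInteger ATTEMPTS = new AtomicInteger();

   @Test
   public void testPassesOnSecondAttempt() {
     Assert.assertTrue("flaky until the second attempt",
         ATTEMPTS.incrementAndGet() > 1);
   }
 }
 {code}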





[jira] [Created] (HBASE-12439) Procedure V2

2014-11-06 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-12439:
---

 Summary: Procedure V2
 Key: HBASE-12439
 URL: https://issues.apache.org/jira/browse/HBASE-12439
 Project: HBase
  Issue Type: New Feature
  Components: master
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Attachments: ProcedureV2.pdf

Procedure v2 (aka Notification Bus) aims to provide a unified way to build:
* multi-step procedures with rollback/rollforward ability in case of failure 
(e.g. create/delete table)
** HBASE-12070
* notifications across multiple machines (e.g. ACLs/Labels/Quotas cache updates)
** Make sure that every machine has the grant/revoke/label
** Enforce space limit quota across the namespace
** HBASE-10295 eliminate permanent replication zk node
* procedures across multiple machines (e.g. Snapshots)
* coordinated long-running procedures (e.g. compactions, splits, ...)
* Synchronous calls, with the ability to see the state/result in case of 
failure.
** HBASE-11608 sync split

Still a work in progress / initial prototype: https://reviews.apache.org/r/27703/
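
For intuition, a minimal, hypothetical sketch of the multi-step rollback idea 
(names invented here; this is not the Procedure V2 API):
{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch: execute steps in order and, on failure, roll back the
// completed ones in reverse order (e.g. the create/delete table case above).
interface Step {
  void execute() throws Exception;  // roll forward one step
  void rollback();                  // undo this step
}

class MultiStepProcedure {
  private final List<Step> steps;

  MultiStepProcedure(List<Step> steps) {
    this.steps = steps;
  }

  void run() throws Exception {
    Deque<Step> done = new ArrayDeque<>();
    for (Step step : steps) {
      try {
        step.execute();
        done.push(step);
      } catch (Exception e) {
        while (!done.isEmpty()) {
          done.pop().rollback();  // roll back in reverse order
        }
        throw e;  // surface the original failure
      }
    }
  }
}
{code}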





Re: KeyValue#createFirstOnRow() removed in hbase branch-1

2014-11-06 Thread Nick Dimiduk
+dev@hbase

On Wednesday, November 5, 2014, Anoop John anoop.hb...@gmail.com wrote:

 I see Phoenix using this API in many places.  So how about adding it back in
 branch-1 with deprecation and removing it from trunk(?)  Some other users
 might have used it as well. Deprecating in one major version and removing in
 the next major version seems better.

 -Anoop-

 On Thu, Nov 6, 2014 at 12:06 AM, Ted Yu yuzhih...@gmail.com wrote:

  Hi,
  In hbase branch-1, KeyValue.createFirstOnRow() doesn't
  exist. KeyValueUtil.createFirstOnRow() replaces that method.
 
  I want to get an opinion on how Phoenix should deal with such an API
  compatibility issue now that the hbase 1.0 release gets closer.
 
  Cheers
 



[jira] [Reopened] (HBASE-11788) hbase is not deleting the cell when a Put with a KeyValue, KeyValue.Type.Delete is submitted

2014-11-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-11788:


 hbase is not deleting the cell when a Put with a KeyValue, 
 KeyValue.Type.Delete is submitted
 

 Key: HBASE-11788
 URL: https://issues.apache.org/jira/browse/HBASE-11788
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0, 0.96.1.1, 0.98.5, 2.0.0
 Environment: Cloudera CDH 5.1.x
Reporter: Cristian Armaselu
Assignee: Srikanth Srungarapu
 Fix For: 0.99.0, 2.0.0, 0.98.6

 Attachments: HBASE-11788-master.patch, HBASE-11788-master_v2.patch, 
 TestPutAfterDeleteColumn.java, TestPutWithDelete.java


 Code executed:
 {code}
 @Test
 public void testHbasePutDeleteCell() throws Exception {
   TableName tableName = TableName.valueOf("my_test");
   Configuration configuration = HBaseConfiguration.create();
   HTableInterface table = new HTable(configuration, tableName);
   final String rowKey = "12345";
   final byte[] familly = Bytes.toBytes("default");
   // put one row
   Put put = new Put(Bytes.toBytes(rowKey));
   put.add(familly, Bytes.toBytes("A"), Bytes.toBytes("a"));
   put.add(familly, Bytes.toBytes("B"), Bytes.toBytes("b"));
   put.add(familly, Bytes.toBytes("C"), Bytes.toBytes("c"));
   table.put(put);
   // get row back and assert the values
   Get get = new Get(Bytes.toBytes(rowKey));
   Result result = table.get(get);
   Assert.isTrue(Bytes.toString(result.getValue(familly, Bytes.toBytes("A"))).equals("a"),
       "Column A value should be a");
   Assert.isTrue(Bytes.toString(result.getValue(familly, Bytes.toBytes("B"))).equals("b"),
       "Column B value should be b");
   Assert.isTrue(Bytes.toString(result.getValue(familly, Bytes.toBytes("C"))).equals("c"),
       "Column C value should be c");
   // put the same row again with the C column deleted
   put = new Put(Bytes.toBytes(rowKey));
   put.add(familly, Bytes.toBytes("A"), Bytes.toBytes("a"));
   put.add(familly, Bytes.toBytes("B"), Bytes.toBytes("b"));
   put.add(new KeyValue(Bytes.toBytes(rowKey), familly, Bytes.toBytes("C"),
       HConstants.LATEST_TIMESTAMP, KeyValue.Type.DeleteColumn));
   table.put(put);
   // get row back and assert the values
   get = new Get(Bytes.toBytes(rowKey));
   result = table.get(get);
   Assert.isTrue(Bytes.toString(result.getValue(familly, Bytes.toBytes("A"))).equals("a"),
       "Column A value should be a");
   Assert.isTrue(Bytes.toString(result.getValue(familly, Bytes.toBytes("B"))).equals("b"),
       "Column B value should be b");
   Assert.isTrue(result.getValue(familly, Bytes.toBytes("C")) == null,
       "Column C should not exist");
 }
 {code}
 This assertion fails; the cell is not deleted but rather its value is empty:
 {code}
 hbase(main):029:0> scan 'my_test'
 ROW      COLUMN+CELL
  12345   column=default:A, timestamp=1408473082290, value=a
  12345   column=default:B, timestamp=1408473082290, value=b
  12345   column=default:C, timestamp=1408473082290, value=
 {code}
 This behavior is different from the previous Cloudera 4.8.x version and is 
 currently corrupting all Hive queries involving "is null" or "is not null" 
 operators on the columns mapped to hbase.
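 For comparison, a sketch of deleting the column through the client Delete API 
 (same table and family assumed; Delete#deleteColumns is the 0.98-era method):
 {code}
 // Sketch: delete column C of the same row via the Delete API, for comparison
 // with the Put carrying a delete-type KeyValue above.
 Delete delete = new Delete(Bytes.toBytes("12345"));
 delete.deleteColumns(Bytes.toBytes("default"), Bytes.toBytes("C"));
 table.delete(delete);
 {code}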





[jira] [Resolved] (HBASE-11788) hbase is not deleting the cell when a Put with a KeyValue, KeyValue.Type.Delete is submitted

2014-11-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-11788.

Resolution: Fixed

Actually, I'm going to re-resolve. This went out in a release. We will need a 
new issue to address the problem raised above, with the fix version set to the 
next unreleased version.





[jira] [Created] (HBASE-12440) Region may remain offline on clean startup under certain race condition

2014-11-06 Thread Virag Kothari (JIRA)
Virag Kothari created HBASE-12440:
-

 Summary: Region may remain offline on clean startup under certain 
race condition
 Key: HBASE-12440
 URL: https://issues.apache.org/jira/browse/HBASE-12440
 Project: HBase
  Issue Type: Bug
Reporter: Virag Kothari
Assignee: Virag Kothari
 Fix For: 0.98.8, 0.99.1


Saw this in prod some time back with zk assignment.
On clean startup, the master was doing a bulk assign when one of the region 
servers died. The bulk assigner then tried to assign the region individually 
using AssignCallable. The AssignCallable does a forceStateToOffline() and skips 
assigning, as it wants the SSH (ServerShutdownHandler) to do the assignment:
{code}
2014-10-16 16:05:23,593 DEBUG master.AssignmentManager [AM.-pool1-t1] : Offline 
sieve_main:inlinks,com.cbslocal.seattle/photo-galleries/category/consumer///:http\x09com.cbslocal.seattle/photo-galleries/category/tailgate-fan///:http,1413464068567.1f1620174d2542fe7d5b034f3311c3a8.,
 no need to unassign since it's on a dead server: 
gsbl872n06.blue.ygrid.yahoo.com,50511,1413475494016
2014-10-16 16:05:23,593  INFO master.RegionStates [AM.-pool1-t1] : Transition 
{1f1620174d2542fe7d5b034f3311c3a8 state=PENDING_OPEN, ts=1413475519482, 
server=gsbl872n06.blue.ygrid.yahoo.com,50511,1413475494016} to 
{1f1620174d2542fe7d5b034f3311c3a8 state=OFFLINE, ts=1413475523593, 
server=gsbl872n06.blue.ygrid.yahoo.com,50511,1413475494016}
2014-10-16 16:05:23,598  INFO master.AssignmentManager [AM.-pool1-t1] : Skip 
assigning 
sieve_main:inlinks,com.cbslocal.seattle/photo-galleries/category/consumer///:http\x09com.cbslocal.seattle/photo-galleries/category/tailgate-fan///:http,1413464068567.1f1620174d2542fe7d5b034f3311c3a8.,
 it is on a dead but not processed yet server: 
gsbl872n06.blue.ygrid.yahoo.com,50511,1413475494016
{code}
But the SSH won't assign it, as the region is offline but not in transition:
{code}
2014-10-16 16:05:24,606  INFO handler.ServerShutdownHandler 
[MASTER_SERVER_OPERATIONS-hbbl874n38:50510-0] : Reassigning 0 region(s) that 
gsbl872n06.blue.ygrid.yahoo.com,50511,1413475494016 was carrying (and 0 
regions(s) that were opening on this server)
2014-10-16 16:05:24,606 DEBUG master.DeadServer 
[MASTER_SERVER_OPERATIONS-hbbl874n38:50510-0] : Finished processing 
gsbl872n06.blue.ygrid.yahoo.com,50511,1413475494016
{code}

In zk-less assignment, both the bulk assigner invoking AssignCallable and the 
SSH may try to assign the region. But as they go through a lock, only one will 
succeed, so this doesn't seem to be an issue there.


 






[jira] [Created] (HBASE-12441) Export and CopyTable need to be able to keep tags/labels in cells

2014-11-06 Thread Jerry He (JIRA)
Jerry He created HBASE-12441:


 Summary: Export and CopyTable need to be able to keep tags/labels 
in cells
 Key: HBASE-12441
 URL: https://issues.apache.org/jira/browse/HBASE-12441
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, security
Reporter: Jerry He


Export and CopyTable (and possibly other MR tools) currently do not carry over 
tags/labels in cells.

These tools should be able to keep tags/labels in cells when they back up the 
table cells.
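
A hedged sketch of one possible direction (not necessarily the eventual fix): 
cell tags only survive the client round-trip if the RPC codec serializes them, 
so a tool might opt in along these lines (the config key and codec class are 
taken from the HBase codebase; their applicability here is an assumption):
{code}
// Sketch: ask the client to use a codec that serializes tags with cells.
// Assumption: this is one way an MR tool could preserve tags; whether the
// tools should wire this up is exactly what this issue is about.
org.apache.hadoop.conf.Configuration conf =
    org.apache.hadoop.hbase.HBaseConfiguration.create();
conf.set("hbase.client.rpc.codec",
    "org.apache.hadoop.hbase.codec.KeyValueCodecWithTags");
{code}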





Re: KeyValue#createFirstOnRow() removed in hbase branch-1

2014-11-06 Thread ramkrishna vasudevan
Ok. +1 on bringing it back with a @deprecated tag in one major version and
removing it in the next major version.

Regards
Ram

On Fri, Nov 7, 2014 at 3:54 AM, Nick Dimiduk ndimi...@gmail.com wrote:

 +dev@hbase

 On Wednesday, November 5, 2014, Anoop John anoop.hb...@gmail.com wrote:

  I see Phoenix using this API in many places.  So how about adding it back
  in branch-1 with deprecation and removing it from trunk(?)  Some other
  users might have used it as well. Deprecating in one major version and
  removing in the next major version seems better.
 
  -Anoop-
 
  On Thu, Nov 6, 2014 at 12:06 AM, Ted Yu yuzhih...@gmail.com wrote:
 
   Hi,
   In hbase branch-1, KeyValue.createFirstOnRow() doesn't
   exist. KeyValueUtil.createFirstOnRow() replaces that method.
  
   I want to get an opinion on how Phoenix should deal with such an API
   compatibility issue now that the hbase 1.0 release gets closer.
  
   Cheers
  
 



[jira] [Created] (HBASE-12442) Bring KeyValue#createFirstOnRow() back to branch-1 as deprecated methods

2014-11-06 Thread Ted Yu (JIRA)
Ted Yu created HBASE-12442:
--

 Summary: Bring KeyValue#createFirstOnRow() back to branch-1 as 
deprecated methods
 Key: HBASE-12442
 URL: https://issues.apache.org/jira/browse/HBASE-12442
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu


KeyValue.createFirstOnRow() methods are used by downstream projects such as 
Phoenix.
They haven't been deprecated in the 0.98 branch.

This JIRA brings KeyValue.createFirstOnRow() back to branch-1 as deprecated 
methods. They have been removed in the master branch (hbase 2.0).
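
A minimal sketch of what the restored shim could look like (one assumed 
overload shown for illustration; the actual patch restores the full set):
{code}
// Hypothetical branch-1 shim in KeyValue: deprecated and delegating to the
// replacement in KeyValueUtil.
/**
 * @deprecated Use {@link KeyValueUtil#createFirstOnRow(byte[])} instead.
 */
@Deprecated
public static KeyValue createFirstOnRow(final byte[] row) {
  return KeyValueUtil.createFirstOnRow(row);
}
{code}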





Re: KeyValue#createFirstOnRow() removed in hbase branch-1

2014-11-06 Thread Ted Yu
I logged HBASE-12442 and attached an initial patch there.

Thanks

On Thu, Nov 6, 2014 at 7:57 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Ok. +1 on bringing it back with a @deprecated tag in one major version and
 removing it in the next major version.

 Regards
 Ram

 On Fri, Nov 7, 2014 at 3:54 AM, Nick Dimiduk ndimi...@gmail.com wrote:

  +dev@hbase
 
  On Wednesday, November 5, 2014, Anoop John anoop.hb...@gmail.com
 wrote:
 
   I see Phoenix using this API in many places.  So how about adding it back
   in branch-1 with deprecation and removing it from trunk(?)  Some other
   users might have used it as well. Deprecating in one major version and
   removing in the next major version seems better.
  
   -Anoop-
  
   On Thu, Nov 6, 2014 at 12:06 AM, Ted Yu yuzhih...@gmail.com wrote:
  
Hi,
In hbase branch-1, KeyValue.createFirstOnRow() doesn't
exist. KeyValueUtil.createFirstOnRow() replaces that method.
   
I want to get an opinion on how Phoenix should deal with such an API
compatibility issue now that the hbase 1.0 release gets closer.
   
Cheers
   
  
 



[jira] [Created] (HBASE-12443) After increasing the TTL value of a Hbase Table , table gets inaccessible. Scan table not working.

2014-11-06 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created HBASE-12443:
-

 Summary: After increasing the TTL value of a Hbase Table , table 
gets inaccessible. Scan table not working.
 Key: HBASE-12443
 URL: https://issues.apache.org/jira/browse/HBASE-12443
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 2.0.0
Reporter: Prabhu Joseph
Priority: Blocker








[jira] [Resolved] (HBASE-12443) After increasing the TTL value of a Hbase Table , table gets inaccessible. Scan table not working.

2014-11-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-12443.
---
Resolution: Duplicate

Dup of HBASE-11419.
Please do not file the same issue again. In HBASE-11419 you mention 0.94.6 being 
your version. Can you try with a later version of 0.94?
You can upgrade from 0.94.6 directly to 0.94.24 with a rolling restart.

 After increasing the TTL value of a Hbase Table , table gets inaccessible. 
 Scan table not working.
 --

 Key: HBASE-12443
 URL: https://issues.apache.org/jira/browse/HBASE-12443
 Project: HBase
  Issue Type: Bug
  Components: HFile
Reporter: Prabhu Joseph
Priority: Blocker

 After increasing the TTL value of an HBase table, the table gets inaccessible; 
 scanning the table does not work.
 Scan in hbase shell throws:
 java.lang.IllegalStateException: Block index not loaded
 at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
 at org.apache.hadoop.hbase.io.hfile.HFileReaderV1.blockContainingKey(HFileReaderV1.java:181)
 at org.apache.hadoop.hbase.io.hfile.HFileReaderV1$AbstractScannerV1.seekTo(HFileReaderV1.java:426)
 at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
 at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
 at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:131)
 at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2015)
 at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3706)
 at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1761)
 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1753)
 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1730)
 at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2409)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)

 STEPS to reproduce:
 create 'debugger', {NAME => 'd', TTL => 15552000}
 put 'debugger', 'jdb', 'd:desc', 'Java debugger', 1399699792000
 disable 'debugger'
 alter 'debugger', {NAME => 'd', TTL => 6912}
 enable 'debugger'
 scan 'debugger'

 Reason for the issue:
 When inserting already expired data into the debugger table, hbase creates an 
 hfile with an empty data block and index block. On scanning the table, 
 StoreFile.Reader checks the TimeRangeTracker's maximum timestamp against the 
 TTL cutoff and so skips the empty file.
 But when the TTL is changed, the maximum timestamp passes that check, so 
 StoreFile.Reader tries to read the index block from the HFile, leading to 
 java.lang.IllegalStateException: Block index not loaded.

 SOLUTION:
 In StoreFile.java:
 boolean passesTimerangeFilter(Scan scan, long oldestUnexpiredTS) {
   if (timeRangeTracker == null) {
     return true;
   } else {
     return timeRangeTracker.includesTimeRange(scan.getTimeRange()) &&
         timeRangeTracker.getMaximumTimestamp() >= oldestUnexpiredTS;
   }
 }
 In the above method, by checking via the FixedFileTrailer block whether there 
 are entries in the hfile, we can skip scanning the empty hfile.
 // changed code will solve the issue
 boolean passesTimerangeFilter(Scan scan, long oldestUnexpiredTS) {
   if (timeRangeTracker == null) {
     return true;
   } else {
     return timeRangeTracker.includesTimeRange(scan.getTimeRange()) &&
         timeRangeTracker.getMaximumTimestamp() >= oldestUnexpiredTS &&
         reader.getEntries() > 0;
   }
 }





Re: Jenkins log

2014-11-06 Thread Stack
Removed -Dsurefire.secondPartThreadCount=2 from the branch-1 builds. Now we
run a thread count of 5, the default. The test run completes 3 minutes faster
(1hr 13min rather than 1hr 16min).  Will let it soak a while.

On precommits, set -Dsurefire.rerunFailingTestsCount=2 so we retry any test
that fails (if it then completes, it is reported as flakey).  Let's see how
this does for a while.

St.Ack

On Tue, Oct 28, 2014 at 2:31 PM, Stack st...@duboce.net wrote:

 Dima Spivak has been digging in on our build fails (with help from Andrew
 Bayer of Apache on the backend).

 I changed our build machines to be the below:

 (ubuntu||Hadoop) && !jenkins-cloud-4GB && !H11

 ... which is a pool of 16 nodes.

 From 'ubuntu' (9 nodes) or 'ubuntu||Hadoop' (19 possible nodes).

 Dima, watching branch-1, has figured H11 is sick.  And with Andrew's help,
 they've figured that the jenkins-cloud-4GB stuff ain't hefty enough to do our
 builds.

 St.Ack



 On Sun, Oct 26, 2014 at 8:52 PM, Stack st...@duboce.net wrote:

 I put the config

  -Dmaven.test.redirectTestOutputToFile=true

 back on the branch-1 builds and set log level back to WARN.

 It looks like the -D param had no effect. Something to do w/ log amount.
 See HBASE-12285.

 St.Ack

 On Sun, Oct 26, 2014 at 4:21 PM, Stack st...@duboce.net wrote:

 I reenabled logging at DEBUG level on branch-1 builds up on jenkins
 (HBASE-12285).
 St.Ack

 On Sat, Oct 25, 2014 at 7:58 PM, Stack st...@duboce.net wrote:

 Let's keep a running record of changes made up on jenkins here on this
 thread.

 I just changed branch-1 removing:

   -Dmaven.test.redirectTestOutputToFile=true

 Internal tests have been passing. They didn't have the above in place.
  branch-1 has been passing since we changed logging to WARN level only.
 I'll let the above config sit awhile and then put branch-1 back to DEBUG
 level.

 St.Ack







Not able to create a table in a single node set up with trunk

2014-11-06 Thread ramkrishna vasudevan
Hi

In the current trunk version, where the master itself acts as a region
server, we are not able to assign any user tables in a single node
setup.

Is it by design now that the master only hosts SYSTEM tables?

So would a single RS node setup mean that the master and RS have to be on
different nodes?

Regards
Ram


[jira] [Created] (HBASE-12444) Total number of requests overflow because it's int

2014-11-06 Thread zhaoyunjiong (JIRA)
zhaoyunjiong created HBASE-12444:


 Summary: Total number of requests overflow because it's int
 Key: HBASE-12444
 URL: https://issues.apache.org/jira/browse/HBASE-12444
 Project: HBase
  Issue Type: Bug
  Components: hbck, master, regionserver
Reporter: zhaoyunjiong
Priority: Minor


When running hbck, I noticed the "Number of requests" value was wrong:
Average load: 466.41237113402065
Number of requests: -1835941345
Number of regions: 45242
Number of regions in transition: 0

The root cause is that it uses an int, which has clearly overflowed.
I'll upload a patch later.
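
For illustration, a self-contained demo of the wrap-around (plain Java, not 
the actual HBase counter code):
{code}
// int accumulation wraps negative once the total passes Integer.MAX_VALUE;
// a long holds the same total without overflow.
public class OverflowDemo {
  public static void main(String[] args) {
    int intTotal = Integer.MAX_VALUE;
    intTotal += 1;                   // wraps to -2147483648
    long longTotal = Integer.MAX_VALUE;
    longTotal += 1L;                 // 2147483648, as expected
    System.out.println(intTotal + " vs " + longTotal);
  }
}
{code}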


