[jira] [Created] (HBASE-14926) Hung ThriftServer; no timeout on read from client; if client crashes, worker thread gets stuck reading

2015-12-03 Thread stack (JIRA)
stack created HBASE-14926:
-

 Summary: Hung ThriftServer; no timeout on read from client; if 
client crashes, worker thread gets stuck reading
 Key: HBASE-14926
 URL: https://issues.apache.org/jira/browse/HBASE-14926
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: stack


Thrift server is hung. All worker threads are doing this:

{code}
"thrift-worker-0" daemon prio=10 tid=0x7f0bb95c2800 nid=0xf6a7 runnable [0x7f0b956e]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
- locked <0x00066d859490> (a java.io.BufferedInputStream)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TCompactProtocol.readByte(TCompactProtocol.java:601)
at org.apache.thrift.protocol.TCompactProtocol.readMessageBegin(TCompactProtocol.java:470)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:289)
at org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

They never recover.

I don't have client side logs.

We've been here before: HBASE-4967 "connected client thrift sockets should have 
a server side read timeout". But that patch only got applied to the fb branch 
(and Thrift has changed since then).
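
A JDK-only sketch of the missing mechanism. Note this is not the actual fix: a real patch would need to wire a client read timeout through Thrift's server transport (e.g. a clientTimeout on the server socket); the class below only demonstrates Socket.setSoTimeout, which such a timeout ultimately rests on, and the class name is invented.

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class WorkerReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             // Simulates a client that connects and then goes silent (e.g. crashes)
             Socket silentClient = new Socket("localhost", server.getLocalPort());
             Socket worker = server.accept()) {
            worker.setSoTimeout(1000); // without this, the read() below blocks forever
            InputStream in = worker.getInputStream();
            try {
                in.read();
                System.out.println("read returned");
            } catch (SocketTimeoutException e) {
                System.out.println("worker read timed out; thread can clean up");
            }
        }
    }
}
```

With the timeout set, the worker thread gets a SocketTimeoutException instead of sitting in socketRead0 forever, so it can close the connection and return to the pool.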





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


unsubscribe me please from this mailing list

2015-12-03 Thread Garg, Rinku
unsubscribe me please from this mailing list
Thanks & Regards

Rinku Garg



unsubscribe me please from this mailing list

2015-12-03 Thread Personal
Please unsubscribe me 

Thanks
Karthik




[jira] [Created] (HBASE-14923) VerifyReplication should not mask the exception during result comparison

2015-12-03 Thread Vishal Khandelwal (JIRA)
Vishal Khandelwal created HBASE-14923:
-

 Summary: VerifyReplication should not mask the exception during 
result comparison 
 Key: HBASE-14923
 URL: https://issues.apache.org/jira/browse/HBASE-14923
 Project: HBase
  Issue Type: Bug
  Components: tooling
Affects Versions: 0.98.16, 2.0.0
Reporter: Vishal Khandelwal
Assignee: Vishal Khandelwal
Priority: Minor
 Fix For: 2.0.0, 0.98.16


hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java

Line 154:
{code}
} catch (Exception e) {
  logFailRowAndIncreaseCounter(context, Counters.CONTENT_DIFFERENT_ROWS, value);
}
{code}

A LOG.error just needs to be added here to record more information about the failure.
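
A minimal, self-contained sketch of the suggested change, using java.util.logging as a stand-in for the commons-logging LOG used in HBase; the throwing statement, counter field, and class name below are illustrative only:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class VerifyReplicationLogDemo {
    private static final Logger LOG = Logger.getLogger("VerifyReplication");
    static int badRows = 0; // stand-in for Counters.CONTENT_DIFFERENT_ROWS

    // Sketch of the suggested change: log the comparison failure before
    // counting it, instead of silently swallowing the exception.
    static void compareRow(String rowKey) {
        try {
            throw new IllegalStateException("results differ"); // stand-in for the comparison
        } catch (Exception e) {
            // The missing LOG.error: now the cause of the mismatch is preserved
            LOG.log(Level.SEVERE, "Row comparison failed for row " + rowKey, e);
            badRows++; // stand-in for logFailRowAndIncreaseCounter(...)
        }
    }

    public static void main(String[] args) {
        compareRow("row-1");
        System.out.println("CONTENT_DIFFERENT_ROWS=" + badRows);
    }
}
```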





[jira] [Created] (HBASE-14918) In-Memory MemStore Flush and Compaction

2015-12-03 Thread Eshcar Hillel (JIRA)
Eshcar Hillel created HBASE-14918:
-

 Summary: In-Memory MemStore Flush and Compaction
 Key: HBASE-14918
 URL: https://issues.apache.org/jira/browse/HBASE-14918
 Project: HBase
  Issue Type: Umbrella
Affects Versions: 2.0.0
Reporter: Eshcar Hillel


A memstore serves as the in-memory component of a store unit, absorbing all 
updates to the store. From time to time these updates are flushed to a file on 
disk, where they are compacted (by eliminating redundancies) and compressed 
(i.e., written in a compressed format to reduce their storage size).

We aim to speed up data access, and therefore suggest applying an in-memory 
memstore flush: that is, flushing the active in-memory segment into an 
intermediate buffer where it can still be accessed by the application. Data in 
the buffer is subject to compaction and can be stored in any format that allows 
it to take up less space in RAM. The less space the buffer consumes, the longer 
it can reside in memory before data is flushed to disk, resulting in better 
performance.
Specifically, the optimization is beneficial for workloads with medium-to-high 
key churn which incur many redundant cells, like persistent messaging. 

We suggest structuring the solution as three subtasks (one patch each):
(1) Infrastructure - refactoring of the MemStore hierarchy, introducing segment 
(StoreSegment) as first-class citizen, and decoupling memstore scanner from the 
memstore implementation;
(2) Implementation of a new memstore (CompactingMemstore) with non-optimized 
immutable segment representation, and 
(3) Memory optimization including compressed format representation and offheap 
allocations.

This Jira continues the discussion in HBASE-13408.
Design documents, evaluation results and previous patches can be found in 
HBASE-13408. 
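
As a rough illustration of the flush-to-pipeline idea: this is a toy sketch, not the proposed implementation. Class and method names are invented, cells are plain key/value strings (real cells carry row, column, and timestamp), and the threshold is arbitrary.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactingMemstoreSketch {
    private List<String[]> active = new ArrayList<>();               // mutable segment of (key, value) cells
    private final List<List<String[]>> pipeline = new ArrayList<>(); // immutable in-memory segments

    void put(String key, String value) {
        active.add(new String[]{key, value});
        if (active.size() >= 4) inMemoryFlush(); // flush in memory, not to disk
    }

    private void inMemoryFlush() {
        pipeline.add(active);      // segment becomes immutable; scans can still read it
        active = new ArrayList<>();
        compactPipeline();
    }

    private void compactPipeline() {
        // Merge immutable segments, eliminating redundancy: keep only the newest cell per key.
        Map<String, String> newest = new LinkedHashMap<>();
        for (List<String[]> seg : pipeline)
            for (String[] cell : seg) newest.put(cell[0], cell[1]);
        pipeline.clear();
        List<String[]> merged = new ArrayList<>();
        newest.forEach((k, v) -> merged.add(new String[]{k, v}));
        pipeline.add(merged);
    }

    int pipelineCellCount() { return pipeline.stream().mapToInt(List::size).sum(); }

    public static void main(String[] args) {
        CompactingMemstoreSketch m = new CompactingMemstoreSketch();
        for (int i = 0; i < 8; i++) m.put("row-" + (i % 2), "v" + i); // heavy key churn: 2 distinct keys
        // 8 cells were written, but in-memory compaction keeps one version per key
        System.out.println("cells in pipeline: " + m.pipelineCellCount());
    }
}
```

This shows the payoff for high-churn workloads: the compacted in-memory footprint stays proportional to the number of distinct keys, not the number of updates, postponing the flush to disk.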





[jira] [Created] (HBASE-14921) Memory optimizations

2015-12-03 Thread Eshcar Hillel (JIRA)
Eshcar Hillel created HBASE-14921:
-

 Summary: Memory optimizations
 Key: HBASE-14921
 URL: https://issues.apache.org/jira/browse/HBASE-14921
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 2.0.0
Reporter: Eshcar Hillel


Memory optimizations, including compressed format representation and offheap 
allocations.





[jira] [Created] (HBASE-14919) Infrastructure refactoring

2015-12-03 Thread Eshcar Hillel (JIRA)
Eshcar Hillel created HBASE-14919:
-

 Summary: Infrastructure refactoring
 Key: HBASE-14919
 URL: https://issues.apache.org/jira/browse/HBASE-14919
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 2.0.0
Reporter: Eshcar Hillel
Assignee: Eshcar Hillel


Refactoring the MemStore hierarchy, introducing segment (StoreSegment) as 
first-class citizen and decoupling memstore scanner from the memstore 
implementation.





[jira] [Created] (HBASE-14920) Compacting Memstore

2015-12-03 Thread Eshcar Hillel (JIRA)
Eshcar Hillel created HBASE-14920:
-

 Summary: Compacting Memstore
 Key: HBASE-14920
 URL: https://issues.apache.org/jira/browse/HBASE-14920
 Project: HBase
  Issue Type: Sub-task
Reporter: Eshcar Hillel
Assignee: Eshcar Hillel


Implementation of a new compacting memstore with non-optimized immutable 
segment representation.





[jira] [Created] (HBASE-14922) Delayed flush doesn't work causing flush storms.

2015-12-03 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14922:
-

 Summary: Delayed flush doesn't work causing flush storms.
 Key: HBASE-14922
 URL: https://issues.apache.org/jira/browse/HBASE-14922
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark


Starting all regionservers at the same time means that most 
PeriodicMemstoreFlushers will be running at the same time, so all of these 
threads queue flushes at about the same time.

This was supposed to be mitigated by Delayed; however, Delayed isn't used at 
all. The result is that the flush queues fill up and then drain immediately 
every hour.
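
A sketch of the intended mitigation: queue each periodic flush with a random delay so co-started regionservers spread their flushes out rather than draining the queue in lockstep. The constants and names below are illustrative, not HBase's.

```java
import java.util.Random;

public class JitteredFlushDemo {
    static final long MAX_JITTER_MS = 300_000; // spread queued flushes over 5 minutes

    // Delay to attach to a queued flush; without jitter this would always be 0
    // and every region's flush would run the instant the periodic check fires.
    static long scheduleDelay(Random rng) {
        return rng.nextInt((int) MAX_JITTER_MS);
    }

    public static void main(String[] args) {
        Random rng = new Random();
        // Three regions whose periodic flushes fire in the same hourly check
        for (int region = 0; region < 3; region++) {
            long delay = scheduleDelay(rng);
            System.out.println("region-" + region + " flush delayed " + delay + " ms");
        }
    }
}
```

In the real fix the delay would be honored by the flush queue (e.g. via the Delayed interface on a DelayQueue), which is exactly the part the report says is currently unused.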





[jira] [Created] (HBASE-14924) Slow response from HBASE REStful interface

2015-12-03 Thread Moulay Amine Jaidi (JIRA)
Moulay Amine Jaidi created HBASE-14924:
--

 Summary: Slow response from HBASE REStful interface
 Key: HBASE-14924
 URL: https://issues.apache.org/jira/browse/HBASE-14924
 Project: HBase
  Issue Type: Brainstorming
  Components: REST
Affects Versions: 1.1.1
 Environment: IBM Biginsights 4.1
Reporter: Moulay Amine Jaidi
Priority: Blocker



We are currently experiencing an issue with HBase through the REST interface. 
Previously we were on version 0.96 and were able to run the following REST 
command successfully and very quickly:

http://10.92.211.22:60800/tableName/RAWKEY.*

Since upgrading to 1.1.1, this request takes a lot longer to retrieve results 
(the count is only 12 items to return).

Are there any configurations or known issues that may affect this?






Re: [DISCUSS] Moving the HBase site to use a stand-alone git repo

2015-12-03 Thread Stack
On Wed, Dec 2, 2015 at 9:21 PM, Misty Stanley-Jones <
mstanleyjo...@cloudera.com> wrote:

> The Jenkins job would checkout the main repo's master branch, run 'mvn
> clean site site:stage' to generate the site, docs, and APIdocs. Then it
> would checkout the asf-site branch of the hbase-site repo (or whatever it
> is called) and commit the newly-generated target/stage/* to it and push.
> Does that make sense?
>
>
Yes.
St.Ack



> On Thu, Dec 3, 2015 at 6:33 AM, Stack  wrote:
>
> > Good by me. Interested in the answers to Nick's questions too.
> > St.Ack
> >
> > On Wed, Dec 2, 2015 at 11:07 AM, Nick Dimiduk 
> wrote:
> >
> > > +1 in theory. How will this work with integration of javadoc into the
> > site?
> > > How will RM's manage integrating site docs into their releases?
> > >
> > > On Wed, Dec 2, 2015 at 8:56 AM, Sean Busbey 
> wrote:
> > >
> > > > Hi Folks!
> > > >
> > > > You may recall the occasional emails dev@ gets from a Jenkins job
> > Misty
> > > > set
> > > > up to make updating the website easier for us. They're titled "HBase
> > > > Generate Website" and they give a series of steps any committer can
> run
> > > to
> > > > push the changes live.
> > > >
> > > > Misty has been investigating automating this entirely[1], so that
> once
> > > > updates land in the master source branch the website just updates.
> IMO,
> > > > this would go a long way to improving how consistently updates make
> it
> > to
> > > > our primary public-facing presence.
> > > >
> > > > During our conversation with INFRA (on the jira[1] and in a
> > infra@apache
> > > > thread), the consensus seems to be that having an automated non-human
> > > > process push to a repo that doesn't contain source that might lead
> to a
> > > > release is acceptable. In contrast, such non-human pushing to our
> main
> > > repo
> > > > (even if just to the asf-site branch) is seen as higher risk that
> would
> > > > require a policy decision.
> > > >
> > > > Is everyone (especially PMCs) fine with us moving our site to a
> > different
> > > > repository?
> > > >
> > > > Presumably something like hbase-site. The expectation is that in
> almost
> > > all
> > > > cases folks won't need to checkout or track this remote since the
> > > automated
> > > > job will be pushing rendered updates for us.
> > > >
> > > >
> > > > [1]: https://issues.apache.org/jira/browse/INFRA-10722
> > > >
> > > > --
> > > > Sean
> > > >
> > >
> >
>


Re: Would ROWCOL Bloom filter help in Scan

2015-12-03 Thread Stack
On Wed, Dec 2, 2015 at 10:01 PM, Jerry He  wrote:

> Thanks for the response.  You got my question correctly.
> If we are scanning the rows one by one and we have the requested column in
> the column tracker, we have the row+column to look up in the bloom filter,
> don't we? We may not be able to filter out the file scanners upfront. But
> may at the later time and lower level to skip something?
>
>
You are right. If more than one explicit column is specified, we could do a
bloom check for the second and onwards, since we'd have the current row to
hand. It could make for a nice speedup for scans of many explicit columns
traversing a sparsely populated dataset.

St.Ack



> Jerry
>
> On Mon, Nov 30, 2015 at 10:55 PM, Stack  wrote:
>
> > On Mon, Nov 30, 2015 at 9:56 AM, Jerry He  wrote:
> >
> > > Hi, experts
> > >
> > > HBASE supports ROWCOL bloom filter. ROW+COL would be the bloom key.
> > > In most of the documentations, it says only GET would benefit. For
> > > multi-column as well.
> > >
> > > If I do scan with StartRow and EndRow, and also specify columns.
> > > Would ROWCOL bloom filter provide any benefit in anyway?
> > >
> > >
> > If I understand your question properly, the answer is no. While we might
> > have a set of columns to check in the bloom, we'd not know the set of
> rows
> > between start and end row and so would not be able to formulate a query
> > against the ROW+COL bloom filter.
> >
> > St.Ack
> >
> >
> >
> > > Thank you.
> > >
> > > Jerry
> > >
> >
>
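
The idea in the exchange above can be sketched as follows. This is a toy illustration: a HashSet stands in for the ROWCOL bloom filter (a real bloom can also return false positives, never false negatives), and the key format is invented.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RowColBloomSketch {
    public static void main(String[] args) {
        // Keys actually present in one store file of a sparsely populated table
        Set<String> rowColBloom = new HashSet<>(Arrays.asList("row1/colA", "row2/colB"));
        List<String> explicitColumns = Arrays.asList("colA", "colB", "colC");

        // Once the scan has the current row in hand, each requested row+col
        // key can be checked against the ROWCOL bloom before seeking.
        for (String row : Arrays.asList("row1", "row2")) {
            for (String col : explicitColumns) {
                String key = row + "/" + col;
                if (rowColBloom.contains(key)) {
                    System.out.println("seek " + key);
                } else {
                    System.out.println("skip " + key + " (bloom says absent)");
                }
            }
        }
    }
}
```

The upfront file-scanner filtering from the first reply still can't happen (the rows between start and end aren't known in advance), but per-row the bloom lets the scanner skip seeks for columns the file provably doesn't contain.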


Re: -1 overall but +1 in individual tests

2015-12-03 Thread Stack
Sorry Appy. Probably my fault, or at least my hackery is making it harder to
figure out what is going on at the moment. I've been messing with test-patch to
try and get better reporting on zombies. I committed changes yesterday.
See HBASE-14772. I say there "Here is some more cleanup in zombie
detector... Better integration with the test-patch.sh. Pushed to master.
Could break build. Will keep an eye out." I should have added a note here in
dev that I was messing around and that hadoopqa reporting could be wonky (I
suppose I didn't think anyone would notice (smile)).

So, it seems like a clean build is possible post-my-messings:
https://builds.apache.org/view/H-L/view/HBase/job/PreCommit-HBASE-Build/16755/console

But my changes are suppressing reporting of all but the zombie output. Let me
fix this up and then let's come back to getting reporting on this issue.

St.Ack

On Wed, Dec 2, 2015 at 7:18 PM, Apekshit Sharma  wrote:

> Hi
> Any ideas what can be the reason here?
>
> https://issues.apache.org/jira/browse/HBASE-14865?focusedCommentId=15037056=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15037056
>
> -- Appy
>


Re: Build failed in Jenkins: HBase-Trunk_matrix » latest1.8,Hadoop #527

2015-12-03 Thread Stack
This test usually passes:
https://builds.apache.org/view/H-L/view/HBase/job/HBase-Trunk_matrix/527/jdk=latest1.8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.replication.regionserver/TestReplicationThrottler/testThrottling/history/

Will wait till it fails more.

St.Ack


On Thu, Dec 3, 2015 at 2:16 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://builds.apache.org/job/HBase-Trunk_matrix/jdk=latest1.8,label=Hadoop/527/changes
> >
>
> Changes:
>
> [stack]  HBASE-14772 Improve zombie detector; be more discerning; part2;
>
> --
> [...truncated 4697 lines...]
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.889 sec
> - in org.apache.hadoop.hbase.ipc.TestBufferChain
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.ipc.TestIPC
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.749 sec
> - in org.apache.hadoop.hbase.ipc.TestIPC
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.ipc.TestRpcHandlerException
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.002 sec
> - in org.apache.hadoop.hbase.ipc.TestRpcHandlerException
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.941 sec
> - in org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.ipc.TestCallRunner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.382 sec
> - in org.apache.hadoop.hbase.ipc.TestCallRunner
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.ipc.TestGlobalEventLoopGroup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.377 sec
> - in org.apache.hadoop.hbase.ipc.TestGlobalEventLoopGroup
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.ipc.TestAsyncIPC
> Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.386
> sec - in org.apache.hadoop.hbase.ipc.TestAsyncIPC
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.ipc.TestRpcMetrics
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.225 sec
> - in org.apache.hadoop.hbase.ipc.TestRpcMetrics
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.TestTableDescriptor
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.308 sec
> - in org.apache.hadoop.hbase.TestTableDescriptor
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.zookeeper.TestZKConfig
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.704 sec
> - in org.apache.hadoop.hbase.zookeeper.TestZKConfig
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=256m; support was removed in 8.0
> Running org.apache.hadoop.hbase.zookeeper.TestZooKeeperMainServer
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.162 sec
> - in 

Re: Testing and CI -- Apache Jenkins Builds (WAS -> Re: Testing)

2015-12-03 Thread Stack
Notice: I'm messing with test-patch.sh reporting, trying to improve the
zombie section. I'll likely break things for a while (I already have -- the
hadoopqa report section is curtailed at the moment). Will flag when done.
St.Ack

On Wed, Dec 2, 2015 at 1:22 PM, Stack  wrote:

> As part of my continuing advocacy of builds.apache.org and that their
> results are now worthy of our trust and nurture, here are some highlights
> from the last few days of builds:
>
> + hadoopqa is now finding zombies before the patch is committed.
> HBASE-14888 showed "-1 core tests. The patch failed these unit tests:" but
> didn't have any failed tests listed (I'm trying to see if I can do anything
> about this...). Running our little ./dev-tools/findHangingTests.py against
> the consoleText, it showed a hanging test. Running locally, I see same
> hang. This is before the patch landed.
> + Our branch runs are now near totally zombie and flakey free -- still
> some work to do -- but a recent patch that seemed harmless was causing a
> reliable flake fail in the backport to branch-1* confirmed by local runs.
> The flakeyness was plain to see up in builds.apache.org.
> + In the last few days I've committed a patch that introduced javadoc
> warnings even though hadoopqa flagged the javadoc issues (I missed it).
> This messed up life for folks subsequently, as their patches then reported
> javadoc issues.
>
> In short, I suggest that builds.apache.org is worth keeping an eye on,
> make sure you get a clean build out of hadoopqa before committing anything,
> and lets all work together to try and keep our builds blue: it'll save us
> all work in the long run.
>
> St.Ack
>
>
> On Tue, Nov 4, 2014 at 9:38 AM, Stack  wrote:
>
>> Branch-1 and master have stabilized and now run mostly blue (give or take
>> the odd failure) [1][2]. Having a mostly blue branch-1 has helped us
>> identify at least one destabilizing commit in the last few days, maybe two;
>> this is as it should be (smile).
>>
>> Lets keep our builds blue. If you commit a patch, make sure subsequent
>> builds stay blue. You can subscribe to bui...@hbase.apache.org to get
>> notice of failures if not already subscribed.
>>
>> Thanks,
>> St.Ack
>>
>> 1. https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.0/
>> 2. https://builds.apache.org/view/H-L/view/HBase/job/HBase-TRUNK/
>>
>>
>> On Mon, Oct 13, 2014 at 4:41 PM, Stack  wrote:
>>
>>> A few notes on testing.
>>>
>>> Too long to read, infra is more capable now and after some work, we are
>>> seeing branch-1 and trunk mostly running blue. Lets try and keep it this
>>> way going forward.
>>>
>>> Apache Infra has new, more capable hardware.
>>>
>>> A recent spurt of test fixing combined with more capable hardware seems
>>> to have gotten us to a new place; tests are mostly passing now on branch-1
>>> and master.  Lets try and keep it this way and start to trust our test runs
>>> again.  Just a few flakies remain.  Lets try and nail them.
>>>
>>> Our tests now run in parallel with other test suites where previous we
>>> ran alone. You can see this sometimes when our zombie detector reports
>>> tests from another project altogether as lingerers (To be fixed).  Some of
>>> our tests are failing because a concurrent hbase run is undoing classes and
>>> data from under it. Also, lets fix.
>>>
>>> Our tests are brittle. It takes 75minutes for them to complete.  Many
>>> are heavy-duty integration tests starting up multiple clusters and
>>> mapreduce all in the one JVM. It is a miracle they pass at all.  Usually
>>> integration tests have been cast as unit tests because there was no where
>>> else for them to get an airing.  We have the hbase-it suite now which would
>>> be a more apt place but until these are run on a regular basis in public
>>> for all to see, the fat integration tests disguised as unit tests will
>>> remain.  A review of our current unit tests weeding the old cruft and the
>>> no longer relevant or duplicates would be a nice undertaking if someone is
>>> looking to contribute.
>>>
>>> Alex Newman has been working on making our tests work up on travis and
>>> circle-ci.  That'll be sweet when it goes end-to-end.  He also added in
>>> some "type" categorizations -- client, filter, mapreduce -- alongside our
>>> old "sizing" categorizations of small/medium/large.  His thinking is that
>>> we can run these categorizations in parallel so we could run the total
>>> suite in about the time of the longest test, say 20-30minutes?  We could
>>> even change Apache to run them this way.
>>>
>>> FYI,
>>> St.Ack
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>


Successful: HBase Generate Website

2015-12-03 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. If failed, skip to the 
bottom of this email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site:

  wget -O- https://builds.apache.org/job/hbase_generate_website/52/artifact/website.patch.zip | funzip > 69658ea4a916c8ea5e6dd7d056a548e8dce4e96d.patch
  git fetch
  git checkout -b asf-site-69658ea4a916c8ea5e6dd7d056a548e8dce4e96d origin/asf-site
  git am 69658ea4a916c8ea5e6dd7d056a548e8dce4e96d.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-69658ea4a916c8ea5e6dd7d056a548e8dce4e96d branch, and you can review 
the differences by running:

  git diff origin/asf-site

When you are satisfied, publish your changes to origin/asf-site using this 
command:

  git push origin asf-site-69658ea4a916c8ea5e6dd7d056a548e8dce4e96d:asf-site

Changes take a couple of minutes to be propagated. You can then remove your 
asf-site-69658ea4a916c8ea5e6dd7d056a548e8dce4e96d branch:

  git checkout master && git branch -d asf-site-69658ea4a916c8ea5e6dd7d056a548e8dce4e96d



If failed, see https://builds.apache.org/job/hbase_generate_website/52/console

Re: [DISCUSS] Moving the HBase site to use a stand-alone git repo

2015-12-03 Thread Nick Dimiduk
+1

On Wed, Dec 2, 2015 at 9:21 PM, Misty Stanley-Jones <
mstanleyjo...@cloudera.com> wrote:

> The Jenkins job would checkout the main repo's master branch, run 'mvn
> clean site site:stage' to generate the site, docs, and APIdocs. Then it
> would checkout the asf-site branch of the hbase-site repo (or whatever it
> is called) and commit the newly-generated target/stage/* to it and push.
> Does that make sense?
>
> On Thu, Dec 3, 2015 at 6:33 AM, Stack  wrote:
>
> > Good by me. Interested in the answers to Nick's questions too.
> > St.Ack
> >
> > On Wed, Dec 2, 2015 at 11:07 AM, Nick Dimiduk 
> wrote:
> >
> > > +1 in theory. How will this work with integration of javadoc into the
> > site?
> > > How will RM's manage integrating site docs into their releases?
> > >
> > > On Wed, Dec 2, 2015 at 8:56 AM, Sean Busbey 
> wrote:
> > >
> > > > Hi Folks!
> > > >
> > > > You may recall the occasional emails dev@ gets from a Jenkins job
> > Misty
> > > > set
> > > > up to make updating the website easier for us. They're titled "HBase
> > > > Generate Website" and they give a series of steps any committer can
> run
> > > to
> > > > push the changes live.
> > > >
> > > > Misty has been investigating automating this entirely[1], so that
> once
> > > > updates land in the master source branch the website just updates.
> IMO,
> > > > this would go a long way to improving how consistently updates make
> it
> > to
> > > > our primary public-facing presence.
> > > >
> > > > During our conversation with INFRA (on the jira[1] and in a
> > infra@apache
> > > > thread), the consensus seems to be that having an automated non-human
> > > > process push to a repo that doesn't contain source that might lead
> to a
> > > > release is acceptable. In contrast, such non-human pushing to our
> main
> > > repo
> > > > (even if just to the asf-site branch) is seen as higher risk that
> would
> > > > require a policy decision.
> > > >
> > > > Is everyone (especially PMCs) fine with us moving our site to a
> > different
> > > > repository?
> > > >
> > > > Presumably something like hbase-site. The expectation is that in
> almost
> > > all
> > > > cases folks won't need to checkout or track this remote since the
> > > automated
> > > > job will be pushing rendered updates for us.
> > > >
> > > >
> > > > [1]: https://issues.apache.org/jira/browse/INFRA-10722
> > > >
> > > > --
> > > > Sean
> > > >
> > >
> >
>


[jira] [Resolved] (HBASE-14924) Slow response from HBASE REStful interface

2015-12-03 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-14924.

Resolution: Invalid

This is the project development tracker.

For user help and troubleshooting advice, please write to 
u...@hbase.apache.org. 


> Slow response from HBASE REStful interface
> --
>
> Key: HBASE-14924
> URL: https://issues.apache.org/jira/browse/HBASE-14924
> Project: HBase
>  Issue Type: Brainstorming
>  Components: REST
>Affects Versions: 1.1.1
> Environment: IBM Biginsights 4.1
>Reporter: Moulay Amine Jaidi
>Priority: Blocker
>  Labels: REST, hbase-rest, slow-scan
>
> We are currently experiencing an issue with HBase through the REST interface. 
> Previously we were on version 0.96 and were able to run the following REST 
> command successfully and very quickly:
> http://10.92.211.22:60800/tableName/RAWKEY.*
> Since upgrading to 1.1.1, this request takes a lot longer to retrieve results 
> (the count is only 12 items to return).
> Are there any configurations or known issues that may affect this?





Fwd: [NOTICE] people.apache.org web space is moving to home.apache.org

2015-12-03 Thread Andrew Purtell
Please note that the infrastructure team is making a significant change.
people.apache.org will be going away to be replaced with home.apache.org,
but only for hosting public web content, and only accessible (by
committers/members) via sftp.

Some of us, like myself, have been hosting release candidate binaries on
people.apache.org. Any of us doing that will need to switch to publishing
release candidates on dist.apache.org instead.

We have also in the past used people.apache.org to host temporary maven
repositories. I checked root poms for our active branches. Only 0.94 will
be affected when people.apache.org goes away. That may produce build
failures, so if we make another 0.94 release we should include a fix for
this.


-- Forwarded message --
From: Daniel Gruno 
Date: Wed, Nov 25, 2015 at 4:20 AM
Subject: [NOTICE] people.apache.org web space is moving to home.apache.org
To: committ...@apache.org


Hi folks,
as the subject says, people.apache.org is being decommissioned soon, and
personal web space is being moved to a new home, aptly named
home.apache.org ( https://home.apache.org/ )

IMPORTANT:
If you have things on people.apache.org that you would like to retain,
please make a copy of it and move it to home.apache.org. (note, you will
have to make a folder called 'public_html' there, for items to show up
under https://home.apache.org/~yourID/ ).

We will _NOT_ be moving your data for you. There is simply too much old
junk data on minotaur (the current people.apache.org machine) for it to
make sense to rsync it across, so we have made the decision that moving
data is up to each individual committer.

The new host, home.apache.org, will ONLY be for web space, you will not
have shell access to the machine (but you can copy data to it using SFTP
and your SSH key). Access to modify LDAP records (for project chairs)
will be moved to a separate host when the time comes.

There will be a 3 month grace period to move your data across. After
this time span (March 1st, 2016), minotaur will no longer serve up
personal web space, and visits to people.apache.org will be redirected
to home.apache.org.

With regards,
Daniel on behalf of the Apache Infrastructure Team.

PS: All replies to this should go to infrastruct...@apache.org



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: Would ROWCOL Bloom filter help in Scan

2015-12-03 Thread Stack
On Thu, Dec 3, 2015 at 12:54 PM, Jerry He  wrote:

> Thanks, Stack.
> I will look into the code more as well.
> Do you think a column-only Bloom filter would help more with this SCAN +
> explicit-columns case, and save space as well?
>
>
Come again, Jerry. Column-only? (It has to have a row on it, right?) And
how do we get space savings?

There is a bloom at the start of every row already, to speed up deletes. IIRC,
we always read this first before we do anything else. Perhaps we could beef it
up to cover more than just deletes?

St.Ack



> Jerry
>
> On Thu, Dec 3, 2015 at 9:01 AM, Stack  wrote:
>
> > On Wed, Dec 2, 2015 at 10:01 PM, Jerry He  wrote:
> >
> > > Thanks for the response.  You got my question correctly.
> > > If we are scanning the rows one by one and we have the requested column
> > in
> > > the column tracker, we have the row+column to look up in the bloom
> > filter,
> > > don't we? We may not be able to filter out the file scanners upfront,
> > > but maybe at a later time and lower level we could skip something?
> > >
> > >
> > You are right. If more than one explicit column is specified, we could
> > do a bloom check for the second and so on, since
> > we'd have the current row to hand. It could make for a nice speedup for
> > scans of many explicit columns traversing a dataset that is sparsely
> > populated.
> >
> > St.Ack
> >
> >
> >
> > > Jerry
> > >
> > > On Mon, Nov 30, 2015 at 10:55 PM, Stack  wrote:
> > >
> > > > On Mon, Nov 30, 2015 at 9:56 AM, Jerry He 
> wrote:
> > > >
> > > > > Hi, experts
> > > > >
> > > > > HBASE supports ROWCOL bloom filter. ROW+COL would be the bloom key.
> > > > > In most of the documentation, it says only GET (including
> > > > > multi-column GET) would benefit.
> > > > >
> > > > > If I do a scan with StartRow and EndRow, and also specify columns,
> > > > > would the ROWCOL bloom filter provide any benefit in any way?
> > > > >
> > > > >
> > > > If I understand your question properly, the answer is no. While we
> > might
> > > > have a set of columns to check in the bloom, we'd not know the set of
> > > rows
> > > > between start and end row and so would not be able to formulate a
> query
> > > > against the ROW+COL bloom filter.
> > > >
> > > > St.Ack
> > > >
> > > >
> > > >
> > > > > Thank you.
> > > > >
> > > > > Jerry
> > > > >
> > > >
> > >
> >
>
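Stack's point above, that a ROWCOL bloom needs a concrete row key before it can be probed, can be sketched with a toy membership structure. A HashSet stands in for the probabilistic bloom here; the class and key names are illustrative, not the HBase API.

```java
import java.util.HashSet;
import java.util.Set;

// Toy stand-in for a ROWCOL bloom: membership test keyed on row + "/" + column.
// (A real bloom is probabilistic; a HashSet is exact, but the lookup shape is the same.)
public class RowColBloomSketch {
    private final Set<String> keys = new HashSet<>();

    void add(String row, String col) {
        keys.add(row + "/" + col);
    }

    // A GET knows its exact row, so it can form the row+col key and probe the bloom.
    boolean mightContain(String row, String col) {
        return keys.contains(row + "/" + col);
    }

    public static void main(String[] args) {
        RowColBloomSketch bloom = new RowColBloomSketch();
        bloom.add("row-0001", "cf:a");

        // GET: exact row known, so a bloom probe is possible and can skip a file.
        System.out.println(bloom.mightContain("row-0001", "cf:a")); // true
        System.out.println(bloom.mightContain("row-0002", "cf:a")); // false

        // SCAN over ["row-0000", "row-9999"): the rows in the range are unknown
        // up front, so there is no row+col key to probe. The bloom cannot be
        // consulted until a row has actually been found mid-scan, which is the
        // opening for checking second and subsequent explicit columns.
    }
}
```

The design point is that the probe requires the full composite key; a range of rows gives you no key to hash.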


Re: Would ROWCOL Bloom filter help in Scan

2015-12-03 Thread Jerry He
Thanks, Stack.
I will look into the code more as well.
Do you think a column-only Bloom filter would help more with this SCAN +
explicit-columns case, and save space as well?

Jerry

On Thu, Dec 3, 2015 at 9:01 AM, Stack  wrote:

> On Wed, Dec 2, 2015 at 10:01 PM, Jerry He  wrote:
>
> > Thanks for the response.  You got my question correctly.
> > If we are scanning the rows one by one and we have the requested column
> in
> > the column tracker, we have the row+column to look up in the bloom
> filter,
> > don't we? We may not be able to filter out the file scanners upfront,
> > but maybe at a later time and lower level we could skip something?
> >
> >
> You are right. If more than one explicit column is specified, we could
> do a bloom check for the second and so on, since
> we'd have the current row to hand. It could make for a nice speedup for
> scans of many explicit columns traversing a dataset that is sparsely
> populated.
>
> St.Ack
>
>
>
> > Jerry
> >
> > On Mon, Nov 30, 2015 at 10:55 PM, Stack  wrote:
> >
> > > On Mon, Nov 30, 2015 at 9:56 AM, Jerry He  wrote:
> > >
> > > > Hi, experts
> > > >
> > > > HBASE supports ROWCOL bloom filter. ROW+COL would be the bloom key.
> > > > In most of the documentation, it says only GET (including
> > > > multi-column GET) would benefit.
> > > >
> > > > If I do a scan with StartRow and EndRow, and also specify columns,
> > > > would the ROWCOL bloom filter provide any benefit in any way?
> > > >
> > > >
> > > If I understand your question properly, the answer is no. While we
> might
> > > have a set of columns to check in the bloom, we'd not know the set of
> > rows
> > > between start and end row and so would not be able to formulate a query
> > > against the ROW+COL bloom filter.
> > >
> > > St.Ack
> > >
> > >
> > >
> > > > Thank you.
> > > >
> > > > Jerry
> > > >
> > >
> >
>


[jira] [Created] (HBASE-14925) Develop HBase shell command/tool to list table's region info through command line

2015-12-03 Thread Romil Choksi (JIRA)
Romil Choksi created HBASE-14925:


 Summary: Develop HBase shell command/tool to list table's region 
info through command line
 Key: HBASE-14925
 URL: https://issues.apache.org/jira/browse/HBASE-14925
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Romil Choksi


I am going through the hbase shell commands to see if there is anything I can
use to get all the region info just for a particular table. I don't see any
such command that provides that information.
It would be better to have a command that provides region info, start key, end
key, etc., taking a table name as the input parameter. This information is
available through the HBase UI by clicking on a particular table's link.

A tool/shell command to get a list of regions for a table, or for all tables,
in a tabular, structured output that is machine readable.





RE: Build failed in Jenkins: HBase-Trunk_matrix » latest1.8,Hadoop #513

2015-12-03 Thread Du, Jingcheng
Hi, this issue is filed as HBASE-14907. A patch is now available.
The code tried to fetch the HTableDescriptor after the table directory had been
removed (at that point the HTableDescriptor was no longer cached in memory).
The patch instead checks directly whether the mob directory exists in the
filesystem, rather than asking for the HTableDescriptor.
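As a rough illustration of the fix's shape, probing the filesystem directly instead of consulting a descriptor that may no longer be cached, here is a minimal sketch using java.nio.file as a stand-in for the Hadoop FileSystem API. The directory name `mobdir` and the class and method names are hypothetical, not the actual HBase layout or patch code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MobDirCheckSketch {
    // Instead of asking for a table descriptor that may no longer be cached,
    // probe the filesystem directly for the mob directory under the table dir.
    static boolean hasMobDir(Path tableDir) {
        return Files.isDirectory(tableDir.resolve("mobdir"));
    }

    public static void main(String[] args) throws IOException {
        Path table = Files.createTempDirectory("table");
        System.out.println(hasMobDir(table)); // false: no mob dir yet
        Files.createDirectory(table.resolve("mobdir"));
        System.out.println(hasMobDir(table)); // true once it exists
    }
}
```

The advantage of this shape is that the existence check needs no metadata cache at all, so it cannot race with descriptor eviction during table deletion.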

Regards,
Jingcheng

-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: Tuesday, December 1, 2015 1:10 AM
To: HBase Dev List
Cc: bui...@hbase.apache.org
Subject: Re: Build failed in Jenkins: HBase-Trunk_matrix » latest1.8,Hadoop #513

It looks like the latch is not yet set. Can we keep going when there is no
latch to use, rather than NPE?
St.Ack
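Stack's suggestion, proceeding when no latch has been set instead of throwing an NPE, might look something like this null-guard sketch. The class and method names are made up for illustration; this is not the actual test code.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchGuardSketch {
    volatile CountDownLatch latch; // may not have been initialized yet

    // Instead of dereferencing an unset latch (NPE), treat "no latch" as
    // "nothing to wait for" and keep going.
    boolean awaitIfPresent(long timeoutMs) throws InterruptedException {
        CountDownLatch l = latch; // single read of the volatile field
        if (l == null) {
            return true; // latch never set: proceed rather than throw
        }
        return l.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        LatchGuardSketch g = new LatchGuardSketch();
        System.out.println(g.awaitIfPresent(10)); // true: no latch, keep going
        g.latch = new CountDownLatch(1);
        g.latch.countDown();
        System.out.println(g.awaitIfPresent(10)); // true: latch already released
    }
}
```

Copying the volatile field to a local before the null check matters: it avoids a race where the field is checked non-null and then nulled by another thread before the dereference.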

On Mon, Nov 30, 2015 at 9:08 AM, Stack  wrote:

> If you look at the logs for the above fail run, does anything pop out?
> Thanks.
> St.Ack
>
> On Sun, Nov 29, 2015 at 11:33 PM, ramkrishna vasudevan < 
> ramkrishna.s.vasude...@gmail.com> wrote:
>
>> Not able to reproduce the failures. Trying.
>>
>> On Mon, Nov 30, 2015 at 12:26 PM, ramkrishna vasudevan < 
>> ramkrishna.s.vasude...@gmail.com> wrote:
>>
>> > Sure will take a look at this. There are other NPEs also
>> >
>> > java.lang.NullPointerException
>> >   at org.apache.hadoop.hbase.mob.MobUtils.hasMobColumns(MobUtils.java:851)
>> >   at org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure.deleteFromFs(DeleteTableProcedure.java:350)
>> >   at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.rollbackState(CreateTableProcedure.java:167)
>> >   at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.rollbackState(CreateTableProcedure.java:57)
>> >   at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.rollback(StateMachineProcedure.java:134)
>> >   at org.apache.hadoop.hbase.procedure2.Procedure.doRollback(Procedure.java:467)
>> >   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeRollback(ProcedureExecut
>> >
>> >
>> > Regards
>> >
>> > Ram
>> >
>> >
>> > On Mon, Nov 30, 2015 at 12:22 PM, Stack  wrote:
>> >
>> >> Seems like a basic fail NPE. See below. Test came in here:
>> >>
>> >> commit ccb22bd80dfae64ff27f660254afb224dce268f0
>> >> Author: ramkrishna 
>> >> Date:   Tue Jul 21 21:15:32 2015 +0530
>> >>
>> >> HBASE-12295 Prevent block eviction under us if reads are in
>> progress
>> >> from
>> >> the BBs (Ram)
>> >>
>> >> Maybe have a look Ram?
>> >>
>> >> Thanks,
>> >> St.Ack
>> >>
>> >>
>> >> 2015-11-30 04:23:40,444 ERROR [B.defaultRpcServer.handler=9,queue=1,port=57371] coprocessor.CoprocessorHost(517): The coprocessor org.apache.hadoop.hbase.client.TestBlockEvictionFromClient$CustomInnerRegionObserverWrapper threw java.lang.NullPointerException
>> >> java.lang.NullPointerException
>> >>   at org.apache.hadoop.hbase.client.TestBlockEvictionFromClient$CustomInnerRegionObserver.slowdownCode(TestBlockEvictionFromClient.java:1423)
>> >>   at org.apache.hadoop.hbase.client.TestBlockEvictionFromClient$CustomInnerRegionObserver.postScannerNext(TestBlockEvictionFromClient.java:1398)
>> >>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1349)
>> >>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1645)
>> >>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1721)
>> >>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1684)
>> >>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerNext(RegionCoprocessorHost.java:1344)
>> >>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2624)
>> >>   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33426)
>> >>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2184)
>> >>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>> >>   at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>> >>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>> >>   at java.lang.Thread.run(Thread.java:745)
>> >>
>> >>
>> >>
>> >>
>> >> On Sun, Nov 29, 2015 at 9:05 PM, Apache Jenkins Server < 
>> >> jenk...@builds.apache.org> wrote:
>> >>
>> >> > See <https://builds.apache.org/job/HBase-Trunk_matrix/jdk=latest1.8,label=Hadoop/513/>
>> >> >
>> >> > --
>> >> > [...truncated 6248 lines...]