[jira] [Created] (HBASE-19494) Create simple WALKey filter that can be plugged in on the Replication Sink

2017-12-11 Thread stack (JIRA)
stack created HBASE-19494:
-

 Summary: Create simple WALKey filter that can be plugged in on the 
Replication Sink
 Key: HBASE-19494
 URL: https://issues.apache.org/jira/browse/HBASE-19494
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: stack
Assignee: stack
 Fix For: 2.0.0-beta-1


hbase-indexer used to look at WALKeys on the sink to see if their time of 
creation was before the time at which the replication stream was enabled.

In the parent issue's redo, there is no longer a means of doing this (WALKey used 
to be Private, and to get at the WALKey in the Sink you had to override all of 
Replication, which meant importing a million Private objects...).

This issue is about adding a simple filter to Replication on the sink-side that 
just takes a WALKey (now InterfaceAudience LimitedPrivate and recently made 
read-only).

Assigned myself. Need to do this so hbase-indexer can move to hbase2.
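
For illustration, a minimal sketch of what such a sink-side hook could look like; the 
interface and method names here are assumptions for the example, not the final API:

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;

// Sketch only: names are illustrative, not necessarily the final HBase API.
interface WALEntrySinkFilter {
  /** Called once when the filter is instantiated on the replication sink. */
  void init(Connection connection);

  /** @return true if an entry for this table with this write time should be skipped. */
  boolean filter(TableName table, long writeTime);
}

// Example use: drop edits created before replication was enabled,
// mirroring what hbase-indexer used to do with the WALKey write time.
class SkipEntriesBeforeEnableFilter implements WALEntrySinkFilter {
  private final long replicationEnabledTs;

  SkipEntriesBeforeEnableFilter(long replicationEnabledTs) {
    this.replicationEnabledTs = replicationEnabledTs;
  }

  @Override
  public void init(Connection connection) {
    // No setup needed in this sketch.
  }

  @Override
  public boolean filter(TableName table, long writeTime) {
    return writeTime < replicationEnabledTs;
  }
}
{code}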





[jira] [Created] (HBASE-19493) Make TestWALMonotonicallyIncreasingSeqId also work with AsyncFSWAL

2017-12-11 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19493:
-

 Summary: Make TestWALMonotonicallyIncreasingSeqId also work with 
AsyncFSWAL
 Key: HBASE-19493
 URL: https://issues.apache.org/jira/browse/HBASE-19493
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.0.0-beta-1


Currently the test casts the WAL to FSHLog, so it will fail if we make AsyncFSWAL 
the default.
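
For illustration, a sketch of one way to make the test provider-agnostic: parameterize it 
over the hbase.wal.provider setting instead of casting to FSHLog (the actual test body is 
omitted; the shape below is an assumption, not the committed fix):

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.junit.Before;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class TestWALMonotonicallyIncreasingSeqId {

  @Parameterized.Parameter
  public String walProvider;

  @Parameterized.Parameters(name = "walProvider={0}")
  public static List<Object[]> params() {
    // "filesystem" selects FSHLog, "asyncfs" selects AsyncFSWAL.
    return Arrays.asList(new Object[][] { { "filesystem" }, { "asyncfs" } });
  }

  private final Configuration conf = HBaseConfiguration.create();

  @Before
  public void setUp() {
    conf.set("hbase.wal.provider", walProvider);
    // ... set up the region/WAL with this conf as the existing test does ...
  }

  // ... existing test methods, written against the WAL interface, with no FSHLog cast ...
}
{code}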





[jira] [Created] (HBASE-19492) Add EXCLUDE_NAMESPACE and EXCLUDE_TABLECFS support to replication peer config

2017-12-11 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-19492:
--

 Summary: Add EXCLUDE_NAMESPACE and EXCLUDE_TABLECFS support to 
replication peer config
 Key: HBASE-19492
 URL: https://issues.apache.org/jira/browse/HBASE-19492
 Project: HBase
  Issue Type: Improvement
Reporter: Guanghao Zhang
Assignee: Guanghao Zhang


This is a follow-up issue to HBASE-16868. Copied from the comments in HBASE-16868.

The replicate_all flag is useful for avoiding misuse of the replication peer config. 
On our clusters we have additional config for replication peers: EXCLUDE_NAMESPACE 
and EXCLUDE_TABLECFS. Let me say more about our use case. We have two online serving 
clusters and one offline cluster for MR/Spark jobs. On the online clusters, all 
tables replicate to each other. Not all tables replicate to the offline cluster, 
because not every table needs OLAP jobs. We have hundreds of tables, and if even one 
table doesn't need to replicate to the offline cluster, you end up configuring a lot 
of tables in the replication peer config. So we added a new config option, 
EXCLUDE_TABLECFS; then you only need to configure the one table (the one that 
doesn't need replication) in EXCLUDE_TABLECFS.

When the replicate_all flag is false, you can config NAMESPACE or TABLECFS, meaning 
which namespaces/tables should replicate to the peer cluster. When the replicate_all 
flag is true, you can config EXCLUDE_NAMESPACE or EXCLUDE_TABLECFS, meaning which 
namespaces/tables must not replicate to the peer cluster.
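
For illustration, a sketch of how the offline-cluster peer from the example above could be 
expressed once this is in place (setReplicateAllUserTables and setExcludeTableCFsMap are 
names assumed from this proposal, not a committed API):

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class ExcludeTableCfsExample {
  static void addOfflinePeer(Admin admin) throws Exception {
    ReplicationPeerConfig peerConfig = new ReplicationPeerConfig();
    peerConfig.setClusterKey("offline-zk:2181:/hbase");    // illustrative cluster key
    peerConfig.setReplicateAllUserTables(true);            // replicate everything...
    Map<TableName, List<String>> exclude = new HashMap<>();
    exclude.put(TableName.valueOf("no_olap_table"), null); // ...except this table (all CFs)
    peerConfig.setExcludeTableCFsMap(exclude);
    admin.addReplicationPeer("offline_cluster", peerConfig);
  }
}
{code}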





Re: Suggestion to speed up precommit - Reduce versions in Hadoop check

2017-12-11 Thread Apekshit Sharma
https://issues.apache.org/jira/browse/HBASE-19489

On Mon, Dec 11, 2017 at 4:30 PM, Josh Elser  wrote:

> +1
>
>
> On 12/11/17 7:11 PM, Apekshit Sharma wrote:
>
>> Oh, btw, here's the little piece of code if anyone wants to analyze more.
>>
>> Script to collect precommit runs' console text.
>>
>> #!/bin/bash
>>
>> for i in `seq 10100 10300`; do
>>wget -a log -O ${i}
>> https://builds.apache.org/job/PreCommit-HBASE-Build/${i}/consoleText
>> done
>>
>> Number of failed runs:
>> grep "|  -1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
>> print i;}' | wc -l
>>
>> Number of passed runs:
>> grep "|  +1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
>> print i;}' | wc -l
>>
>> -- Appy
>>
>>
>> On Mon, Dec 11, 2017 at 4:07 PM, Apekshit Sharma 
>> wrote:
>>
>> Hi
>>>
>>> +1 hadoopcheck 52m 1s Patch does not cause any errors with Hadoop 2.6.1
>>> 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4.
>>>
>>> Almost 1 hr to check against 10 versions. And it's only going to increase
>>> as more 2.6.x, 2.7.x and 3.0.x releases come out.
>>>
>>> Suggestion here is simple, let's check against only the latest
>>> maintenance
>>> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
>>> Advantage: Save ~40 min on pre-commit time.
>>>
>>> Justification:
>>> - We only do compile checks. Maintenance releases are not supposed to be
>>> doing API breaking changes. So checking against maintenance release for
>>> each minor version should be enough.
>>> - We rarely see any hadoop check -1, and most recent ones have been due
>>> to
>>> 3.0. These will still be caught.
>>> - Nightly can still check against all hadoop versions (since nightlies
>>> are
>>> supposed to do holistic testing)
>>> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) [10 days]:
>>>138 had +1 hadoopcheck
>>> 15 had -1 hadoopcheck
>>>(others probably failed even before that - merge issue, etc)
>>>
>>>
>>> Spot checking some failures:[10241,10246,10225,
>>> 10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230]
>>>
>>> 10241: All 2.6.x failed. Others didn't run
>>> 10246: All 10 versions failed.
>>> 10184: All 2.6.x and 2.7.x failed. Others didn't run
>>> 10223: All 10 versions failed
>>> 10230: All 2.6.x failed. Others didn't run
>>>
>>> Common pattern being, all maintenance versions fail together.
>>> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's
>>> irrelevant to this discussion).
>>>
>>> What do you say - only check latest maintenance releases in precommit
>>> (and
>>> let nightlies do holistic testing against all versions)?
>>>
>>> -- Appy
>>>
>>>
>>
>>
>>


-- 

-- Appy


[jira] [Created] (HBASE-19491) Exclude flaky tests from nightly master run

2017-12-11 Thread Appy (JIRA)
Appy created HBASE-19491:


 Summary: Exclude flaky tests from nightly master run
 Key: HBASE-19491
 URL: https://issues.apache.org/jira/browse/HBASE-19491
 Project: HBase
  Issue Type: Improvement
Reporter: Appy
Assignee: Appy


I was under the impression that nightly master runs were excluding flaky tests after 
seeing 
https://github.com/apache/hbase/blob/856ee283faf003404e8925006ce0e591c4eba600/dev-support/Jenkinsfile#L54
 a few days ago.
After looking at our new set of scripts again and understanding them better, it 
looks like that's not enough.
We need to set 
{code}DOCKER_EXTRAARGS=--env=EXCLUDE_TESTS_URL=${EXCLUDE_TESTS_URL}{code} like 
in https://builds.apache.org/job/PreCommit-HBASE-Build/configure





Suggestion to speed up precommit - Reduce versions in Hadoop check

2017-12-11 Thread Apekshit Sharma
Hi

+1 hadoopcheck 52m 1s Patch does not cause any errors with Hadoop 2.6.1
2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4.

Almost 1 hr to check against 10 versions. And it's only going to increase
as more 2.6.x, 2.7.x and 3.0.x releases come out.

Suggestion here is simple, let's check against only the latest maintenance
release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
Advantage: Save ~40 min on pre-commit time.

Justification:
- We only do compile checks. Maintenance releases are not supposed to be
doing API breaking changes. So checking against maintenance release for
each minor version should be enough.
- We rarely see any hadoop check -1, and most recent ones have been due to
3.0. These will still be caught.
- Nightly can still check against all hadoop versions (since nightlies are
supposed to do holistic testing)
- Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) [10 days]:
  138 had +1 hadoopcheck
   15 had -1 hadoopcheck
  (others probably failed even before that - merge issue, etc)


Spot checking some
failures:[10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230]

10241: All 2.6.x failed. Others didn't run
10246: All 10 versions failed.
10184: All 2.6.x and 2.7.x failed. Others didn't run
10223: All 10 versions failed
10230: All 2.6.x failed. Others didn't run

Common pattern being, all maintenance versions fail together.
(idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's
irrelevant to this discussion).

What do you say - only check latest maintenance releases in precommit (and
let nightlies do holistic testing against all versions)?

-- Appy


[jira] [Created] (HBASE-19490) Rare failure in TestRateLimiter

2017-12-11 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-19490:
--

 Summary: Rare failure in TestRateLimiter
 Key: HBASE-19490
 URL: https://issues.apache.org/jira/browse/HBASE-19490
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.4.0
Reporter: Andrew Purtell
 Fix For: 1.4.1, 1.5.0


Maybe we aren't waiting long enough for a slow executor? Or it could be some kind of 
race. The test usually passes.

[ERROR] Tests run: 15, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.01 s 
<<< FAILURE! - in org.apache.hadoop.hbase.quotas.TestRateLimiter
[ERROR] 
testOverconsumptionFixedIntervalRefillStrategy(org.apache.hadoop.hbase.quotas.TestRateLimiter)
  Time elapsed: 0.028 s  <<< FAILURE!
java.lang.AssertionError: expected:<1000> but was:<999>
at 
org.apache.hadoop.hbase.quotas.TestRateLimiter.testOverconsumptionFixedIntervalRefillStrategy(TestRateLimiter.java:122)
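
If it is a timing race, one option worth considering is taking the wall clock out of the 
test entirely with an injectable clock; a rough sketch below, assuming the limiter reads 
time via EnvironmentEdgeManager (that assumption would need verifying against the quotas 
code):

{code:java}
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
import org.apache.hadoop.hbase.util.ManualEnvironmentEdge;
import org.junit.Test;

public class TestRateLimiterWithManualClock {
  @Test
  public void testOverconsumptionFixedIntervalRefillStrategy() {
    // Pin the clock so refill boundaries cannot race the test thread.
    ManualEnvironmentEdge clock = new ManualEnvironmentEdge();
    clock.setValue(EnvironmentEdgeManager.currentTime());
    EnvironmentEdgeManager.injectEdge(clock);
    try {
      // ... existing assertions, advancing time deterministically with
      // clock.incValue(...) instead of relying on real elapsed time ...
    } finally {
      EnvironmentEdgeManager.reset();
    }
  }
}
{code}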





[jira] [Created] (HBASE-19489) Check against only the latest maintenance release in pre-commit hadoopcheck.

2017-12-11 Thread Appy (JIRA)
Appy created HBASE-19489:


 Summary: Check against only the latest maintenance release in 
pre-commit hadoopcheck.
 Key: HBASE-19489
 URL: https://issues.apache.org/jira/browse/HBASE-19489
 Project: HBase
  Issue Type: Improvement
Reporter: Appy
Assignee: Appy
Priority: Minor


(copied from dev thread)
{color:green}
| +1| hadoopcheck | 52m 1s |Patch does not cause any errors with 
Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. |
{color}
Almost 1 hr to check against 10 versions. And it's only going to increase as 
more 2.6.x, 2.7.x and 3.0.x releases come out.

Suggestion here is simple, let's check against only the latest maintenance 
release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
Advantage: Save ~40 min on pre-commit time.

Justification:
- We only do compile checks. Maintenance releases are not supposed to be doing 
API breaking changes. So checking against maintenance release for each minor 
version should be enough.
- We rarely see any hadoop check -1, and most recent ones have been due to 3.0. 
These will still be caught.
- Nightly can still check against all hadoop versions (since nightlies are 
supposed to do holistic testing)
- Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) (10 days):
  138 had +1 hadoopcheck
   15 had -1 hadoopcheck
  (others probably failed even before that - merge issue, etc)


Spot checking some 
failures:(10241,10246,10225,10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230)

10241: All 2.6.x failed. Others didn't run
10246: All 10 versions failed.
10184: All 2.6.x and 2.7.x failed. Others didn't run
10223: All 10 versions failed 
10230: All 2.6.x failed. Others didn't run
  
Common pattern being, all maintenance versions fail together.
(idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's irrelevant 
to this discussion).

What do you say - only check latest maintenance releases in precommit (and let 
nightlies do holistic testing against all versions)?





Re: Suggestion to speed up precommit - Reduce versions in Hadoop check

2017-12-11 Thread Ted Yu
bq. check against only the latest maintenance release for each minor
version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4

Makes sense.

For hadoop 3, we can build against 3.0.0-beta1

Cheers

On Mon, Dec 11, 2017 at 4:11 PM, Apekshit Sharma  wrote:

> Oh, btw, here's the little piece of code if anyone wants to analyze more.
>
> Script to collect precommit runs' console text.
>
> #!/bin/bash
>
> for i in `seq 10100 10300`; do
>   wget -a log -O ${i}
> https://builds.apache.org/job/PreCommit-HBASE-Build/${i}/consoleText
> done
>
> Number of failed runs:
> grep "|  -1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
> print i;}' | wc -l
>
> Number of passed runs:
> grep "|  +1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
> print i;}' | wc -l
>
> -- Appy
>
>
> On Mon, Dec 11, 2017 at 4:07 PM, Apekshit Sharma 
> wrote:
>
> > Hi
> >
> > +1 hadoopcheck 52m 1s Patch does not cause any errors with Hadoop 2.6.1
> > 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4.
> >
> > Almost 1 hr to check against 10 versions. And it's only going to increase
> > as more 2.6.x, 2.7.x and 3.0.x releases come out.
> >
> > Suggestion here is simple, let's check against only the latest
> maintenance
> > release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> > Advantage: Save ~40 min on pre-commit time.
> >
> > Justification:
> > - We only do compile checks. Maintenance releases are not supposed to be
> > doing API breaking changes. So checking against maintenance release for
> > each minor version should be enough.
> > - We rarely see any hadoop check -1, and most recent ones have been due
> to
> > 3.0. These will still be caught.
> > - Nightly can still check against all hadoop versions (since nightlies
> are
> > supposed to do holistic testing)
> > - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) [10 days]:
> >   138 had +1 hadoopcheck
> >15 had -1 hadoopcheck
> >   (others probably failed even before that - merge issue, etc)
> >
> >
> > Spot checking some failures:[10241,10246,10225,
> > 10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230]
> >
> > 10241: All 2.6.x failed. Others didn't run
> > 10246: All 10 versions failed.
> > 10184: All 2.6.x and 2.7.x failed. Others didn't run
> > 10223: All 10 versions failed
> > 10230: All 2.6.x failed. Others didn't run
> >
> > Common pattern being, all maintenance versions fail together.
> > (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's
> > irrelevant to this discussion).
> >
> > What do you say - only check latest maintenance releases in precommit
> (and
> > let nightlies do holistic testing against all versions)?
> >
> > -- Appy
> >
>
>
>
> --
>
> -- Appy
>


Re: Suggestion to speed up precommit - Reduce versions in Hadoop check

2017-12-11 Thread Josh Elser

+1

On 12/11/17 7:11 PM, Apekshit Sharma wrote:

Oh, btw, here's the little piece of code if anyone wants to analyze more.

Script to collect precommit runs' console text.

#!/bin/bash

for i in `seq 10100 10300`; do
   wget -a log -O ${i}
https://builds.apache.org/job/PreCommit-HBASE-Build/${i}/consoleText
done

Number of failed runs:
grep "|  -1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
print i;}' | wc -l

Number of passed runs:
grep "|  +1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
print i;}' | wc -l

-- Appy


On Mon, Dec 11, 2017 at 4:07 PM, Apekshit Sharma  wrote:


Hi

+1 hadoopcheck 52m 1s Patch does not cause any errors with Hadoop 2.6.1
2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4.

Almost 1 hr to check against 10 versions. And it's only going to increase
as more 2.6.x, 2.7.x and 3.0.x releases come out.

Suggestion here is simple, let's check against only the latest maintenance
release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
Advantage: Save ~40 min on pre-commit time.

Justification:
- We only do compile checks. Maintenance releases are not supposed to be
doing API breaking changes. So checking against maintenance release for
each minor version should be enough.
- We rarely see any hadoop check -1, and most recent ones have been due to
3.0. These will still be caught.
- Nightly can still check against all hadoop versions (since nightlies are
supposed to do holistic testing)
- Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) [10 days]:
   138 had +1 hadoopcheck
15 had -1 hadoopcheck
   (others probably failed even before that - merge issue, etc)


Spot checking some failures:[10241,10246,10225,
10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230]

10241: All 2.6.x failed. Others didn't run
10246: All 10 versions failed.
10184: All 2.6.x and 2.7.x failed. Others didn't run
10223: All 10 versions failed
10230: All 2.6.x failed. Others didn't run

Common pattern being, all maintenance versions fail together.
(idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's
irrelevant to this discussion).

What do you say - only check latest maintenance releases in precommit (and
let nightlies do holistic testing against all versions)?

-- Appy







Re: Suggestion to speed up precommit - Reduce versions in Hadoop check

2017-12-11 Thread Apekshit Sharma
Oh, btw, here's the little piece of code if anyone wants to analyze more.

Script to collect precommit runs' console text.

#!/bin/bash

for i in `seq 10100 10300`; do
  wget -a log -O ${i}
https://builds.apache.org/job/PreCommit-HBASE-Build/${i}/consoleText
done

Number of failed runs:
grep "|  -1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
print i;}' | wc -l

Number of passed runs:
grep "|  +1  |hadoopcheck" `ls 1*` | awk '{x[$1] = 1} END{for (i in x)
print i;}' | wc -l

-- Appy


On Mon, Dec 11, 2017 at 4:07 PM, Apekshit Sharma  wrote:

> Hi
>
> +1 hadoopcheck 52m 1s Patch does not cause any errors with Hadoop 2.6.1
> 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4.
>
> Almost 1 hr to check against 10 versions. And it's only going to increase
> as more 2.6.x, 2.7.x and 3.0.x releases come out.
>
> Suggestion here is simple, let's check against only the latest maintenance
> release for each minor version i.e. 2.6.5, 2.7.4 and 3.0.0-alpha4.
> Advantage: Save ~40 min on pre-commit time.
>
> Justification:
> - We only do compile checks. Maintenance releases are not supposed to be
> doing API breaking changes. So checking against maintenance release for
> each minor version should be enough.
> - We rarely see any hadoop check -1, and most recent ones have been due to
> 3.0. These will still be caught.
> - Nightly can still check against all hadoop versions (since nightlies are
> supposed to do holistic testing)
> - Analyzing 201 precommits from 10100 (11/29) - 10300 (12/8) [10 days]:
>   138 had +1 hadoopcheck
>15 had -1 hadoopcheck
>   (others probably failed even before that - merge issue, etc)
>
>
> Spot checking some failures:[10241,10246,10225,
> 10269,10151,10156,10184,10250,10298,10227,10294,10223,10251,10119,10230]
>
> 10241: All 2.6.x failed. Others didn't run
> 10246: All 10 versions failed.
> 10184: All 2.6.x and 2.7.x failed. Others didn't run
> 10223: All 10 versions failed
> 10230: All 2.6.x failed. Others didn't run
>
> Common pattern being, all maintenance versions fail together.
> (idk, why sometimes 2.7.* are not reported if 2.6.* fail, but that's
> irrelevant to this discussion).
>
> What do you say - only check latest maintenance releases in precommit (and
> let nightlies do holistic testing against all versions)?
>
> -- Appy
>



-- 

-- Appy


[jira] [Created] (HBASE-19488) Remove Unused Code from CollectionUtils

2017-12-11 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HBASE-19488:
---

 Summary: Remove Unused Code from CollectionUtils
 Key: HBASE-19488
 URL: https://issues.apache.org/jira/browse/HBASE-19488
 Project: HBase
  Issue Type: Improvement
  Components: hbase
Affects Versions: 3.0.0
Reporter: BELUGA BEHR
Priority: Trivial


CollectionUtils contains a bunch of unused code, as well as code that duplicates what 
is already available in the Apache Commons libraries.





Re: Moving To SLF4J and Log4J2

2017-12-11 Thread dam6923 .
Just to clarify, I did not help with the migration... I've been
helping, piecemeal, to review comments for spelling, grammar, and
contractions, and to remove logging guards in favor of parameterized log messages.
On Sat, Dec 9, 2017 at 11:22 PM, Stack  wrote:
> On Sat, Dec 9, 2017 at 6:03 PM, Apekshit Sharma  wrote:
>
>> +1 for dropping dependency which has been EOL for long now.
>>
>> What does the work here looks like? Change dependency, update properties
>> file, changed log messages, what else?
>>
>>
> The Hadoop issue has some prescription, regexes to run, etc. We should
> gauge it.
> S
>
>
>> Given upcoming beta1 release, what's the minimum work required to change
>> just the dependency? Is it possible to make code change (actual log lines)
>> separately/incrementally?
>>
>> @Beluga: Any gotchas from experiences in Hive?
>>
>> Thanks
>> -- Appy
>>


[RESULT][VOTE] First release candidate for HBase 1.1.13 (RC0) is available

2017-12-11 Thread Nick Dimiduk
This VOTE has passed, with 3x binding +1's, a non-binding +1, and a
non-binding -0. Thank you to everyone who voted on this release, and who
has participated over the last 18-ish months in making branch-1.1 a
successful, viable release line.

I'll go about the finishing touches.

Thanks,
Nick

On Mon, Dec 11, 2017 at 11:21 AM, Mike Drob  wrote:

> Let's call it a -0. :)
>
> On Mon, Dec 11, 2017 at 1:14 PM, Nick Dimiduk  wrote:
>
> > On Mon, Dec 11, 2017 at 8:20 AM, Mike Drob  wrote:
> >
> > > Yea, this candidate is fine to promote from my perspective and given
> the
> > > other votes cast. Thanks for putting this together, Nick!
> > >
> >
> > Mike,
> >
> > In that case, would you mind formally upgrading your vote from a -1? I'd
> > like to remove any ambiguity that may remain. With that done, I can call
> > the VOTE.
> >
> > Thanks,
> > Nick
> >
> > On Sun, Dec 10, 2017 at 7:11 PM, Nick Dimiduk 
> wrote:
> > >
> > > > At close of the period, this VOTE has received 3x binding +1's, a
> > > > non-binding +1, and a non-binding -1, with no other votes cast.
> > > >
> > > > My understanding is that the issues raised by the non-binding -1 are
> to
> > > be
> > > > taken as guidance for subsequent release lines and do not impact the
> > > > standing of this candidate.
> > > >
> > > > Mike, is that view consistent with your intentions?
> > > >
> > > > Thanks,
> > > > Nick
> > > >
> > > >
> > > > On Fri, Dec 8, 2017 at 9:00 PM, Nick Dimiduk 
> > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > - verified tarballs vs public key on people.apache.org.
> > > > > - extracted bin tgz:
> > > > >   - inspect structure. look good.
> > > > >   - with jdk1.8.0_65:
> > > > > - run LoadTestTool against standalone bin tgz with FAST_DIFF
> > block
> > > > > encoder and ROWCOL blooms. No issues, logs look good.
> > > > > - poked around webUI. looks good.
> > > > >   - load the site, browsed book.
> > > > > - extracted src tgz:
> > > > >   - inspect structure. look good.
> > > > >   - run LoadTestTool against standalone built from src tgz with
> > > FAST_DIFF
> > > > > block encoder and ROWCOL blooms. No issues, logs look good.
> > > > >   - poked around webUI. looks good.
> > > > > - ran the hbase-downstreamer project vs. the staged maven
> repository.
> > > > > tests pass.
> > > > >
> > > > > On Thu, Dec 7, 2017 at 1:44 PM, Ted Yu 
> wrote:
> > > > >
> > > > >> +1
> > > > >>
> > > > >> Checked sums and signatures: ok
> > > > >> Ran unit tests: passed
> > > > >> Started standalone cluster and did some basic operations
> > > > >>
> > > > >> On Thu, Dec 7, 2017 at 1:14 PM, Andrew Purtell <
> apurt...@apache.org
> > >
> > > > >> wrote:
> > > > >>
> > > > >> > +1
> > > > >> >
> > > > >> > Checked sums and signatures: ok
> > > > >> > Checked compat report: ok
> > > > >> > RAT check passed: ok (7u80)
> > > > >> > Built from source: ok (7u80)
> > > > >> > Unit tests pass: ok (8u131)
> > > > >> > 1M row LTT: ok (8u131)
> > > > >> >
> > > > >> >
> > > > >> > On Thu, Dec 7, 2017 at 8:40 AM, Nick Dimiduk <
> ndimi...@apache.org
> > >
> > > > >> wrote:
> > > > >> >
> > > > >> > > No one has voted a binding -1 with actionable changes, so as
> far
> > > as
> > > > >> I'm
> > > > >> > > concerned this RC remains valid. If people need more time, we
> > can
> > > > >> extend
> > > > >> > > this vote.
> > > > >> > >
> > > > >> > > Thanks,
> > > > >> > > Nick
> > > > >> > >
> > > > >> > > On Thu, Dec 7, 2017 at 8:07 AM, Ted Yu 
> > > wrote:
> > > > >> > >
> > > > >> > > > Nick:
> > > > >> > > > Originally you set tomorrow as deadline.
> > > > >> > > >
> > > > >> > > > Is there a new RC coming out (w.r.t. Mike's comment) ?
> > > > >> > > >
> > > > >> > > > Cheers
> > > > >> > > >
> > > > >> > > > On Mon, Dec 4, 2017 at 8:37 PM, Nick Dimiduk <
> > > ndimi...@apache.org
> > > > >
> > > > >> > > wrote:
> > > > >> > > >
> > > > >> > > > > Mike:
> > > > >> > > > >
> > > > >> > > > > > Do you plan to make a human-readable set of release
> notes
> > in
> > > > >> > addition
> > > > >> > > > to
> > > > >> > > > > the list of JIRA issues resolved?
> > > > >> > > > >
> > > > >> > > > > Not as such. For all branch-1.1 releases, I've written up
> a
> > > > little
> > > > >> > > > > human-friendly summary in the ANNOUNCE email. Basically,
> > > > >> expanding on
> > > > >> > > the
> > > > >> > > > > list of JIRA tickets I highlight in the RC notes to
> include
> > > > their
> > > > >> > full
> > > > >> > > > > ticket summaries. I haven't followed the details of the
> > > > branch-1.4
> > > > >> > > > release
> > > > >> > > > > line, so I'm not sure what additional information you
> might
> > be
> > > > >> hoping
> > > > >> > > > for.
> > > > >> > > > >
> > > > >> > > > > > tar missing hbase-native-client (present in tag)
> > > > >> > > > >
> > > > >> > > > > That's been the case since rel/1.1.0. We as a community
> 

[jira] [Reopened] (HBASE-17425) Fix calls to deprecated APIs in TestUpdateConfiguration

2017-12-11 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan reopened HBASE-17425:
--

[~Jan Hentschel],

Looks like the HbaseTestingUtil.getAdmin() API is only in 2.x and should not have been 
pushed to any of the 1.x branches? Can you please revert this change? All branch-1 
builds are failing because of this patch. I can't do a fresh checkout of branch-1.3 
or branch-1.4 and build.

See 
https://builds.apache.org/job/HBase-1.3-IT/it.test=IntegrationTestAcidGuarantees,jdk=JDK%201.8%20(latest),label=Hadoop/315/console

{noformat}
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 51.778s
[INFO] Finished at: Sat Dec 09 13:47:21 UTC 2017
[INFO] Final Memory: 151M/3543M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure: Compilation 
failure:
[ERROR] warning: unknown enum constant When.UNKNOWN
[ERROR] reason: class file for javax.annotation.meta.When not found
[ERROR] 
/home/jenkins/jenkins-slave/workspace/HBase-1.3-IT/0407dd4a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestUpdateConfiguration.java:[52,27]
 error: cannot find symbol
[ERROR] symbol:   method getAdmin()
[ERROR] location: variable TEST_UTIL of type HBaseTestingUtility
[ERROR] 
/home/jenkins/jenkins-slave/workspace/HBase-1.3-IT/0407dd4a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestUpdateConfiguration.java:[68,27]
 error: cannot find symbol
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hbase-server
{noformat}

cc [~apurtell] in case you have a problem building 1.4.

> Fix calls to deprecated APIs in TestUpdateConfiguration
> ---
>
> Key: HBASE-17425
> URL: https://issues.apache.org/jira/browse/HBASE-17425
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.0.0, 1.3.2, 1.4.1, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
> Attachments: HBASE-17425.master.001.patch
>
>
> Currently there are two calls to the deprecated method 
> {code:java}HBaseTestingUtil.getHBaseAdmin(){code} in 
> *TestUpdateConfiguration*. These calls should be changed to 
> {code:java}HBaseTestingUtil.getAdmin(){code}
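
For context, the change at issue is essentially the following swap, which compiles on 
master/2.x but not on branch-1 (where, per the comment above, getAdmin() does not exist):

{code:java}
// master / 2.x (the intent of the patch):
Admin admin = TEST_UTIL.getAdmin();

// branch-1 (no getAdmin() there, so the test must keep the deprecated call):
Admin admin = TEST_UTIL.getHBaseAdmin();
{code}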





Re: Moving To SLF4J and Log4J2

2017-12-11 Thread Apekshit Sharma
Seems like a good idea:
- removes a long-dead dependency
- a bit cleaner code
- hadoop also moved to slf4j

Quickly looking at the codebase to get an idea of the amount of work required, here
are some numbers:
- LOG.debug : ~1800
- LOG.trace : ~500
- LOG.info : ~3000

Looking at this patch (
https://issues.apache.org/jira/secure/attachment/12901002/HBASE-19449.1.patch),
it seemed like a tedious and repetitive task, so I was wondering if someone has
automated it already.
Looks like this can help reduce a huge part of the grunt work:
https://www.slf4j.org/migrator.html.

But before progressing, as a basic validation, can we see:
- an example of old vs new log lines (showing there is no diff, or that we are
comfortable with what's there)
- an example of the changes to the properties file

Maybe starting with the hbase-examples module for a quick POC.
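
For illustration, the kind of before/after change this migration means at the call sites
(a sketch, not lifted from an actual HBase patch; class and variable names are made up):

// Before: commons-logging (org.apache.commons.logging.Log / LogFactory):
//   private static final Log LOG = LogFactory.getLog(SomeClass.class);
if (LOG.isDebugEnabled()) {
  LOG.debug("Flushing region " + regionName + ", size=" + size);
}

// After: slf4j (org.slf4j.Logger / LoggerFactory):
//   private static final Logger LOG = LoggerFactory.getLogger(SomeClass.class);
// Parameterized logging makes the guard unnecessary: the message is only
// formatted when DEBUG is actually enabled.
LOG.debug("Flushing region {}, size={}", regionName, size);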

-- Appy


[jira] [Created] (HBASE-19487) Remove IterablesUtil Class

2017-12-11 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HBASE-19487:
---

 Summary: Remove IterablesUtil Class
 Key: HBASE-19487
 URL: https://issues.apache.org/jira/browse/HBASE-19487
 Project: HBase
  Issue Type: Improvement
  Components: hbase
Affects Versions: 3.0.0
Reporter: BELUGA BEHR


Remove the mostly unused and obsolete class {{IterablesUtil}}.





Re: [VOTE] First release candidate for HBase 1.1.13 (RC0) is available

2017-12-11 Thread Mike Drob
Let's call it a -0. :)

On Mon, Dec 11, 2017 at 1:14 PM, Nick Dimiduk  wrote:

> On Mon, Dec 11, 2017 at 8:20 AM, Mike Drob  wrote:
>
> > Yea, this candidate is fine to promote from my perspective and given the
> > other votes cast. Thanks for putting this together, Nick!
> >
>
> Mike,
>
> In that case, would you mind formally upgrading your vote from a -1? I'd
> like to remove any ambiguity that may remain. With that done, I can call
> the VOTE.
>
> Thanks,
> Nick
>
> On Sun, Dec 10, 2017 at 7:11 PM, Nick Dimiduk  wrote:
> >
> > > At close of the period, this VOTE has received 3x binding +1's, a
> > > non-binding +1, and a non-binding -1, with no other votes cast.
> > >
> > > My understanding is that the issues raised by the non-binding -1 are to
> > be
> > > taken as guidance for subsequent release lines and do not impact the
> > > standing of this candidate.
> > >
> > > Mike, is that view consistent with your intentions?
> > >
> > > Thanks,
> > > Nick
> > >
> > >
> > > On Fri, Dec 8, 2017 at 9:00 PM, Nick Dimiduk 
> > wrote:
> > >
> > > > +1
> > > >
> > > > - verified tarballs vs public key on people.apache.org.
> > > > - extracted bin tgz:
> > > >   - inspect structure. look good.
> > > >   - with jdk1.8.0_65:
> > > > - run LoadTestTool against standalone bin tgz with FAST_DIFF
> block
> > > > encoder and ROWCOL blooms. No issues, logs look good.
> > > > - poked around webUI. looks good.
> > > >   - load the site, browsed book.
> > > > - extracted src tgz:
> > > >   - inspect structure. look good.
> > > >   - run LoadTestTool against standalone built from src tgz with
> > FAST_DIFF
> > > > block encoder and ROWCOL blooms. No issues, logs look good.
> > > >   - poked around webUI. looks good.
> > > > - ran the hbase-downstreamer project vs. the staged maven repository.
> > > > tests pass.
> > > >
> > > > On Thu, Dec 7, 2017 at 1:44 PM, Ted Yu  wrote:
> > > >
> > > >> +1
> > > >>
> > > >> Checked sums and signatures: ok
> > > >> Ran unit tests: passed
> > > >> Started standalone cluster and did some basic operations
> > > >>
> > > >> On Thu, Dec 7, 2017 at 1:14 PM, Andrew Purtell  >
> > > >> wrote:
> > > >>
> > > >> > +1
> > > >> >
> > > >> > Checked sums and signatures: ok
> > > >> > Checked compat report: ok
> > > >> > RAT check passed: ok (7u80)
> > > >> > Built from source: ok (7u80)
> > > >> > Unit tests pass: ok (8u131)
> > > >> > 1M row LTT: ok (8u131)
> > > >> >
> > > >> >
> > > >> > On Thu, Dec 7, 2017 at 8:40 AM, Nick Dimiduk  >
> > > >> wrote:
> > > >> >
> > > >> > > No one has voted a binding -1 with actionable changes, so as far
> > as
> > > >> I'm
> > > >> > > concerned this RC remains valid. If people need more time, we
> can
> > > >> extend
> > > >> > > this vote.
> > > >> > >
> > > >> > > Thanks,
> > > >> > > Nick
> > > >> > >
> > > >> > > On Thu, Dec 7, 2017 at 8:07 AM, Ted Yu 
> > wrote:
> > > >> > >
> > > >> > > > Nick:
> > > >> > > > Originally you set tomorrow as deadline.
> > > >> > > >
> > > >> > > > Is there a new RC coming out (w.r.t. Mike's comment) ?
> > > >> > > >
> > > >> > > > Cheers
> > > >> > > >
> > > >> > > > On Mon, Dec 4, 2017 at 8:37 PM, Nick Dimiduk <
> > ndimi...@apache.org
> > > >
> > > >> > > wrote:
> > > >> > > >
> > > >> > > > > Mike:
> > > >> > > > >
> > > >> > > > > > Do you plan to make a human-readable set of release notes
> in
> > > >> > addition
> > > >> > > > to
> > > >> > > > > the list of JIRA issues resolved?
> > > >> > > > >
> > > >> > > > > Not as such. For all branch-1.1 releases, I've written up a
> > > little
> > > >> > > > > human-friendly summary in the ANNOUNCE email. Basically,
> > > >> expanding on
> > > >> > > the
> > > >> > > > > list of JIRA tickets I highlight in the RC notes to include
> > > their
> > > >> > full
> > > >> > > > > ticket summaries. I haven't followed the details of the
> > > branch-1.4
> > > >> > > > release
> > > >> > > > > line, so I'm not sure what additional information you might
> be
> > > >> hoping
> > > >> > > > for.
> > > >> > > > >
> > > >> > > > > > tar missing hbase-native-client (present in tag)
> > > >> > > > >
> > > >> > > > > That's been the case since rel/1.1.0. We as a community
> have
> > > >> never
> > > >> > > > shipped
> > > >> > > > > a binary native client in this release line and we've never
> > > >> claimed
> > > >> > > that
> > > >> > > > > the native sources packaged herein are ready for production
> > > >> > > consumption.
> > > >> > > > > They probably should have been dropped from the branch
> before
> > > >> initial
> > > >> > > > > release, but that was not done. I have no objection to
> > dropping
> > > >> them
> > > >> > > > from a
> > > >> > > > > branch-1.1 release; from the git log, I see no commit
> activity
> > > to
> > > >> > that
> > > >> > > > > module since Jan 2014. I don't see any of this as a blocker
> 

Re: [VOTE] First release candidate for HBase 1.1.13 (RC0) is available

2017-12-11 Thread Nick Dimiduk
On Mon, Dec 11, 2017 at 8:20 AM, Mike Drob  wrote:

> Yea, this candidate is fine to promote from my perspective and given the
> other votes cast. Thanks for putting this together, Nick!
>

Mike,

In that case, would you mind formally upgrading your vote from a -1? I'd
like to remove any ambiguity that may remain. With that done, I can call
the VOTE.

Thanks,
Nick

On Sun, Dec 10, 2017 at 7:11 PM, Nick Dimiduk  wrote:
>
> > At close of the period, this VOTE has received 3x binding +1's, a
> > non-binding +1, and a non-binding -1, with no other votes cast.
> >
> > My understanding is that the issues raised by the non-binding -1 are to
> be
> > taken as guidance for subsequent release lines and do not impact the
> > standing of this candidate.
> >
> > Mike, is that view consistent with your intentions?
> >
> > Thanks,
> > Nick
> >
> >
> > On Fri, Dec 8, 2017 at 9:00 PM, Nick Dimiduk 
> wrote:
> >
> > > +1
> > >
> > > - verified tarballs vs public key on people.apache.org.
> > > - extracted bin tgz:
> > >   - inspect structure. look good.
> > >   - with jdk1.8.0_65:
> > > - run LoadTestTool against standalone bin tgz with FAST_DIFF block
> > > encoder and ROWCOL blooms. No issues, logs look good.
> > > - poked around webUI. looks good.
> > >   - load the site, browsed book.
> > > - extracted src tgz:
> > >   - inspect structure. look good.
> > >   - run LoadTestTool against standalone built from src tgz with
> FAST_DIFF
> > > block encoder and ROWCOL blooms. No issues, logs look good.
> > >   - poked around webUI. looks good.
> > > - ran the hbase-downstreamer project vs. the staged maven repository.
> > > tests pass.
> > >
> > > On Thu, Dec 7, 2017 at 1:44 PM, Ted Yu  wrote:
> > >
> > >> +1
> > >>
> > >> Checked sums and signatures: ok
> > >> Ran unit tests: passed
> > >> Started standalone cluster and did some basic operations
> > >>
> > >> On Thu, Dec 7, 2017 at 1:14 PM, Andrew Purtell 
> > >> wrote:
> > >>
> > >> > +1
> > >> >
> > >> > Checked sums and signatures: ok
> > >> > Checked compat report: ok
> > >> > RAT check passed: ok (7u80)
> > >> > Built from source: ok (7u80)
> > >> > Unit tests pass: ok (8u131)
> > >> > 1M row LTT: ok (8u131)
> > >> >
> > >> >
> > >> > On Thu, Dec 7, 2017 at 8:40 AM, Nick Dimiduk 
> > >> wrote:
> > >> >
> > >> > > No one has voted a binding -1 with actionable changes, so as far
> as
> > >> I'm
> > >> > > concerned this RC remains valid. If people need more time, we can
> > >> extend
> > >> > > this vote.
> > >> > >
> > >> > > Thanks,
> > >> > > Nick
> > >> > >
> > >> > > On Thu, Dec 7, 2017 at 8:07 AM, Ted Yu 
> wrote:
> > >> > >
> > >> > > > Nick:
> > >> > > > Originally you set tomorrow as deadline.
> > >> > > >
> > >> > > > Is there a new RC coming out (w.r.t. Mike's comment) ?
> > >> > > >
> > >> > > > Cheers
> > >> > > >
> > >> > > > On Mon, Dec 4, 2017 at 8:37 PM, Nick Dimiduk <
> ndimi...@apache.org
> > >
> > >> > > wrote:
> > >> > > >
> > >> > > > > Mike:
> > >> > > > >
> > >> > > > > > Do you plan to make a human-readable set of release notes in
> > >> > addition
> > >> > > > to
> > >> > > > > the list of JIRA issues resolved?
> > >> > > > >
> > >> > > > > Not as such. For all branch-1.1 releases, I've written up a
> > little
> > >> > > > > human-friendly summary in the ANNOUNCE email. Basically,
> > >> expanding on
> > >> > > the
> > >> > > > > list of JIRA tickets I highlight in the RC notes to include
> > their
> > >> > full
> > >> > > > > ticket summaries. I haven't followed the details of the
> > branch-1.4
> > >> > > > release
> > >> > > > > line, so I'm not sure what additional information you might be
> > >> hoping
> > >> > > > for.
> > >> > > > >
> > >> > > > > > tar missing hbase-native-client (present in tag)
> > >> > > > >
> > >> > > > > That's been the case since rel/1.1.0. We as a community have
> > >> never
> > >> > > > shipped
> > >> > > > > a binary native client in this release line and we've never
> > >> claimed
> > >> > > that
> > >> > > > > the native sources packaged herein are ready for production
> > >> > > consumption.
> > >> > > > > They probably should have been dropped from the branch before
> > >> initial
> > >> > > > > release, but that was not done. I have no objection to
> dropping
> > >> them
> > >> > > > from a
> > >> > > > > branch-1.1 release; from the git log, I see no commit activity
> > to
> > >> > that
> > >> > > > > module since Jan 2014. I don't see any of this as a blocker
> for
> > >> this
> > >> > > RC.
> > >> > > > >
> > >> > > > > > WARNING! HBase file layout needs to be upgraded ...
> > >> > > > >
> > >> > > > > When I test these RC's on a Mac, I explicitly set
> hbase.tmp.dir
> > >> to a
> > >> > > > > location specific to the candidate I've unpacked. This has the
> > >> > benefit
> > >> > > > > avoiding cross-version conflicts and other weirdness of Mac
> tmp
> > 

Is the 'flushLock' in Store unnecessary?

2017-12-11 Thread JH Lin
hi all, I have been reading some code recently about the flow of a table flush, which is 
entered via HBaseAdmin#flush(tableOrRegion).
I found at least four locks in this flow (0.94):
- region lock
- region updatesLock
- hlog cacheFlushLock
- store flushLock
(awesome lock usage)

I have a few questions about it:
A. Why not flush stores concurrently?
   - This has been fixed by HBASE-6466, related to HBASE-6980. Well done!
B. Why not flush regions concurrently?
   - In some scenarios there is only one family in a table, so the fix for question A does 
not gain any performance improvement.
So I dove into master trunk and found that the flow has changed a lot: the work is now 
delivered to the master (though I don't care about the details of that here):
public void flush(final TableName tableName) throws IOException {
  checkTableExists(tableName);
  if (isTableDisabled(tableName)) {
    LOG.info("Table is disabled: " + tableName.getNameAsString());
    return;
  }
  execProcedure("flush-table-proc", tableName.getNameAsString(), new HashMap<>());
}

public byte[] execProcedureWithReturn(String signature, String instance,
    Map<String, String> props) throws IOException {
  ProcedureDescription desc =
      ProtobufUtil.buildProcedureDescription(signature, instance, props);
  final ExecProcedureRequest request =
      ExecProcedureRequest.newBuilder().setProcedure(desc).build();
  // run the procedure on the master
  ExecProcedureResponse response = executeCallable(
      new MasterCallable<ExecProcedureResponse>(getConnection(), getRpcControllerFactory()) {
        @Override
        protected ExecProcedureResponse rpcCall() throws Exception {
          return master.execProcedureWithRet(getRpcController(), request);
        }
      });

  return response.hasReturnData() ? response.getReturnData().toByteArray() : null;
}


C. Is the 'flushLock' in Store unnecessary? (**the FOCUS of this thread**)
  This lock is used in only one place (there are no other lock racers): 
Store#internalFlushCache(), i.e. after grabbing the scanner on the snapshot and before 
finishing the write to the hfile.
  And I see that this 'flushLock' is retained in master trunk:
public List<Path> flushSnapshot(MemStoreSnapshot snapshot, long cacheFlushId,
    MonitoredTask status, ThroughputController throughputController,
    FlushLifeCycleTracker tracker) throws IOException {
  ArrayList<Path> result = new ArrayList<>();
  int cellsCount = snapshot.getCellsCount();
  if (cellsCount == 0) return result; // don't flush if there are no entries

  // Use a store scanner to find which rows to flush.
  long smallestReadPoint = store.getSmallestReadPoint();
  InternalScanner scanner = createScanner(snapshot.getScanners(), smallestReadPoint, tracker);
  StoreFileWriter writer;
  try {
    // TODO: We can fail in the below block before we complete adding this flush to
    // list of store files. Add cleanup of anything put on filesystem if we fail.
    synchronized (flushLock) {
      status.setStatus("Flushing " + store + ": creating writer");
      // Write the map out to the disk
      writer = store.createWriterInTmp(cellsCount,
          store.getColumnFamilyDescriptor().getCompressionType(),
          /* isCompaction = */ false,
          /* includeMVCCReadpoint = */ true,
          /* includesTags = */ snapshot.isTagsPresent(),
          /* shouldDropBehind = */ false);
      IOException e = null;
      try {
        performFlush(scanner, writer, smallestReadPoint, throughputController);
      } catch (IOException ioe) {
        e = ioe;
        // throw the exception out
        throw ioe;
      } finally {
        if (e != null) {
          writer.close();
        } else {
          finalizeWriter(writer, cacheFlushId, status);
        }
      }
    }
  } finally {
    scanner.close();
  }
  LOG.info("Flushed, sequenceid=" + cacheFlushId + ", memsize="
      + StringUtils.TraditionalBinaryPrefix.long2String(snapshot.getDataSize(), "", 1)
      + ", hasBloomFilter=" + writer.hasGeneralBloom()
      + ", into tmp file " + writer.getPath());
  result.add(writer.getPath());
  return result;
}


  In fact, the 'cacheFlushLock' already covers this duty, so I think the flushLock is 
unnecessary, or only there for tests, as I found that TestStore calls into it:
internalFlushCache(SortedSet, long, TimeRangeTracker, AtomicLong, 
MonitoredTask) : Path - org.apache.hadoop.hbase.regionserver.Store
flushCache(long, SortedSet, TimeRangeTracker, AtomicLong, 
MonitoredTask) : Path - org.apache.hadoop.hbase.regionserver.Store
flushCache(MonitoredTask) : void - 
org.apache.hadoop.hbase.regionserver.Store.StoreFlusherImpl
flushStore(Store, long) : void - org.apache.hadoop.hbase.regionserver.TestStore
internalFlushcache(HLog, long, MonitoredTask) : boolean - 
org.apache.hadoop.hbase.regionserver.HRegion
  That means in the normal flow only HRegion#internalFlushcache() calls it; TestStore is an 
injected flow (i.e. not a flow that happens in production).
  >> I just wonder whether it is in fact redundant, even though I think removing this lock 
would not significantly improve performance. <<
Any input is appreciated, thanks
--JH Lin



Re: [VOTE] First release candidate for HBase 1.1.13 (RC0) is available

2017-12-11 Thread Mike Drob
Yea, this candidate is fine to promote from my perspective and given the
other votes cast. Thanks for putting this together, Nick!

On Sun, Dec 10, 2017 at 7:11 PM, Nick Dimiduk  wrote:

> At close of the period, this VOTE has received 3x binding +1's, a
> non-binding +1, and a non-binding -1, with no other votes cast.
>
> My understanding is that the issues raised by the non-binding -1 are to be
> taken as guidance for subsequent release lines and do not impact the
> standing of this candidate.
>
> Mike, is that view consistent with your intentions?
>
> Thanks,
> Nick
>
>
> On Fri, Dec 8, 2017 at 9:00 PM, Nick Dimiduk  wrote:
>
> > +1
> >
> > - verified tarballs vs public key on people.apache.org.
> > - extracted bin tgz:
> >   - inspect structure. look good.
> >   - with jdk1.8.0_65:
> > - run LoadTestTool against standalone bin tgz with FAST_DIFF block
> > encoder and ROWCOL blooms. No issues, logs look good.
> > - poked around webUI. looks good.
> >   - load the site, browsed book.
> > - extracted src tgz:
> >   - inspect structure. look good.
> >   - run LoadTestTool against standalone built from src tgz with FAST_DIFF
> > block encoder and ROWCOL blooms. No issues, logs look good.
> >   - poked around webUI. looks good.
> > - ran the hbase-downstreamer project vs. the staged maven repository.
> > tests pass.
> >
> > On Thu, Dec 7, 2017 at 1:44 PM, Ted Yu  wrote:
> >
> >> +1
> >>
> >> Checked sums and signatures: ok
> >> Ran unit tests: passed
> >> Started standalone cluster and did some basic operations
> >>
> >> On Thu, Dec 7, 2017 at 1:14 PM, Andrew Purtell 
> >> wrote:
> >>
> >> > +1
> >> >
> >> > Checked sums and signatures: ok
> >> > Checked compat report: ok
> >> > RAT check passed: ok (7u80)
> >> > Built from source: ok (7u80)
> >> > Unit tests pass: ok (8u131)
> >> > 1M row LTT: ok (8u131)
> >> >
> >> >
> >> > On Thu, Dec 7, 2017 at 8:40 AM, Nick Dimiduk 
> >> wrote:
> >> >
> >> > > No one has voted a binding -1 with actionable changes, so as far as
> >> I'm
> >> > > concerned this RC remains valid. If people need more time, we can
> >> extend
> >> > > this vote.
> >> > >
> >> > > Thanks,
> >> > > Nick
> >> > >
> >> > > On Thu, Dec 7, 2017 at 8:07 AM, Ted Yu  wrote:
> >> > >
> >> > > > Nick:
> >> > > > Originally you set tomorrow as deadline.
> >> > > >
> >> > > > Is there a new RC coming out (w.r.t. Mike's comment) ?
> >> > > >
> >> > > > Cheers
> >> > > >
> >> > > > On Mon, Dec 4, 2017 at 8:37 PM, Nick Dimiduk  >
> >> > > wrote:
> >> > > >
> >> > > > > Mike:
> >> > > > >
> >> > > > > > Do you plan to make a human-readable set of release notes in
> >> > addition
> >> > > > to
> >> > > > > the list of JIRA issues resolved?
> >> > > > >
> >> > > > > Not as such. For all branch-1.1 releases, I've written up a
> little
> >> > > > > human-friendly summary in the ANNOUNCE email. Basically,
> >> expanding on
> >> > > the
> >> > > > > list of JIRA tickets I highlight in the RC notes to include
> their
> >> > full
> >> > > > > ticket summaries. I haven't followed the details of the
> branch-1.4
> >> > > > release
> >> > > > > line, so I'm not sure what additional information you might be
> >> hoping
> >> > > > for.
> >> > > > >
> >> > > > > > tar missing hbase-native-client (present in tag)
> >> > > > >
> >> > > > > That's been the case since rel/1.1.0. We as a community have
> >> never
> >> > > > shipped
> >> > > > > a binary native client in this release line and we've never
> >> claimed
> >> > > that
> >> > > > > the native sources packaged herein are ready for production
> >> > > consumption.
> >> > > > > They probably should have been dropped from the branch before
> >> initial
> >> > > > > release, but that was not done. I have no objection to dropping
> >> them
> >> > > > from a
> >> > > > > branch-1.1 release; from the git log, I see no commit activity
> to
> >> > that
> >> > > > > module since Jan 2014. I don't see any of this as a blocker for
> >> this
> >> > > RC.
> >> > > > >
> >> > > > > > WARNING! HBase file layout needs to be upgraded ...
> >> > > > >
> >> > > > > When I test these RC's on a Mac, I explicitly set hbase.tmp.dir
> >> to a
> >> > > > > location specific to the candidate I've unpacked. This has the
> >> > benefit
> >> > > > > avoiding cross-version conflicts and other weirdness of Mac tmp
> >> > > directory
> >> > > > > management. For instance,
> >> > > > >
> >> > > > > 
> >> > > > >
> >> > > > > hbase.tmp.dir/tmp/hbase-1.1.
> >> > > > > 13/tmp
> >> > > > > 
> >> > > > >
> >> > > > > Peter:
> >> > > > >
> >> > > > > > In the logs I saw this line. Source code repository URL looks
> >> > > > incorrect.
> >> > > > > > 2017-12-04 10:13:27,028 INFO  [main] util.VersionInfo: Source
> >> code
> >> > > > > repository *git://diocles.local/Volumes/hbase-1.1.13/hbase*
> >> > > > > 

[jira] [Created] (HBASE-19486) Automatically flush a BufferedMutator after a timeout

2017-12-11 Thread Niels Basjes (JIRA)
Niels Basjes created HBASE-19486:


 Summary: Automatically flush a BufferedMutator after a timeout 
 Key: HBASE-19486
 URL: https://issues.apache.org/jira/browse/HBASE-19486
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Niels Basjes
Assignee: Niels Basjes


I'm working on several projects where we are doing stream / event type 
processing instead of batch type processing. We mostly use Apache Flink and 
Apache Beam for these projects.

When we ingest a continuous stream of events and feed that into HBase via a 
BufferedMutator, this all works fine. The buffer fills up at a predictable rate, and 
we can make sure it flushes into HBase several times per second by tuning the buffer 
size.

We also have situations where the event rate is unpredictable. Sometimes that is 
because the source is in reality a batch job that puts records into Kafka, and 
sometimes because it is the "predictable in production" application running in our 
testing environment (where only the dev triggers a handful of events).

For these kinds of use cases we need a way to 'force' the BufferedMutator to 
automatically flush any records in the buffer even if the buffer is not full.

I'll put up a pull request with a proposed implementation for review against 
the master (i.e. 3.0.0).
Once approved, I would like to backport this to the 1.x and 2.x versions of the 
client in the same way (or as close as possible).
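
For illustration, a sketch of the behavior being asked for, done outside the client with a 
scheduled flush; the real feature would presumably live inside BufferedMutator itself, and 
the names below are illustrative:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.client.BufferedMutator;

public class PeriodicFlushSketch {
  /** Flush the mutator at a fixed interval so low-rate streams still reach HBase promptly. */
  static ScheduledExecutorService startPeriodicFlush(BufferedMutator mutator, long periodMs) {
    ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
    ses.scheduleAtFixedRate(() -> {
      try {
        mutator.flush();   // cheap when the buffer is already empty
      } catch (Exception e) {
        // In real code: report via the mutator's ExceptionListener or log it.
      }
    }, periodMs, periodMs, TimeUnit.MILLISECONDS);
    return ses;
  }
}
{code}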



