[jira] [Resolved] (HBASE-17843) JUnit test timed out in TestRegionReplicaFailover.java

2017-04-11 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved HBASE-17843.

Resolution: Not A Problem

> JUnit test timed out in TestRegionReplicaFailover.java
> --
>
> Key: HBASE-17843
> URL: https://issues.apache.org/jira/browse/HBASE-17843
> Project: HBase
>  Issue Type: Improvement
>Reporter: Qilin Cao
>Priority: Trivial
> Attachments: HBASE-17843-v1.patch
>
>
> JUnit tests sometimes fail in TestRegionReplicaFailover.java, so I changed 
> the testPrimaryRegionKill method's test timeout to 24ms and added a 5000 ms 
> sleep before verifying the result. 
> Error logs:
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 285.221 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover
> testPrimaryRegionKill[0](org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover)
>   Time elapsed: 125.963 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
> at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:460)
>   at java.util.concurrent.TimeUnit.timedWait(TimeUnit.java:348)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService.pollForSpecificCompletedTask(ResultBoundedCompletionService.java:258)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService.pollForFirstSuccessfullyCompletedTask(ResultBoundedCompletionService.java:214)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.call(RpcRetryingCallerWithReadReplicas.java:209)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:428)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:392)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.verifyNumericRows(HBaseTestingUtility.java:2197)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover.verifyNumericRowsWithTimeout(TestRegionReplicaFailover.java:227)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover.testPrimaryRegionKill(TestRegionReplicaFailover.java:200)
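A verification step like the one that timed out above can avoid fixed sleeps by polling the condition until a deadline. A minimal sketch in plain Java (the helper and its names are illustrative, not the actual HBase test code):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class Main {
    // Poll `condition` every `intervalMs` until it holds or `timeoutMs` elapses.
    static boolean pollUntil(BooleanSupplier condition, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;   // condition satisfied before the deadline
            }
            TimeUnit.MILLISECONDS.sleep(intervalMs);
        }
        return condition.getAsBoolean();  // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in condition that becomes true after ~200 ms.
        boolean ok = pollUntil(() -> System.currentTimeMillis() - start > 200, 5000, 50);
        System.out.println(ok);  // prints "true"
    }
}
```

Polling with a bounded deadline keeps the test fast when the cluster converges quickly, instead of always paying the full sleep.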



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-17726) [C++] Move implementation from header to cc for request retry

2017-04-11 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-17726.
---
   Resolution: Fixed
 Assignee: Enis Soztutar  (was: Xiaobing Zhou)
Fix Version/s: HBASE-14850

Pushed this to the branch. Follow-up patches depend on this. 

> [C++] Move implementation from header to cc for request retry
> -
>
> Key: HBASE-17726
> URL: https://issues.apache.org/jira/browse/HBASE-17726
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Enis Soztutar
> Fix For: HBASE-14850
>
> Attachments: hbase-17726_v1.patch, hbase-17726-v2.patch
>
>
> This is a follow up work related to HBASE-17465.





[jira] [Created] (HBASE-17901) HBase region server stops because of a failure during memstore flush

2017-04-11 Thread Raman Ch (JIRA)
Raman Ch created HBASE-17901:


 Summary: HBase region server stops because of a failure during 
memstore flush
 Key: HBASE-17901
 URL: https://issues.apache.org/jira/browse/HBASE-17901
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 1.2.2
 Environment: Ubuntu 14.04.5 LTS
HBase Version   1.2.2, revision=1
Reporter: Raman Ch


Once every several days, the region server fails to flush a memstore and stops.

April, 8:
{code}
2017-04-08 00:10:57,737 WARN  [MemStoreFlusher.1] regionserver.HStore: Failed 
flushing store file, retrying num=9
java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into a column 
actually smaller than the previous column: 
at 
org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.java:117)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:464)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
at 
org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:119)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:74)
at 
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:915)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2271)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2375)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2105)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2067)
at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1958)
at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1884)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:510)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$600(MemStoreFlusher.java:75)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:244)
at java.lang.Thread.run(Thread.java:745)
2017-04-08 00:10:57,737 FATAL [MemStoreFlusher.1] regionserver.HRegionServer: 
ABORTING region server datanode13.webmeup.com,16020,1491573320653: Replay of 
WAL required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: 
di_ordinal_tmp,gov.ok.data/browse?page=2&category=Natural%20Resources&limitTo=datasets&tags=ed,1489764397211.9d7ca11018672c4aace7f30c8f4253f3.
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2428)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2105)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2067)
at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1958)
at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1884)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:510)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$600(MemStoreFlusher.java:75)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:244)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into 
a column actually smaller than the previous column: 
at 
org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.java:117)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:464)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
at 
org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:119)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:74)
at 
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:915)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2271)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2375)
... 9 more
{code}
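The IOException above comes from an ordering invariant: during a flush, cells within a row must be scanned in increasing qualifier order, and the tracker throws when a qualifier sorts before the previous one. A minimal sketch of such a check (class and method names are illustrative, not the actual HBase implementation):

```java
import java.io.IOException;

public class Main {
    static byte[] previous = null;

    // Reject any qualifier that sorts before the previously seen one.
    static void checkColumn(byte[] qualifier) throws IOException {
        if (previous != null && compare(qualifier, previous) < 0) {
            throw new IOException(
                "ScanWildcardColumnTracker.checkColumn ran into a column "
                + "actually smaller than the previous column");
        }
        previous = qualifier;
    }

    // Unsigned lexicographic comparison, the order HBase uses for qualifiers.
    static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) throws IOException {
        checkColumn("a".getBytes());
        checkColumn("b".getBytes());     // fine: increasing order
        try {
            checkColumn("a".getBytes()); // out of order -> IOException
            System.out.println("no error");
        } catch (IOException e) {
            System.out.println("out-of-order detected");
        }
    }
}
```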

After a region server restart, it functioned properly for a couple of days.

April, 10:
{code}
2017-04-10 22:36:32,147 WARN  [MemStoreFlusher.0] regionserver.HStore: Failed 
flushing store file, retrying num=9
java.io.IOExcept

[jira] [Resolved] (HBASE-7292) HTablePool class description javadoc is unclear

2017-04-11 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved HBASE-7292.
---
Resolution: Won't Fix

HTablePool was removed in 0.98.1, so this JIRA should be closed.

> HTablePool class description javadoc is unclear
> ---
>
> Key: HBASE-7292
> URL: https://issues.apache.org/jira/browse/HBASE-7292
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gabriel Reid
>Priority: Minor
> Attachments: HBASE-7292.patch
>
>
> The class description javadoc for HTablePool contains a sentence that makes 
> no sense in the context (it appears to be part of an incorrectly-applied 
> patch from the past). The sentence references the correct way of returning 
> HTables to the pool, but it actually makes it more difficult to understand 
> what the correct way of returning tables to the pool actually is.





[jira] [Resolved] (HBASE-17173) update ref guide links for discussions to use lists.apache.org

2017-04-11 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-17173.
-
Resolution: Duplicate

> update ref guide links for discussions to use lists.apache.org
> --
>
> Key: HBASE-17173
> URL: https://issues.apache.org/jira/browse/HBASE-17173
> Project: HBase
>  Issue Type: Task
>  Components: community, website
>Reporter: Sean Busbey
>Priority: Minor
>
> Right now the [reference guide|http://hbase.apache.org/book.html] has several 
> places where we link to discussions on dev@hbase to explain something or 
> document where a decision was made.
> Those links right now rely on "hadoop-search.com". We should update them to 
> link to the now-available [lists.apache.org view of the mailing 
> list|https://lists.apache.org/list.html?dev@hbase.apache.org] since it 
> provides conversation views and is an ASF resource.





[jira] [Resolved] (HBASE-16380) clean up 0.94.y references

2017-04-11 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-16380.
-
Resolution: Duplicate
  Assignee: (was: Sean Busbey)

Rolling this into a generalized "clean up EOM lines" issue.

> clean up 0.94.y references
> --
>
> Key: HBASE-16380
> URL: https://issues.apache.org/jira/browse/HBASE-16380
> Project: HBase
>  Issue Type: Task
>  Components: community, website
>Reporter: Sean Busbey
>
> consensus on [the DISCUSS thread seemed to be EOM for 
> 0.94|https://lists.apache.org/thread.html/547058fc59cb48130b35c80f86d41c28038195c99f5ed94e834e291c@%3Cdev.hbase.apache.org%3E]
> * announce on user@
> * remove 0.94.y from dist.apache
> * remove it from the ref guide
> * pare down unreleased 0.94.y versions from JIRA (save at most 1)
> * archive released 0.94.y versions in JIRA
> * disable / delete builds.a.o jobs





[jira] [Created] (HBASE-17902) Backport HBASE-16367 "Race between master and region server initialization may lead to premature server abort" to 1.3

2017-04-11 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17902:
--

 Summary: Backport HBASE-16367 "Race between master and region 
server initialization may lead to premature server abort" to 1.3
 Key: HBASE-17902
 URL: https://issues.apache.org/jira/browse/HBASE-17902
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
 Fix For: 1.3.2


This is to fix the case where the HBase master always dies shortly after start.

It turned out that the master initialization thread was racing with 
HRegionServer#preRegistrationInitialization() (initializeZooKeeper, actually) 
since HMaster extends HRegionServer.
Through additional logging in the master:
{code}
this.oldLogDir = createInitialFileSystemLayout();
HFileSystem.addLocationsOrderInterceptor(conf);
LOG.info("creating splitLogManager");
{code}
I found that execution didn't reach the last log line before the region server 
declared the cluster Id to be null.

branch-1.3 has been in quiet mode leading up to the release of 1.3.1.
Once 1.3.1 is released, the fix can go into branch-1.3.
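A race like this is typically fixed by making the region-server side wait until the master-side setup has published the cluster id, rather than racing ahead and reading null. A hedged sketch with a CountDownLatch (illustrative only; not the actual HBASE-16367 patch):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class Main {
    static volatile String clusterId = null;
    static final CountDownLatch clusterIdSet = new CountDownLatch(1);

    public static void main(String[] args) throws Exception {
        // Stand-in for the master initialization thread, which eventually
        // publishes the cluster id.
        Thread master = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            clusterId = "cluster-1";
            clusterIdSet.countDown();
        });
        master.start();

        // Region-server side: block (with a timeout) instead of aborting
        // because the cluster id is still null.
        if (!clusterIdSet.await(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("cluster id never set");
        }
        System.out.println(clusterId);
        master.join();
    }
}
```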





Failure: HBase Generate Website

2017-04-11 Thread Apache Jenkins Server
Build status: Failure

The HBase website has not been updated to incorporate HBase commit 
${HBASE_GIT_SHA}.

See https://builds.apache.org/job/hbase_generate_website/955/console




Re: [DISCUSS] More Shading

2017-04-11 Thread Stack
Let me revive this thread.

Recall, we are stuck on old or particular versions of critical libs. We are
unable to update because our versions will clash w/ versions from
upstreamer hadoop2.7/2.8/3.0/spark, etc. We have a shaded client. We need
to message downstreamers that they should use it going forward.  This will
help going forward but it will not inoculate our internals nor an existing
context where we'd like to be a compatible drop-in.

We could try hackery filtering transitive includes up in poms for each
version of hadoop/spark that we support but in the end, it's a bunch of
effort, hard to test, and we are unable to dictate the CLASSPATH order in
all situations.

We could try some shading voodoo inline w/ build. Because shading is a
post-package step and because we are modularized and shading includes the
shaded classes in the artifact produced, we'd end up w/ multiple copies of
guava/netty/etc. classes, an instance per module that makes a reference.

Let's do Sean's idea of a pre-build step where we package and relocate
('shade') critical dependencies (Going by the thread above, Ram, Anoop, and
Andy seem good w/ the general idea).

In implementation, we (The HBase PMC) would ask for a new repo [1]. In here
we'd create a new mvn project. This project would produce a single artifact
(jar) called hbase-dependencies or hbase-3rdparty or hbase-shaded-3rdparty
libs. In it would be relocated core libs such as guava and netty (and maybe
protobuf). We'd publish this artifact and then have hbase depend on it
changing all references to point at the relocation: e.g. rather than import
com.google.common.collect.Maps, we'd import
org.apache.hadoop.hbase.com.google.common.collect.Maps.
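The relocation described above could be sketched in the new project's pom with the maven-shade-plugin along these lines (a hypothetical fragment; the relocated package prefix and the set of libraries are illustrative, not final coordinates):

```xml
<!-- Hypothetical shading configuration for the third-party artifact. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.hbase.com.google.common</shadedPattern>
          </relocation>
          <relocation>
            <pattern>io.netty</pattern>
            <shadedPattern>org.apache.hadoop.hbase.io.netty</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

HBase modules would then depend on this published artifact and import only the relocated packages.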

We (The HBase PMC) will have to make releases of this new artifact and vote
on them. I think it will be a relatively rare event.

I'd be up for doing the first cut if folks are game.

St.Ack


1. URL via Sean but for committers to view only: https://reporeq.apache.org/

On Sun, Oct 2, 2016 at 10:29 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:

> +1 for Sean's ideas. Bundling all the dependent libraries and shading them
> into one jar and HBase referring to it makes sense and should avoid some of
> the pain in terms of IDE usage. Stack's doc clearly talks about the IDE
> issues that we may get after this protobuf shading goes in. It may be
> difficult for newcomers and those who don't know this background of why it
> has to be like that.
>
> Regards
> Ram
>
> On Sun, Oct 2, 2016 at 10:51 AM, Stack  wrote:
>
> > On Sat, Oct 1, 2016 at 6:32 PM, Jerry He  wrote:
> >
> > > How is the proposal going to impact the existing shaded-client and
> > > shaded-server modules, making them unnecessary and go away?
> > >
> >
> > No. We still need the blanket shading of hbase client and server.
> >
> > This effort is about our internals. We have a mess of other components
> all
> > up inside us such as HDFS, etc., each with their own sets of dependencies
> > many of which we have in common. This project is about making it so we
> > can upgrade at a rate independent of when our upstreamers choose to
> change.
> >
> >
> > > It doesn't seem so.  These modules are supposed to shade HBase and
> > upstream
> > > from downstream users.
> > >
> >
> > Agree.
> >
> > Thanks for drawing out the difference between these two shading efforts,
> >
> > St.Ack
> >
> >
> >
> > > Thanks.
> > >
> > > Jerry
> > >
> > > On Sat, Oct 1, 2016 at 2:33 PM, Andrew Purtell <
> andrew.purt...@gmail.com
> > >
> > > wrote:
> > >
> > > > > Sean has suggested a pre-build step where in another repo we'd make
> > > hbase
> > > > > shaded versions of critical libs, 'release' them (votes, etc.) and
> > then
> > > > > have core depend on these. It'd be a bunch of work but would make the
> > > dev's
> > > > > life easier.
> > > >
> > > > So when we make changes that require updates to and rebuild of the
> > > > supporting libraries, as a developer I would make local changes,
> > install
> > > a
> > > > snapshot of that into the local maven cache, then point the HBase
> build
> > > at
> > > > the snapshot, then do the other half of the work, then push up to
> both?
> > > >
> > > > I think this could work.
> > >
> >
>


Still Failing: HBase Generate Website

2017-04-11 Thread Apache Jenkins Server
Build status: Still Failing

The HBase website has not been updated to incorporate HBase commit 
${HBASE_GIT_SHA}.

See https://builds.apache.org/job/hbase_generate_website/956/console




[jira] [Resolved] (HBASE-7456) Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads

2017-04-11 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-7456.
--
Resolution: Won't Fix

HTablePool has been removed. 0.94 is EOL.

> Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads
> 
>
> Key: HBASE-7456
> URL: https://issues.apache.org/jira/browse/HBASE-7456
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.94.19
>Reporter: Chip Salzenberg
>Priority: Minor
> Attachments: HBASE-7456-0.94.patch, HBASE-7456-trunk.patch
>
>
> Please allow the Configuration to override the hard-coded maxSize of 10 for 
> its HTablePool.  Under high loads, 10 is too small.





Re: [DISCUSS] More Shading

2017-04-11 Thread York, Zach
+1 (non-binding)

This sounds like a good idea to me!

Zach





Re: [DISCUSS] More Shading

2017-04-11 Thread York, Zach
Should we allow dependent projects (such as Phoenix) to weigh in on this issue 
since they are likely going to be the ones that benefit/are affected?


[jira] [Created] (HBASE-17903) The alias for the link of HBASE-6580 is incorrect

2017-04-11 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-17903:
--

 Summary: The alias for the link of HBASE-6580 is incorrect
 Key: HBASE-17903
 URL: https://issues.apache.org/jira/browse/HBASE-17903
 Project: HBase
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Priority: Trivial


{noformat}
Previous versions of this guide discussed `HTablePool`, which was deprecated in 
HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by 
link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6500], or 
`HConnection`, which is deprecated in HBase 1.0 by `Connection`.
Please use 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html[Connection]
 instead.
{noformat}

6500 -> 6580 





[jira] [Created] (HBASE-17904) Get runs into NoSuchElementException when using Read Replica, with hbase. ipc.client.specificThreadForWriting to be true and hbase.rpc.client.impl to be org.apache.hadoo

2017-04-11 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-17904:


 Summary: Get runs into NoSuchElementException when using Read 
Replica, with hbase. ipc.client.specificThreadForWriting to be true and 
hbase.rpc.client.impl to be org.apache.hadoop.hbase.ipc.RpcClientImpl
 Key: HBASE-17904
 URL: https://issues.apache.org/jira/browse/HBASE-17904
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0
Reporter: huaxiang sun
Assignee: huaxiang sun


When testing read replicas with 2.0.0 code, with the following config:
{code}
  <property>
    <name>hbase.ipc.client.specificThreadForWriting</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rpc.client.impl</name>
    <value>org.apache.hadoop.hbase.ipc.RpcClientImpl</value>
  </property>
{code}

the HBase client runs into the following exception:
{code}
Exception in thread "main" java.util.NoSuchElementException
at java.util.ArrayDeque.removeFirst(ArrayDeque.java:280)
at java.util.ArrayDeque.remove(ArrayDeque.java:447)
at 
org.apache.hadoop.hbase.ipc.BlockingRpcConnection$CallSender.remove(BlockingRpcConnection.java:159)
at 
org.apache.hadoop.hbase.ipc.BlockingRpcConnection$3.run(BlockingRpcConnection.java:760)
at 
org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl.startCancel(HBaseRpcControllerImpl.java:229)
at 
org.apache.hadoop.hbase.client.CancellableRegionServerCallable.cancel(CancellableRegionServerCallable.java:86)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.cancel(ResultBoundedCompletionService.java:106)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService.cancelAll(ResultBoundedCompletionService.java:274)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.call(RpcRetryingCallerWithReadReplicas.java:224)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:445)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:409)
at HBaseThreadedGet.doWork(HBaseThreadedGet.java:45)
at HBaseThreadedGet.main(HBaseThreadedGet.java:19)
{code}
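The NoSuchElementException at the top of the trace comes from ArrayDeque's no-arg remove(), which throws on an empty deque, whereas poll() returns null instead. A small sketch of the difference (illustrative only; not the actual HBase fix):

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;

public class Main {
    public static void main(String[] args) {
        // Stand-in for a queue of pending calls that may already be empty
        // by the time cancellation tries to pull from it.
        ArrayDeque<String> callsToWrite = new ArrayDeque<>();

        // poll() is the safe way to drain a possibly-empty queue.
        String next = callsToWrite.poll();
        System.out.println(next == null ? "empty" : next);

        // remove() on the same empty deque throws.
        try {
            callsToWrite.remove();
            System.out.println("removed");
        } catch (NoSuchElementException e) {
            System.out.println("NoSuchElementException");
        }
    }
}
```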





Re: [DISCUSS] More Shading

2017-04-11 Thread Jesse Yates
> would ask for a new repo [1]. In here we'd create a new mvn project.

Why get a new repo? A different (new) HBase mvn module that is depended
upon by other modules should cover it, IIRC. That module can handle all
the shading and not include transitive dependencies. Then in "downstream
modules" you should be able to just use the shaded classes. Building would
require doing a 'mvn install', but that's nothing new.

If this was going to support the client I'd be concerned with the size of the
resulting jar, with all the potential dependencies, but meh - it's the
server only!

Just my $0.02,
Jesse

On Tue, Apr 11, 2017 at 10:23 AM York, Zach  wrote:

> Should we allow dependent projects (such as Phoenix) to weigh in on this
> issue since they are likely going to be the ones that benefit/are effected?
>
> On 4/11/17, 10:17 AM, "York, Zach"  wrote:
>
> +1 (non-binding)
>
> This sounds like a good idea to me!
>
> Zach
>
> On 4/11/17, 9:48 AM, "saint@gmail.com on behalf of Stack" <
> saint@gmail.com on behalf of st...@duboce.net> wrote:
>
> Let me revive this thread.
>
> Recall, we are stuck on old or particular versions of critical
> libs. We are
> unable to update because our versions will clash w/ versions from
> upstreamer hadoop2.7/2.8/3.0/spark, etc. We have a shaded client.
> We need
> to message downstreamers that they should use it going forward.
> This will
> help going forward but it will not inoculate our internals nor an
> existing
> context where we'd like to be a compatible drop-in.
>
> We could try hackery filtering transitive includes up in poms for
> each
> version of hadoop/spark that we support but in the end, its a
> bunch of
> effort, hard to test, and we are unable to dictate the CLASSPATH
> order in
> all situations.
>
> We could try some shading voodoo inline w/ build. Because shading
> is a
> post-package step and because we are modularized and shading
> includes the
> shaded classes in the artifact produced, we'd end up w/ multiple
> copies of
> guava/netty/etc. classes, an instance per module that makes a
> reference.
>
> Lets do Sean's idea of a pre-build step where we package and
> relocate
> ('shade') critical dependencies (Going by the thread above, Ram,
> Anoop, and
> Andy seem good w/ the general idea).
>
> In implementation, we (The HBase PMC) would ask for a new repo
> [1]. In here
> we'd create a new mvn project. This project would produce a single
> artifact
> (jar) called hbase-dependencies or hbase-3rdparty or
> hbase-shaded-3rdparty
> libs. In it would be relocated core libs such as guava and netty
> (and maybe
> protobuf). We'd publish this artifact and then have hbase depend
> on it
> changing all references to point at the relocation: e.g. rather
> than import
> com.google.common.collect.Maps, we'd import
> org.apache.hadoop.hbase.com.google.common.collect.Maps.
>
> We (The HBase PMC) will have to make releases of this new artifact
> and vote
> on them. I think it will be a relatively rare event.
>
> I'd be up for doing the first cut if folks are game.
>
> St.Ack
>
>
> 1. URL via Sean but for committers to view only:
> https://reporeq.apache.org/
>
> On Sun, Oct 2, 2016 at 10:29 PM, ramkrishna vasudevan <
> ramkrishna.s.vasude...@gmail.com> wrote:
>
> > +1 for Sean's ideas. Bundling all the dependent libraries and
> shading them
> > into one jar and HBase referring to it makes sense and should
> avoid some of
> > the pain in terms of IDE usage. Stack's doc clearly talks about
> the IDE
> > issues that we may get after this protobuf shading goes in. It
> may be
> > difficult for newcomers and those who don't know this
> background of why it
> > has to be like that.
> >
> > Regards
> > Ram
> >
> > On Sun, Oct 2, 2016 at 10:51 AM, Stack  wrote:
> >
> > > On Sat, Oct 1, 2016 at 6:32 PM, Jerry He 
> wrote:
> > >
> > > > How is the proposal going to impact the existing
> shaded-client and
> > > > shaded-server modules, making them unnecessary and go away?
> > > >
> > >
> > > No. We still need the blanket shading of hbase client and
> server.
> > >
> > > This effort is about our internals. We have a mess of other
> components
> > all
> > > up inside us such as HDFS, etc., each with their own sets of
> dependencies
> > > many of which we have in common. This project is about
> making it so we
> > > can upgrade at a rate independent of when our upstreamers
> choose to
> > change.
> > >
> > >
> > > > It doesn't seem so.  These modules are supposed to shade
> HBase and
>  
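The relocation ('shading') proposal discussed in this thread boils down to rewriting package-name prefixes so HBase's copy of a library cannot clash with anyone else's. As an illustrative sketch only (the real maven-shade-plugin performs this rewrite on bytecode, not strings; the guava target below is the one named in the thread, while the netty target is a hypothetical example):

```python
# Sketch of shade-style "relocation": rewrite third-party package
# prefixes so they live under an HBase-owned namespace. The real
# maven-shade-plugin performs this rewrite on bytecode, not strings.
RELOCATIONS = {
    "com.google.common.": "org.apache.hadoop.hbase.com.google.common.",
    "io.netty.": "org.apache.hadoop.hbase.io.netty.",  # hypothetical target
}

def relocate(class_name: str) -> str:
    """Return the relocated class name, or the original if no rule applies."""
    for old, new in RELOCATIONS.items():
        if class_name.startswith(old):
            return new + class_name[len(old):]
    return class_name

print(relocate("com.google.common.collect.Maps"))
# -> org.apache.hadoop.hbase.com.google.common.collect.Maps
```

Classes with no matching rule (e.g. java.* or org.apache.hadoop.hbase.* itself) pass through unchanged, which is why only the relocated third-party libs need to be pre-built and published.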

[jira] [Resolved] (HBASE-17438) Server side change to accommodate limit by number of mutations

2017-04-11 Thread Janos Gub (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janos Gub resolved HBASE-17438.
---
Resolution: Duplicate

> Server side change to accommodate limit by number of mutations
> --
>
> Key: HBASE-17438
> URL: https://issues.apache.org/jira/browse/HBASE-17438
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Janos Gub
>
> HBASE-17408 introduced per request limit by number of mutations for the 
> client.
> This JIRA is to add support on server side, in similar way to HBASE-14946.
> Server side support would keep a counter for the mutations. When the counter 
> exceeds threshold for limit of number of mutations, exception would be 
> returned to client so that client retries the remaining mutations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-17838) Replace fixed Executor Threads with dynamic thread pool

2017-04-11 Thread Janos Gub (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janos Gub resolved HBASE-17838.
---
Resolution: Won't Fix

> Replace fixed Executor Threads with dynamic thread pool 
> 
>
> Key: HBASE-17838
> URL: https://issues.apache.org/jira/browse/HBASE-17838
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, proc-v2
>Reporter: Janos Gub
>Assignee: Janos Gub
> Fix For: 2.0.0
>
> Attachments: initial.patch
>
>






Re: [DISCUSS] More Shading

2017-04-11 Thread Sean Busbey
A new module probably won't work because we need to reference the relocated
classes in source code, and Maven won't produce the relocated classes until
the "package" phase.

IDEs in particular will barf all over the place.

On Tue, Apr 11, 2017 at 1:04 PM Jesse Yates  wrote:

> > would ask for a new repo [1]. In here we'd create a new mvn project.
>
> Why get a new repo? A different (new) HBase mvn module that is depended
> upon via other modules should cover it, IIRC. That module can handle all
> the shading and not include transitive dependencies. Then in "downstream
> modules" you should be able to just use the shaded classes. Building would
> require doing a 'mvn install', but that's nothing new.
>
> If this was going to support the client I'd be concerned with size of the
> resulting jar, with all the potential dependencies, but meh - its the
> server only!
>
> Just my $0.02,
> Jesse
>

Re: [DISCUSS] More Shading

2017-04-11 Thread Jesse Yates
Right, hence the install/package phase (probably with -DskipTests) first.
Probably only have to do this occasionally, as dependencies change.

Agree the IDEs are really unhappy with this though.

Seems like more headache to create another repo, but I'm not too tied
either way. Just asking. Thanks Sean.

-J

On Tue, Apr 11, 2017 at 12:24 PM Sean Busbey  wrote:

> A new module probably won't work due to the fact that we need to reference
> the relocated classes in source code and maven won't do that until the
> "package" phase.
>
> IDEs in particular will barf all over the place.
>
> On Tue, Apr 11, 2017 at 1:04 PM Jesse Yates 
> wrote:
>
> > > would ask for a new repo [1]. In here we'd create a new mvn project.
> >
> > Why get a new repo? A different (new) HBase mvn module that is depended
> > upon via other modules should cover it, IIRC. That module can handle all
> > the shading and not include transitive dependencies. Then in "downstream
> > modules" you should be able to just use the shaded classes. Building
> would
> > require doing a 'mvn install', but that's nothing new.
> >
> > If this was going to support the client I'd be concerned with size of the
> > resulting jar, with all the potential dependencies, but meh - its the
> > server only!
> >
> > Just my $0.02,
> > Jesse
> >
> > On Tue, Apr 11, 2017 at 10:23 AM York, Zach  wrote:
> >
> > > Should we allow dependent projects (such as Phoenix) to weigh in on
> this
> > > issue since they are likely going to be the ones that benefit/are
> > effected?
> > >
> > > On 4/11/17, 10:17 AM, "York, Zach"  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > This sounds like a good idea to me!
> > >
> > > Zach
> > >
> > > On 4/11/17, 9:48 AM, "saint@gmail.com on behalf of Stack" <
> > > saint@gmail.com on behalf of st...@duboce.net> wrote:
> > >
> > > Let me revive this thread.
> > >
> > > Recall, we are stuck on old or particular versions of critical
> > > libs. We are
> > > unable to update because our versions will clash w/ versions
> from
> > > upstreamer hadoop2.7/2.8/3.0/spark, etc. We have a shaded
> client.
> > > We need
> > > to message downstreamers that they should use it going forward.
> > > This will
> > > help going forward but it will not inoculate our internals nor
> an
> > > existing
> > > context where we'd like to be a compatible drop-in.
> > >
> > > We could try hackery filtering transitive includes up in poms
> for
> > > each
> > > version of hadoop/spark that we support but in the end, its a
> > > bunch of
> > > effort, hard to test, and we are unable to dictate the
> CLASSPATH
> > > order in
> > > all situations.
> > >
> > > We could try some shading voodoo inline w/ build. Because
> shading
> > > is a
> > > post-package step and because we are modularized and shading
> > > includes the
> > > shaded classes in the artifact produced, we'd end up w/
> multiple
> > > copies of
> > > guava/netty/etc. classes, an instance per module that makes a
> > > reference.
> > >
> > > Lets do Sean's idea of a pre-build step where we package and
> > > relocate
> > > ('shade') critical dependencies (Going by the thread above,
> Ram,
> > > Anoop, and
> > > Andy seems good w/ general idea).
> > >
> > > In implementation, we (The HBase PMC) would ask for a new repo
> > > [1]. In here
> > > we'd create a new mvn project. This project would produce a
> > single
> > > artifact
> > > (jar) called hbase-dependencies or hbase-3rdparty or
> > > hbase-shaded-3rdparty
> > > libs. In it would be relocated core libs such as guava and
> netty
> > > (and maybe
> > > protobuf). We'd publish this artifact and then have hbase
> depend
> > > on it
> > > changing all references to point at the relocation: e.g. rather
> > > than import
> > > com.google.common.collect.Maps, we'd import
> > > org.apache.hadoop.hbase.com.google.common.collect.Maps.
> > >
> > > We (The HBase PMC) will have to make releases of this new
> > artifact
> > > and vote
> > > on them. I think it will be a relatively rare event.
> > >
> > > I'd be up for doing the first cut if folks are game.
> > >
> > > St.Ack
> > >
> > >
> > > 1. URL via Sean but for committers to view only:
> > > https://reporeq.apache.org/
> > >
> > > On Sun, Oct 2, 2016 at 10:29 PM, ramkrishna vasudevan <
> > > ramkrishna.s.vasude...@gmail.com> wrote:
> > >
> > > > +1 for Sean's ideas. Bundling all the dependent libraries and
> > > shading them
> > > > into one jar and HBase referring to it makes sense and should
> > > avoid some of
> > > > the pain in terms of IDE usage. Stack's doc clearly talks
> about
> > > the IDE
> > > > issues that we may get after this protobuf shading goes in.
> It
> > > may be
> > > > diffic

Re: [DISCUSS] More Shading

2017-04-11 Thread Stack
On Tue, Apr 11, 2017 at 10:23 AM, York, Zach  wrote:

> Should we allow dependent projects (such as Phoenix) to weigh in on this
> issue since they are likely going to be the ones that benefit/are affected?
>
I dumped a pointer to here into dev@phoenix.
St.Ack



> > On Sun, Oct 2, 2016 at 10:51 AM, Stack  wrote:
> >
> > > On Sat, Oct 1, 2016 at 6:32 PM, Jerry He 
> wrote:
> > >
> > > > How is the proposal going to impact the existing
> shaded-client and
> > > > shaded-server modules, making them unnecessary and go away?
> > > >
> > >
> > > No. We still need the blanket shading of hbase client and
> server.
> > >
> > > This effort is about our internals. We have a mess of other
> components
> > all
> > > up inside us such as HDFS, etc., each with their own sets of
> dependencies
> > > many of which we have in common. This project is about
> making it so we
> > > can upgrade at a rate independent of when our upstreamers
> choose to
> > change.
> > >
> > >
> > > > It doesn't seem so.  These modules are supposed to shade
> HBase and
> > > upstream
> > > > from downstream users.
> > > >
> > >
> > > Agree.
> > >
> > > Thanks for drawing out the difference between these two
> shading efforts,
> > >
> > > St.Ack
> > >
> > >
> > >
> > > > Thanks.
> > > >
> > > > Jerry
> > > >
> > > > On Sat, Oct 1, 2016 at 2:33 PM, Andrew Purtell <
> > andrew.purt...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > 

[jira] [Created] (HBASE-17905) [hbase-spark] bulkload does not work when table not exist

2017-04-11 Thread Yi Liang (JIRA)
Yi Liang created HBASE-17905:


 Summary: [hbase-spark]  bulkload does not work when table not exist
 Key: HBASE-17905
 URL: https://issues.apache.org/jira/browse/HBASE-17905
 Project: HBase
  Issue Type: Bug
Reporter: Yi Liang
Assignee: Yi Liang


When using the HBase-Spark bulkload API, a table name argument is required, and 
the bulkload runs successfully only if the table already exists in HBase. If the 
table does not exist, the bulkload fails to run, and it does not even report an 
error or throw an exception. 





Failure: HBase Generate Website

2017-04-11 Thread Apache Jenkins Server
Build status: Failure

The HBase website has not been updated to incorporate HBase commit 
${HBASE_GIT_SHA}.

See https://builds.apache.org/job/hbase_generate_website/964/console




[jira] [Resolved] (HBASE-8220) can we record the count opened HTable for HTablePool

2017-04-11 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved HBASE-8220.
---
Resolution: Won't Fix

> can we record the count opened HTable for HTablePool
> 
>
> Key: HBASE-8220
> URL: https://issues.apache.org/jira/browse/HBASE-8220
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Jianwei Cui
> Attachments: 8220-trunk-v1.txt, 8220-trunk-v2.txt, 
> 8220-trunk-v3-reattached.txt, 8220-trunk-v3.txt, 8220-trunk-v4.txt, 
> HBASE-8220-0.94.3.txt, HBASE-8220-0.94.3.txt, HBASE-8220-0.94.3.txt-v2, 
> HBASE-8220-0.94.3-v2.txt, HBASE-8220-0.94.3-v3.txt, HBASE-8220-0.94.3-v4.txt, 
> HBASE-8220-0.94.3-v5.txt
>
>
> In HTablePool, we have a method getCurrentPoolSize(...) to get how many 
> opened HTables have been pooled. However, we don't know ConcurrentOpenedHTable, 
> which means the count of HTables obtained from HTablePool.getTable(...) and not 
> returned to the HTablePool by PooledTable.close(). The ConcurrentOpenedHTable 
> count may be meaningful because it indicates how many HTables should be opened 
> for the application, which may help us set an appropriate MaxSize for the 
> HTablePool. Therefore, we can add a ConcurrentOpenedHTable counter in HTablePool.
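A minimal sketch of the proposed counter, using an illustrative pool rather than HBase's actual HTablePool API: track checked-out handles alongside the idle pool size.

```python
# Illustrative pool (not HBase's HTablePool API): tracks both the idle
# pooled objects and the number currently checked out, which is the
# "ConcurrentOpenedHTable"-style counter the report asks for.
class Pool:
    def __init__(self):
        self._idle = []
        self.checked_out = 0  # handles taken via get() and not yet returned

    def get(self):
        """Hand out an idle object, or a new one if the pool is empty."""
        self.checked_out += 1
        return self._idle.pop() if self._idle else object()

    def put(self, obj):
        """Return a handle to the pool."""
        self.checked_out -= 1
        self._idle.append(obj)

    def current_pool_size(self):
        return len(self._idle)
```

Watching the peak of `checked_out` over time is what would let an operator size the pool's MaxSize appropriately.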





[jira] [Resolved] (HBASE-10396) The constructor of HBaseAdmin may close the shared HConnection

2017-04-11 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved HBASE-10396.

Resolution: Won't Fix

0.94 is EOL. Reopen it if objection on mailing list.

> The constructor of HBaseAdmin may close the shared HConnection 
> ---
>
> Key: HBASE-10396
> URL: https://issues.apache.org/jira/browse/HBASE-10396
> Project: HBase
>  Issue Type: Bug
>  Components: Admin, Client
>Affects Versions: 0.94.16
>Reporter: Jianwei Cui
> Attachments: HBASE-10396-0.94-v1.patch, HBASE-10396-0.94-v2.patch
>
>
> HBaseAdmin has the constructor:
> {code}
>   public HBaseAdmin(Configuration c)
>   throws MasterNotRunningException, ZooKeeperConnectionException {
> this.conf = HBaseConfiguration.create(c);
> this.connection = HConnectionManager.getConnection(this.conf);
> ...
> {code}
> As shown in the above code, HBaseAdmin will get a cached HConnection or create a 
> new HConnection and use this HConnection to connect to the Master. Then, 
> HBaseAdmin will delete the HConnection when connecting to the master fails, as 
> follows:
> {code}
> while ( true ){
>   try {
> this.connection.getMaster();
> return;
>   } catch (MasterNotRunningException mnre) {
> HConnectionManager.deleteStaleConnection(this.connection);
> this.connection = HConnectionManager.getConnection(this.conf);
>   }
> {code} 
> The above code invokes HConnectionManager#deleteStaleConnection to delete 
> the HConnection from the global HConnection cache. The risk is that the deleted 
> HConnection might be shared by other threads, such as HTable or HTablePool. 
> Threads sharing the deleted HConnection will then get a closed-HConnection 
> exception:
> {code}
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61bc59aa
>  closed
> {code}
> If users use HTablePool, the situation becomes worse because closing an 
> HTable only returns it to the HTablePool, which won't reduce the reference 
> count of the closed HConnection. The closed HConnection will then keep being 
> used until the HTablePool is cleared. In 0.94, some modules such as the REST 
> server use HTablePool and may therefore suffer from this problem. 
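The hazard described above can be sketched in miniature (illustrative only, not the real HConnectionManager API): evicting and closing a cached connection that other holders still share leaves every holder with a closed handle.

```python
# Illustrative sketch (not the real HConnectionManager API) of the hazard:
# deleting a cached connection that other holders still share closes it
# out from under them.
class Connection:
    def __init__(self):
        self.closed = False

_cache = {}

def get_connection(conf_key):
    """Return the shared cached connection, creating it on first use."""
    return _cache.setdefault(conf_key, Connection())

def delete_stale_connection(conf_key):
    """Close and evict the connection even if other holders still share it."""
    conn = _cache.pop(conf_key, None)
    if conn:
        conn.closed = True
```

Reference counting (only closing when the last holder releases) is the usual way out; evicting unconditionally, as sketched here, is what produces the "connection closed" errors in the report.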





Re: [DISCUSS] More Shading

2017-04-11 Thread Nick Dimiduk
> This effort is about our internals. We have a mess of other components all
> up inside us such as HDFS, etc., each with their own sets of dependencies
> many of which we have in common. This project is about making it so we
> can upgrade at a rate independent of when our upstreamers choose to
change.

Pardon as I try to get a handle on the intention behind this thread.

If the above quote is true, then I think what we want is a set of shaded
Hadoop client libs that we can depend on so as to not get all the
transitive deps. Hadoop doesn't provide it, but we could do so ourselves
with (yet another) module in our project. Assuming, that is, the upstream
client interfaces are well defined and don't leak stuff we care about. It
also creates a terrible nightmare for anyone downstream of us who
repackages HBase. The whole thing is extremely error-prone, because there's
not very good tooling for this. Realistically, we end up with a combination
of the enforcer plugin and maybe our own custom plugin to ensure clean
transitive dependencies...

I guess the suggestion of the external repo containing our shaded fork of
everything we depend on allows us to continue to compile and run on Hadoop's
transitive dependency list w/o actually using any of it - do I have that right?
How would we version this thing?

Between these two choices, I prefer the former as a "more correct"
solution, but it depends entirely on how clean of a shaded hadoop we can
reliably produce inline our build.

On Tue, Apr 11, 2017 at 1:03 PM, Stack  wrote:

> On Tue, Apr 11, 2017 at 10:23 AM, York, Zach  wrote:
>
> > Should we allow dependent projects (such as Phoenix) to weigh in on this
> > issue since they are likely going to be the ones that benefit/are
> effected?
> >
> > I dumped a pointer to here into dev@phoenix.
> St.Ack
>
>
>