Re: Backup Implementation (WAS => Re: [DISCUSSION] MR jobs started by Master or RS)

2016-09-26 Thread Devaraj Das
Vlad, thinking about it a little more, since the master is not orchestrating 
the backup, let's make it dead simple as a first pass. I think we should do the 
following: move all or most of the Backup/Restore operations (especially the MR 
job spawns) to the client. Ignore security for the moment - let's live with our 
current "limitation" for tools that need HDFS access: they need to run as hbase 
(or whatever user the hbase daemons run as). Consistency/cleanup also needs to 
be handled as much as possible - if the client fails after initiating the 
backup/restore, who restores consistency in the hbase:backup table, or cleans 
up the half-copied data in the hdfs dirs, etc.?
In the future, if someone needs to support self-service operations (any user 
can back up / restore his or her own tables), we can discuss the "backup 
service" or something else.
Folks - Stack / Andrew / Matteo / others, please speak up if you disagree with 
the above. Obviously, I would like to get over this merge-to-master hump.


From: Vladimir Rodionov 
Sent: Monday, September 26, 2016 11:48 AM
To: dev@hbase.apache.org
Subject: Re: Backup Implementation (WAS => Re: [DISCUSSION] MR jobs started by 
Master or RS)

Ok, we had an internal discussion and this is what we are suggesting now:

1. We will create a separate module (hbase-backup) and move the server-side
code there.
2. Master and RS will be MR- and backup-free.
3. The code from Master will be moved into a standalone service
(BackupService) for procedure orchestration, operation resume/abort, and
SECURITY. This means one additional process, similar to the REST/Thrift
server, will be required to operate backup.

I would like to note that a separate process running under the hbase super
user is required to implement security properly in a multi-tenant environment;
otherwise, only the hbase super user will be allowed to operate backups.

Please let us know what you think, HBase people.

-Vlad




[jira] [Created] (HBASE-16715) Signing keys could not be imported

2016-09-26 Thread Francis Chuang (JIRA)
Francis Chuang created HBASE-16715:
--

 Summary: Signing keys could not be imported
 Key: HBASE-16715
 URL: https://issues.apache.org/jira/browse/HBASE-16715
 Project: HBase
  Issue Type: Bug
Reporter: Francis Chuang


I am trying to import the signing keys to verify downloaded hbase releases, but 
it appears to fail:

$ wget -O /tmp/KEYS https://www-us.apache.org/dist/hbase/KEYS
Connecting to www-us.apache.org (140.211.11.105:443)
KEYS 100% |***| 50537   0:00:00 ETA
$ gpg --import /tmp/KEYS
gpg: directory '/root/.gnupg' created
gpg: new configuration file '/root/.gnupg/dirmngr.conf' created
gpg: new configuration file '/root/.gnupg/gpg.conf' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 945D66AF: public key "Jean-Daniel Cryans (ASF key) 
" imported
gpg: key D34B98D6: public key "Michael Stack " imported
gpg: key 30CD0996: public key "Michael Stack " imported
gpg: key AEC77EAF: public key "Todd Lipcon " imported
gpg: key F48B08A4: public key "Ted Yu (Apache Public Key) 
" imported
gpg: key 867B57B8: public key "Ramkrishna S Vasudevan (for code checkin) 
" imported
gpg: key 7CA45750: public key "Lars Hofhansl (CODE SIGNING KEY) 
" imported
gpg: key A1AC25A9: public key "Lars Hofhansl (CODE SIGNING KEY) 
" imported
gpg: key C7CFE328: public key "Lars Hofhansl (CODE SIGNING KEY) 
" imported
gpg: key E964B5FF: public key "Enis Soztutar (CODE SIGNING KEY) 
" imported
gpg: key 0D80DB7C: public key "Sean Busbey (CODE SIGNING KEY) 
" imported
gpg: key 8644EEB6: public key "Nick Dimiduk " imported
gpg: invalid radix64 character 3A skipped
gpg: CRC error; E1B6C3 - DFECFB
gpg: [don't know]: invalid packet (ctb=55)
gpg: read_block: read error: Invalid packet
gpg: import from '/tmp/KEYS' failed: Invalid keyring
gpg: Total number processed: 12
gpg:   imported: 12
gpg: no ultimately trusted keys found



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16714) Procedure V2 - use base class to remove duplicate set up test code in table DDL procedures

2016-09-26 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-16714:
--

 Summary: Procedure V2 - use base class to remove duplicate set up 
test code in table DDL procedures 
 Key: HBASE-16714
 URL: https://issues.apache.org/jira/browse/HBASE-16714
 Project: HBase
  Issue Type: Improvement
  Components: proc-v2, test
Affects Versions: 2.0.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang


All table DDL procedure tests have the same set-up. To avoid duplicate code 
and help maintain the existing tests, we should move the shared set-up into a 
base class.





[jira] [Reopened] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start

2016-09-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-11354:


We came across this in a 0.98 install; let's apply just to 0.98.

> HConnectionImplementation#DelayedClosing does not start
> ---
>
> Key: HBASE-11354
> URL: https://issues.apache.org/jira/browse/HBASE-11354
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
>Priority: Minor
> Attachments: HBASE_11354 (1).patch, HBASE_11354.patch, 
> HBASE_11354.patch, HBASE_11354.patch
>
>
> The method "createAndStart" in class DelayedClosing only creates an instance, 
> but forgets to start it, so the delayedClosing thread never runs.
> ConnectionManager#1623
> {code}
>   static DelayedClosing createAndStart(HConnectionImplementation hci){
> Stoppable stoppable = new Stoppable() {
>   private volatile boolean isStopped = false;
>   @Override public void stop(String why) { isStopped = true;}
>   @Override public boolean isStopped() {return isStopped;}
> };
> return new DelayedClosing(hci, stoppable);
>   }
> {code}
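The fix is simply to start the thread that the factory creates. Below is a minimal standalone sketch of the bug pattern and the fix; the classes here are illustrative stand-ins, not the actual HBase DelayedClosing/Chore types.

```java
// Standalone sketch: a "createAndStart" factory must actually start the
// worker thread it constructs, otherwise the cleaner never runs.
public class CreateAndStartSketch {
    static final class DelayedCloser extends Thread {
        volatile boolean ran = false;
        DelayedCloser() { setDaemon(true); }
        @Override public void run() { ran = true; }
    }

    // Buggy version: the thread is created but never scheduled.
    static DelayedCloser createOnly() {
        return new DelayedCloser();
    }

    // Fixed version: start the thread before handing it out, so the
    // behavior matches the method's "createAndStart" name.
    static DelayedCloser createAndStart() {
        DelayedCloser closer = new DelayedCloser();
        closer.start();
        return closer;
    }

    public static void main(String[] args) throws InterruptedException {
        DelayedCloser buggy = createOnly();
        Thread.sleep(100);
        System.out.println("buggy ran: " + buggy.ran);  // false: never started

        DelayedCloser fixed = createAndStart();
        fixed.join();
        System.out.println("fixed ran: " + fixed.ran);  // true
    }
}
```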





[jira] [Created] (HBASE-16713) Bring back connection caching as a client API

2016-09-26 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16713:
-

 Summary: Bring back connection caching as a client API
 Key: HBASE-16713
 URL: https://issues.apache.org/jira/browse/HBASE-16713
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Enis Soztutar
 Fix For: 2.0.0, 1.4.0


Connection.getConnection() is removed in master for good reasons. The 
connection lifecycle should always be explicit. We have replaced some of the 
functionality with ConnectionCache for the rest and thrift servers internally, 
but it is not exposed to clients.

It turns out our friends doing the hbase-spark connector work need similar 
connection caching behavior to what we have in the rest and thrift servers. At 
a higher level we want:
 - Spark executors should be able to run short-lived hbase tasks with low 
latency.
 - Short-lived tasks should be able to share the same connection, and should 
not pay the price of instantiating the cluster connection (which means a zk 
connection, meta cache, 200+ threads, etc).
 - Connections to the cluster should be closed if not used for some time. 
Spark executors are used for other tasks as well.
 - Spark jobs may be launched with different configuration objects, possibly 
connecting to different clusters in different jobs.
 - Although not a direct requirement for spark, different users should not 
share the same connection object.

Looking at the old code that we have in branch-1 for {{ConnectionManager}}, 
managed connections, and the code in ConnectionCache, I think we should build 
a first-class client-level API called ConnectionCache which will be a hybrid 
between ConnectionCache and the old ConnectionManager. The lifecycle of the 
ConnectionCache is still explicit, so I think, API-design-wise, this will fit 
into the current model.
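A hedged sketch of what such a client-level cache could look like: explicit lifecycle, one shared connection per user key, and idle entries evicted after a timeout. All names here (ConnectionCache, getConnection, evictIdle) and the Conn stand-in are illustrative assumptions, not the eventual HBase API.

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectionCacheSketch {
    // Stand-in for an expensive cluster connection (zk, meta cache, threads).
    static final class Conn implements AutoCloseable {
        final String user;
        boolean closed = false;
        Conn(String user) { this.user = user; }
        @Override public void close() { closed = true; }
    }

    static final class ConnectionCache implements AutoCloseable {
        private final Map<String, Conn> conns = new HashMap<>();
        private final Map<String, Long> lastUsed = new HashMap<>();
        private final long maxIdleMillis;

        ConnectionCache(long maxIdleMillis) { this.maxIdleMillis = maxIdleMillis; }

        // Short-lived tasks share the per-user connection instead of paying
        // the instantiation cost each time; different users get distinct ones.
        synchronized Conn getConnection(String user) {
            lastUsed.put(user, System.currentTimeMillis());
            return conns.computeIfAbsent(user, Conn::new);
        }

        // Called periodically (e.g. by a chore thread) to close idle connections.
        synchronized void evictIdle() {
            long now = System.currentTimeMillis();
            conns.entrySet().removeIf(e -> {
                if (now - lastUsed.get(e.getKey()) > maxIdleMillis) {
                    e.getValue().close();
                    return true;
                }
                return false;
            });
        }

        @Override public synchronized void close() {
            conns.values().forEach(Conn::close);
            conns.clear();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        try (ConnectionCache cache = new ConnectionCache(50)) {
            Conn a1 = cache.getConnection("alice");
            Conn a2 = cache.getConnection("alice");
            Conn b = cache.getConnection("bob");
            System.out.println("same for one user: " + (a1 == a2));
            System.out.println("distinct per user: " + (a1 != b));
            Thread.sleep(100);
            cache.evictIdle();
            System.out.println("evicted when idle: " + a1.closed);
        }
    }
}
```

The lifecycle stays explicit: the caller still owns and closes the cache itself, which matches the "explicit lifecycle" constraint above.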






[jira] [Created] (HBASE-16712) fix hadoop-3.0 profile mvn install

2016-09-26 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-16712:
--

 Summary: fix hadoop-3.0 profile mvn install
 Key: HBASE-16712
 URL: https://issues.apache.org/jira/browse/HBASE-16712
 Project: HBase
  Issue Type: Bug
  Components: build, hadoop3
Affects Versions: 2.0.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 2.0.0


After the compile is fixed, mvn install fails due to transitive dependencies 
coming from hadoop3. 





[jira] [Created] (HBASE-16711) Fix hadoop-3.0 profile compile

2016-09-26 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-16711:
--

 Summary: Fix hadoop-3.0 profile compile
 Key: HBASE-16711
 URL: https://issues.apache.org/jira/browse/HBASE-16711
 Project: HBase
  Issue Type: Bug
  Components: hadoop3, build
Affects Versions: 2.0.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 2.0.0


The -Dhadoop.profile=3.0 build is failing currently due to code deprecated in 
hadoop2 and removed in hadoop3.







Successful: hbase.apache.org HTML Checker

2016-09-26 Thread Apache Jenkins Server
Successful

If successful, the HTML and link-checking report for http://hbase.apache.org is 
available at 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/62/artifact/link_report/index.html.

If failed, see 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/62/console.

[jira] [Resolved] (HBASE-16694) Reduce garbage for onDiskChecksum in HFileBlock

2016-09-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-16694.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 0.98.23
   1.4.0
   2.0.0

> Reduce garbage for onDiskChecksum in HFileBlock
> ---
>
> Key: HBASE-16694
> URL: https://issues.apache.org/jira/browse/HBASE-16694
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
>Priority: Minor
> Fix For: 2.0.0, 1.4.0, 0.98.23
>
> Attachments: HBASE-16694-master.patch
>
>
> Currently, when finishing an HFileBlock, we create a new byte[] for 
> onDiskChecksum; we can reuse it instead.
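The allocation-reuse idea can be sketched outside HBase as follows. This is a simplified illustration, not the actual HFileBlock code; the CRC32 chunking and the 4-bytes-per-chunk layout are assumptions for the example.

```java
import java.util.zip.CRC32;

// Instead of allocating a fresh onDiskChecksum byte[] every time a block is
// finished, keep one buffer and reallocate only when the needed size grows.
public class ChecksumBufferReuse {
    private byte[] onDiskChecksum = new byte[0];
    private int allocations = 0;

    // Returns the checksum bytes for this block, reusing the buffer.
    byte[] finishBlock(byte[] blockData, int bytesPerChecksum) {
        int chunks = (blockData.length + bytesPerChecksum - 1) / bytesPerChecksum;
        int needed = chunks * 4; // 4-byte CRC32 value per chunk
        if (onDiskChecksum.length < needed) {
            onDiskChecksum = new byte[needed];
            allocations++;
        }
        CRC32 crc = new CRC32();
        for (int c = 0; c < chunks; c++) {
            int off = c * bytesPerChecksum;
            int len = Math.min(bytesPerChecksum, blockData.length - off);
            crc.reset();
            crc.update(blockData, off, len);
            int v = (int) crc.getValue();
            int p = c * 4;
            onDiskChecksum[p] = (byte) (v >>> 24);
            onDiskChecksum[p + 1] = (byte) (v >>> 16);
            onDiskChecksum[p + 2] = (byte) (v >>> 8);
            onDiskChecksum[p + 3] = (byte) v;
        }
        return onDiskChecksum;
    }

    public static void main(String[] args) {
        ChecksumBufferReuse w = new ChecksumBufferReuse();
        byte[] block = new byte[64 * 1024];
        for (int i = 0; i < 100; i++) {
            w.finishBlock(block, 16 * 1024); // 100 same-size blocks...
        }
        // ...allocate the checksum buffer once rather than 100 times.
        System.out.println("allocations: " + w.allocations);
    }
}
```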





Re: Need hand-holding for 1.1.7 release management

2016-09-26 Thread Misty Stanley-Jones
Thanks, I have followed those steps and now I am attempting to build the
RC. Let's see how it goes.

On Tue, Sep 27, 2016, at 05:20 AM, Sean Busbey wrote:
> My ~/.m2/settings.xml file looks like the example from the "pushing
> stuff to maven" guide[1]:
> 
> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>   xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
>   http://maven.apache.org/xsd/settings-1.0.0.xsd">
>   <servers>
>     <server>
>       <id>apache.releases.https</id>
>       <username>busbey</username>
>       <password><!-- encrypted password elided --></password>
>     </server>
>   </servers>
> </settings>
> 
> 
> I don't bother including a section for publishing SNAPSHOTs because I
> don't publish SNAPSHOTs.
> 
> Note that you'll need to follow the maven password encryption guide to
> avoid storing your plaintext ASF creds[2] and you should use Maven
> 3.2.1+ so that you don't need to put the password(s) on the command
> invocation.
> 
> 
> [1]: http://www.apache.org/dev/publishing-maven-artifacts.html#dev-env
> [2]: http://maven.apache.org/guides/mini/guide-encryption.html
> 
> On Mon, Sep 26, 2016 at 11:33 AM, Misty Stanley-Jones 
> wrote:
> > Is anyone around who can help me with some of the release management
> > steps? I think I have updated the KEYS file correctly, but now I am
> > stuck on what my ~/.m2/settings.xml should look like. I think I'm most
> > of the way there...
> 
> 
> 
> -- 
> busbey


Re: Need hand-holding for 1.1.7 release management

2016-09-26 Thread Sean Busbey
My ~/.m2/settings.xml file looks like the example from the "pushing
stuff to maven" guide[1]:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
  http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <id>apache.releases.https</id>
      <username>busbey</username>
      <password><!-- encrypted password elided --></password>
    </server>
  </servers>
</settings>


I don't bother including a section for publishing SNAPSHOTs because I
don't publish SNAPSHOTs.

Note that you'll need to follow the maven password encryption guide to
avoid storing your plaintext ASF creds[2] and you should use Maven
3.2.1+ so that you don't need to put the password(s) on the command
invocation.


[1]: http://www.apache.org/dev/publishing-maven-artifacts.html#dev-env
[2]: http://maven.apache.org/guides/mini/guide-encryption.html

On Mon, Sep 26, 2016 at 11:33 AM, Misty Stanley-Jones  wrote:
> Is anyone around who can help me with some of the release management
> steps? I think I have updated the KEYS file correctly, but now I am
> stuck on what my ~/.m2/settings.xml should look like. I think I'm most
> of the way there...



-- 
busbey


Re: Backup Implementation (WAS => Re: [DISCUSSION] MR jobs started by Master or RS)

2016-09-26 Thread Vladimir Rodionov
Ok, we had an internal discussion and this is what we are suggesting now:

1. We will create a separate module (hbase-backup) and move the server-side
code there.
2. Master and RS will be MR- and backup-free.
3. The code from Master will be moved into a standalone service
(BackupService) for procedure orchestration, operation resume/abort, and
SECURITY. This means one additional process, similar to the REST/Thrift
server, will be required to operate backup.

I would like to note that a separate process running under the hbase super
user is required to implement security properly in a multi-tenant environment;
otherwise, only the hbase super user will be allowed to operate backups.

Please let us know what you think, HBase people.

-Vlad



On Sat, Sep 24, 2016 at 2:49 PM, Stack  wrote:

> On Sat, Sep 24, 2016 at 9:58 AM, Andrew Purtell 
> wrote:
>
> > At branch merge voting time now more eyes are getting on the design
> issues
> > with dissenting opinion emerging. This is the branch merge process
> working
> > as our community has designed it. Because this is the first full project
> > review of the code and implementation I think we all have to be
> flexible. I
> > see the community as trying to narrow the technical objection at issue to
> > the smallest possible scope. It's simple: don't call out to an external
> > execution framework we don't own from core master (and by extension
> > regionserver) code. We had this objection before to a proposed external
> > compaction implementation for
> > MOB so should not come as a surprise. Please let me know if I have
> > misstated this.
> >
> >
> The above is my understanding also.
>
>
> > This would seem to require a modest refactor of coordination to move
> > invocation of MR code out from any core code path. To restate what I
> think
> > is an emerging recommendation: Move cross HBase and MR coordination to a
> > separate tool. This tool can ask the master to invoke procedures on the
> > HBase side that do first mile export and last mile restore. (Internally
> the
> > tool can also use the procedure framework for state durability, perhaps,
> > just a thought.) Then the tool can further drive the things done with MR
> > like shipping data off cluster or moving remote data in place and
> preparing
> > it for import. These activities do not need procedure coordination and
> > involvement of the HBase master. Only the first and last mile of the
> > process needs atomicity within the HBase deploy. Please let me know if I
> > have misstated this.
> >
> >
> > Above is my understanding of our recommendation.
>
> St.Ack
>
>
>
> > > On Sep 24, 2016, at 8:17 AM, Ted Yu  wrote:
> > >
> > > bq. procedure gives you a retry mechanism on failure
> > >
> > > We do need this mechanism. Take a look at the multi-step
> > > in FullTableBackupProcedure, etc.
> > >
> > > bq. let the user export it later when he wants
> > >
> > > This would make supporting security more complex (user A shouldn't be
> > > exporting user B's backup). And it is not user friendly - at the time
> > > backup request is issued, the following is specified:
> > >
> > > +  + " BACKUP_ROOT The full root path to store the backup
> > > image,\n"
> > > +  + " the prefix can be hdfs, webhdfs or
> gpfs\n"
> > >
> > > Backup root is an integral part of backup manifest.
> > >
> > > Cheers
> > >
> > >
> > > On Sat, Sep 24, 2016 at 7:59 AM, Matteo Bertozzi <
> > theo.berto...@gmail.com>
> > > wrote:
> > >
> > >>> On Sat, Sep 24, 2016 at 7:19 AM, Ted Yu  wrote:
> > >>>
> > >>> Ideally the export should have one job running which does the retry
> (on
> > >>> failed partition) itself.
> > >>>
> > >>
> > >> procedure gives you a retry mechanism on failure. if you don't use
> that,
> > >> then you don't need procedure.
> > >> if you want you can start a procedure executor in a non master process
> > (the
> > >> hbase-procedure is a separate package and does not depend on master).
> > but
> > >> again, export seems a case where you don't need procedure.
> > >>
> > >> like snapshot, the logic may just be: ask the master to take a backup.
> > and
> > >> let the user export it later when he wants. so you avoid having a MR
> job
> > >> started by the master, since people do not seem to like it.
> > >>
> > >> for restore (I think that is where you use the MR splitter) you can
> > >> probably just have a backup ready (already split). there is
> already a
> > >> jira that should do that HBASE-14135. instead of doing the operation
> of
> > >> split/merge on restore. you consolidate the backup "offline" (mr job
> > >> started by the user) and then ask to restore the backup.
> > >>
> > >>
> > >>>
> > >>> On Sat, Sep 24, 2016 at 7:04 AM, Matteo Bertozzi <
> > >> theo.berto...@gmail.com>
> > >>> wrote:
> > >>>
> >  as far as I understand the code, you don't need procedure for the
> > >> export
> >  itself.
> >  

[jira] [Created] (HBASE-16710) Add ZStandard Codec to Compression.java

2016-09-26 Thread churro morales (JIRA)
churro morales created HBASE-16710:
--

 Summary: Add ZStandard Codec to Compression.java
 Key: HBASE-16710
 URL: https://issues.apache.org/jira/browse/HBASE-16710
 Project: HBase
  Issue Type: Task
Affects Versions: 2.0.0
Reporter: churro morales
Assignee: churro morales
Priority: Minor


HADOOP-13578 is adding the ZStandardCodec to hadoop. This is a placeholder to 
ensure it gets added to hbase once it lands upstream.





[jira] [Resolved] (HBASE-16709) Drop hadoop-1.1 profile in pom.xml for master branch

2016-09-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-16709.

Resolution: Duplicate

> Drop hadoop-1.1 profile in pom.xml for master branch
> 
>
> Key: HBASE-16709
> URL: https://issues.apache.org/jira/browse/HBASE-16709
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Currently the following modules have hadoop-1.1 profile in pom.xml:
> {code}
>   hadoop-1.1
> ./hbase-client/pom.xml
>   hadoop-1.1
> ./hbase-common/pom.xml
>  hadoop-1.1
> ./hbase-examples/pom.xml
>   hadoop-1.1
> ./hbase-external-blockcache/pom.xml
>   hadoop-1.1
> ./hbase-it/pom.xml
>   hadoop-1.1
> ./hbase-prefix-tree/pom.xml
>   hadoop-1.1
> ./hbase-procedure/pom.xml
>   hadoop-1.1
> ./hbase-server/pom.xml
>   hadoop-1.1
> ./hbase-shell/pom.xml
> hadoop-1.1
> ./hbase-testing-util/pom.xml
>   hadoop-1.1
> ./hbase-thrift/pom.xml
> {code}
> hadoop-1.1 profile can be dropped in the above pom.xml for hbase 2.0





[jira] [Resolved] (HBASE-14776) Rewrite smart-apply-patch.sh to use 'git am' or 'git apply' rather than 'patch'

2016-09-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-14776.
-
   Resolution: Won't Fix
 Assignee: (was: Sean Busbey)
Fix Version/s: (was: 2.0.0)

obviated by our move to yetus.

> Rewrite smart-apply-patch.sh to use 'git am' or 'git apply' rather than 
> 'patch'
> ---
>
> Key: HBASE-14776
> URL: https://issues.apache.org/jira/browse/HBASE-14776
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.0
>Reporter: Misty Stanley-Jones
> Attachments: HBASE-14776.patch
>
>
> We require patches to be created using 'git format-patch' or 'git diff', so 
> patches should be tested using 'git am' or 'git apply', not 'patch -pX'. This 
> causes false errors in the Jenkins patch tester.





[jira] [Resolved] (HBASE-16019) Cut HBase 1.2.2 release

2016-09-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-16019.
-
Resolution: Fixed

this got finished some time ago (we've even had a 1.2.3 since). not sure what I 
was waiting for. maybe the announce email?

> Cut HBase 1.2.2 release
> ---
>
> Key: HBASE-16019
> URL: https://issues.apache.org/jira/browse/HBASE-16019
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.2.2
>
>






Need hand-holding for 1.1.7 release management

2016-09-26 Thread Misty Stanley-Jones
Is anyone around who can help me with some of the release management
steps? I think I have updated the KEYS file correctly, but now I am
stuck on what my ~/.m2/settings.xml should look like. I think I'm most
of the way there...


[jira] [Created] (HBASE-16709) Drop hadoop-1.1 profile in pom.xml for master branch

2016-09-26 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16709:
--

 Summary: Drop hadoop-1.1 profile in pom.xml for master branch
 Key: HBASE-16709
 URL: https://issues.apache.org/jira/browse/HBASE-16709
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Currently the following modules have hadoop-1.1 profile in pom.xml:
{code}
  hadoop-1.1
./hbase-client/pom.xml
  hadoop-1.1
./hbase-common/pom.xml
 hadoop-1.1
./hbase-examples/pom.xml
  hadoop-1.1
./hbase-external-blockcache/pom.xml
  hadoop-1.1
./hbase-it/pom.xml
  hadoop-1.1
./hbase-prefix-tree/pom.xml
  hadoop-1.1
./hbase-procedure/pom.xml
  hadoop-1.1
./hbase-server/pom.xml
  hadoop-1.1
./hbase-shell/pom.xml
hadoop-1.1
./hbase-testing-util/pom.xml
  hadoop-1.1
./hbase-thrift/pom.xml
{code}
hadoop-1.1 profile can be dropped in the above pom.xml for hbase 2.0





[jira] [Created] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-09-26 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-16708:


 Summary: Expose endpoint Coprocessor name in "responseTooSlow" log 
messages
 Key: HBASE-16708
 URL: https://issues.apache.org/jira/browse/HBASE-16708
 Project: HBase
  Issue Type: Improvement
Reporter: Nick Dimiduk
 Fix For: 1.1.2


Operational diagnostics of a Phoenix install would be easier if we included 
which endpoint coprocessor was being called in this responseTooSlow WARN 
message.





Successful: HBase Generate Website

2016-09-26 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. To update the live 
site, follow the instructions below. If failed, skip to the bottom of this 
email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git

  cd hbase-site
  wget -O- 
https://builds.apache.org/job/hbase_generate_website/356/artifact/website.patch.zip
 | funzip > 5f7e642fed2e393831f630233e93bd20801ec70a.patch
  git fetch
  git checkout -b asf-site-5f7e642fed2e393831f630233e93bd20801ec70a 
origin/asf-site
  git am --whitespace=fix 5f7e642fed2e393831f630233e93bd20801ec70a.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-5f7e642fed2e393831f630233e93bd20801ec70a branch.

There are lots of spurious changes, such as timestamps and CSS styles in 
tables, so a generic git diff is not very useful. To see a list of files that 
have been added, deleted, renamed, changed type, or are otherwise interesting, 
use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

  git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these 
commands:

  git commit --allow-empty -m "Empty commit" # to work around a current ASF 
INFRA bug
  git push origin asf-site-5f7e642fed2e393831f630233e93bd20801ec70a:asf-site
  git checkout asf-site
  git branch -D asf-site-5f7e642fed2e393831f630233e93bd20801ec70a

Changes take a couple of minutes to be propagated. You can verify whether they 
have been propagated by looking at the Last Published date at the bottom of 
http://hbase.apache.org/. It should match the date in the index.html on the 
asf-site branch in Git.

As a courtesy, reply-all to this email to let other committers know you pushed 
the site.



If failed, see https://builds.apache.org/job/hbase_generate_website/356/console

[jira] [Created] (HBASE-16707) [Umbrella] Improve throttling feature for production usage

2016-09-26 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-16707:
--

 Summary: [Umbrella] Improve throttling feature for production usage
 Key: HBASE-16707
 URL: https://issues.apache.org/jira/browse/HBASE-16707
 Project: HBase
  Issue Type: Umbrella
Reporter: Guanghao Zhang


HBASE-11598 added the rpc throttling feature and did great initial work there. 
We plan to use throttling in our production cluster and made some improvements 
to it. From the user mail list, I found that there are other users of the 
throttling feature, too. I think it is time to contribute our work to the 
community, including:
1. Add a shell cmd to start/stop throttling.
2. Add metrics for throttled requests.
3. Basic UI support in master/regionserver.
4. Handle the throttling exception in the client.
5. Add more throttle types: like DynamoDB, use read/write capacity units to 
throttle.
6. Support a soft limit: a user can over-consume their quota when the 
regionserver has available capacity because other users are not consuming at 
the same time.
7. ... ...

I think some of these improvements are useful, so I am opening an umbrella 
issue to track them. Suggestions and discussions are welcome.
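Item 6 above (the soft limit) can be sketched as a small capacity-unit throttle: a user within quota is always admitted, and an over-quota user borrows from a server-wide spare pool while it lasts. All names and numbers are illustrative assumptions, not the HBase quota implementation.

```java
// Sketch of a capacity-unit throttle with a soft limit.
public class SoftLimitThrottle {
    private final long userQuota;  // units guaranteed to this user per interval
    private long userUsed = 0;     // units consumed from the user's own quota
    private long sharedSpare;      // regionserver-wide spare units this interval

    SoftLimitThrottle(long userQuota, long sharedSpare) {
        this.userQuota = userQuota;
        this.sharedSpare = sharedSpare;
    }

    // Try to consume read/write capacity units. Within quota: always allowed.
    // Over quota (the soft limit): allowed only while the shared spare pool
    // has capacity left, from which the overflow is borrowed.
    synchronized boolean tryConsume(long units) {
        long fromQuota = Math.min(units, Math.max(0, userQuota - userUsed));
        long fromSpare = units - fromQuota;
        if (fromSpare > sharedSpare) {
            return false; // would exceed both the user quota and the spare pool
        }
        userUsed += fromQuota;
        sharedSpare -= fromSpare;
        return true;
    }

    public static void main(String[] args) {
        // 100 units/interval user quota, 50 spare units on the server.
        SoftLimitThrottle t = new SoftLimitThrottle(100, 50);
        System.out.println("within quota: " + t.tryConsume(80));    // 80 from quota
        System.out.println("soft overdraft: " + t.tryConsume(40));  // 20 quota + 20 spare
        System.out.println("spare exhausted: " + t.tryConsume(40)); // only 30 spare left
        System.out.println("remaining spare: " + t.tryConsume(30)); // all from spare
    }
}
```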





[jira] [Created] (HBASE-16706) Allow users to have Custom tags on Cells

2016-09-26 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-16706:
--

 Summary: Allow users to have Custom tags on Cells
 Key: HBASE-16706
 URL: https://issues.apache.org/jira/browse/HBASE-16706
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0


The codec-based stripping of tags was done as a temporary solution to avoid 
passing critical system tags from the server back to the client. This also 
imposes the limitation that tags cannot be used by users; tags are a 
system-side feature alone. In the past there were some questions in user@ 
about using custom tags.
We should allow users to set tags on Cells and pass them on write. These 
custom tags must also be returned to users (irrespective of codec and all). 
The system tags (like ACL, visibility) should not get transferred between 
client and server. And when the client is run by a super user, we should pass 
all tags (including system tags). This way we can make sure that all tags are 
passed during replication and that a tool like Export gets all tags.
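The filtering rule described above can be sketched as follows; the tag types and method names are hypothetical stand-ins, not the actual HBase Tag API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Custom tags always flow back to the client; system tags (ACL, visibility)
// are returned only to a super user, so replication and Export (run as super
// user) see everything.
public class TagFilterSketch {
    enum TagType { ACL, VISIBILITY, CUSTOM }

    static final class Tag {
        final TagType type;
        final String value;
        Tag(TagType type, String value) { this.type = type; this.value = value; }
    }

    static boolean isSystemTag(Tag t) {
        return t.type == TagType.ACL || t.type == TagType.VISIBILITY;
    }

    static List<Tag> tagsForClient(List<Tag> cellTags, boolean superUser) {
        if (superUser) {
            return cellTags; // super user: pass all tags, system ones included
        }
        List<Tag> out = new ArrayList<>();
        for (Tag t : cellTags) {
            if (!isSystemTag(t)) {
                out.add(t); // normal user: custom tags only
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Tag> tags = Arrays.asList(
            new Tag(TagType.ACL, "rwx"),
            new Tag(TagType.VISIBILITY, "secret"),
            new Tag(TagType.CUSTOM, "source=app1"));
        System.out.println("normal user sees: " + tagsForClient(tags, false).size());
        System.out.println("super user sees: " + tagsForClient(tags, true).size());
    }
}
```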


