[jira] [Created] (HBASE-11174) show backup/restore progress

2014-05-16 Thread Demai Ni (JIRA)
Demai Ni created HBASE-11174:


 Summary: show backup/restore progress
 Key: HBASE-11174
 URL: https://issues.apache.org/jira/browse/HBASE-11174
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.99.0
Reporter: Demai Ni
 Fix For: 0.99.0


h2. Feature Description
This JIRA is part of 
[HBASE-7912|https://issues.apache.org/jira/browse/HBASE-7912] and depends on 
full backup [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900] 
and incremental backup 
[HBASE-11085|https://issues.apache.org/jira/browse/HBASE-11085]. For the 
detailed layout and framework, please refer to 
[HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900].

A backup/restore operation may take a while to complete, sometimes hours. It 
will be helpful to show the estimated progress as a percentage to the user. 
This JIRA will provide that functionality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10661) TestStochasticLoadBalancer.testRegionReplicationOnMidClusterWithRacks() is flaky

2014-05-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-10661.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks Devaraj. I've committed this to branch. 

> TestStochasticLoadBalancer.testRegionReplicationOnMidClusterWithRacks() is 
> flaky
> 
>
> Key: HBASE-10661
> URL: https://issues.apache.org/jira/browse/HBASE-10661
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: hbase-10661_v1.patch, hbase-10661_v2.patch, 
> hbase-10661_v3.patch, hbase-10661_v4.patch, hbase-10661_v5.patch, 
> hbase-10661_v6.patch
>
>
> One of the tests introduced in HBASE-10351 seems to be flaky. The LB cannot 
> compute the full assignment plan in time when there are racks and region 
> replicas for the test, so it is failing sometimes. 
> We can reduce the computation amount and increase the LB runtime to make the 
> test stable. 





[jira] [Created] (HBASE-11188) "Inconsistent configuration" for SchemaMetrics is always shown

2014-05-16 Thread Jean-Daniel Cryans (JIRA)
Jean-Daniel Cryans created HBASE-11188:
--

 Summary: "Inconsistent configuration" for SchemaMetrics is always 
shown
 Key: HBASE-11188
 URL: https://issues.apache.org/jira/browse/HBASE-11188
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.94.19
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.94.20
 Attachments: HBASE-11188-0.94-v2.patch, HBASE-11188-0.94.patch

Some users have been complaining about this message:

{noformat}
ERROR org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics: Inconsistent 
configuration. Previous configuration for using table name in metrics: true, 
new configuration: false
{noformat}

The interesting thing is that we see it with default configurations, which made 
me think that some code path must have been passing the wrong thing. I found 
that if SchemaConfigured is passed a null Configuration in its constructor, 
it will then pass null to SchemaMetrics#configureGlobally, which will 
interpret useTableName as being false:

{code}
  public static void configureGlobally(Configuration conf) {
if (conf != null) {
  final boolean useTableNameNew =
  conf.getBoolean(SHOW_TABLE_NAME_CONF_KEY, false);
  setUseTableName(useTableNameNew);
} else {
  setUseTableName(false);
}
  }
{code}

It should be set to true since that's the new default, meaning we missed it in 
HBASE-5671.

I found one code path that passes a null configuration, StoreFile.Reader 
extends SchemaConfigured and uses the constructor that only passes a Path, so 
the Configuration is set to null.

I'm planning on just passing true instead of false, fixing the problem for 
almost everyone (those that disable this feature will get the error message). 
IMO it's not worth more effort since it's a 0.94-only problem and it's not 
actually doing anything bad.
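The planned fix amounts to flipping the fallback (and the getBoolean default) from false to true. A minimal stand-alone sketch of that behavior follows; this is illustrative only, with a plain Map standing in for the Hadoop Configuration object, and is not the actual patch:

```java
import java.util.Map;

// Sketch only: a Map stands in for org.apache.hadoop.conf.Configuration.
public class SchemaMetricsSketch {
    static final String SHOW_TABLE_NAME_CONF_KEY = "hbase.metrics.showTableName";
    private static boolean useTableName;

    public static void configureGlobally(Map<String, String> conf) {
        if (conf != null) {
            // Default to true, matching the post-HBASE-5671 default.
            useTableName = Boolean.parseBoolean(
                conf.getOrDefault(SHOW_TABLE_NAME_CONF_KEY, "true"));
        } else {
            // A null Configuration now also yields the true default, so a
            // path-only StoreFile.Reader constructor no longer flips it.
            useTableName = true;
        }
    }

    public static boolean useTableName() {
        return useTableName;
    }
}
```

With this shape, only a cluster that explicitly sets the key to false ever sees the "inconsistent configuration" message, which matches the intent described above.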

I'm closing both HBASE-10990 and HBASE-10946 as duplicates.





[jira] [Created] (HBASE-11149) Wire encryption is broken

2014-05-16 Thread Devaraj Das (JIRA)
Devaraj Das created HBASE-11149:
---

 Summary: Wire encryption is broken
 Key: HBASE-11149
 URL: https://issues.apache.org/jira/browse/HBASE-11149
 Project: HBase
  Issue Type: Bug
  Components: IPC/RPC
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.99.0
 Attachments: 11149-1.txt

Upon some testing with the QOP configuration (hbase.rpc.protection), I 
discovered that RPC doesn't work with the "integrity" and "privacy" values for 
the configuration key. I was using 0.98.x for testing, but I believe the issue 
is present in trunk as well (I haven't checked 0.96 and 0.94).





Re: DISCUSS: We need a mascot, a totem HBASE-4920

2014-05-16 Thread Jean-Marc Spaggiari
My 2 ¢... Teeth mean aggressiveness; to me it gives a negative impression. I
tend to prefer something like
http://4.bp.blogspot.com/_HBjr0PcdZW4/TIge5Uok9eI/A8Q/0AdMNtuhRaY/s400/pm006-orca7.png.

JM


2014-05-13 20:57 GMT-04:00 Stack :

> Any more comments here?  I'd like to go back to our team w/ some feedback
>  (Enis just commented up on the issue that he is not mad about the evil
> grin -- I can ask them to do something about this)
>
> St.Ack
>
>
> On Wed, May 7, 2014 at 12:04 PM, Stack  wrote:
>
> > The design team at my place of work put together a few variations on an
> > Orca with an evil grin.  I posted their combos up on
> > https://issues.apache.org/jira/browse/HBASE-4920 (see the three most
> > recent).  They can work on cleanup -- e.g. making the details look good
> > at a small scale and do up the various logo/image combinations -- but are
> > you lot good w/ this general direction?
> >
> > Enis used the following in his 1.0 slides during the release managers
> > session at hbasecon and it looked alright to me (maybe it was the big
> > screen that did it) but it needs work to make it look good at a small
> > scale:
> >
> http://depositphotos.com/2900573/stock-illustration-Killer-whale-tattoo.html (or
> >
> http://4.bp.blogspot.com/_HBjr0PcdZW4/TIge5Uok9eI/A8Q/0AdMNtuhRaY/s400/pm006-orca7.png
> )
> > Our designers could clean up this one too?
> >
> > Nkeywal voted up this one
> >
> > https://issues.apache.org/jira/secure/attachment/12511412/HBase%20Orca%20Logo.jpg
> > and I can ask our designers to look at this as well.
> >
> > Feedback appreciated.
> >
> > St.Ack
> > P.S. If you can't tell, I'm trying to avoid an absolute vote run over
> > some selection of rough images.  It will not be a comparison of apples to
> > apples since the images are unfinished and most are without a 'setting'.
> > Rather, I'm trying to narrow the options and then have us give feedback
> > to a couple of ready and willing professionals who can interpret our
> > concerns in a language they are expert in (and in which we are not ...
> > IANAGD).
> >
> >
> >
> >
> > On Wed, Mar 5, 2014 at 11:22 AM, Stack  wrote:
> >
> >> On Wed, Mar 5, 2014 at 2:18 AM, Nicolas Liochon wrote:
> >>
> >>> Yes, let's decide first on the animal itself (well we're done it
> >>> seems), and use another discussion thread for the picture.
> >>>
> >>>
> >> Agreed.
> >>
> >> We've decided on the mascot (Hurray!).  Now for the representation.
> >> Will do in another thread.  I thought we could skirt this issue, but your
> >> reviving an image I'd passed over, Jimmy's concern w/ the suggested one,
> >> my difficulty fitting the image as-is against our current logo, and an
> >> offline conversation with our other Nick make it plain we are going to
> >> deal head-on with how it is represented.
> >>
> >> I'll be back with some thing for folks to vote on (will rope in a few of
> >> the interested parties composing a set).
> >>
> >> St.Ack
> >>
> >
> >
>


[jira] [Created] (HBASE-11194) [AccessController] issue with covering permission check in case of concurrent op on same row

2014-05-16 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-11194:
--

 Summary: [AccessController] issue with covering permission check 
in case of concurrent op on same row
 Key: HBASE-11194
 URL: https://issues.apache.org/jira/browse/HBASE-11194
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3


The issue is that the hook where we do the check runs before we have acquired 
the row lock. Take the case of delete: we do the check in the preDelete() 
hook, where we get the covering cells and check against their ACLs. At the 
point of the preDelete hook, we have not yet acquired the row lock on the row 
being deleted.

Consider two parallel threads, one doing a put and the other a delete, both 
dealing with the same row.
Thread 1 has acquired the row lock, decided the TS (HRS time), and is doing 
the memstore write and HLog sync, but the MVCC read point is NOT yet advanced.
Thread 2, at the same time, is doing the delete of the row (say with the 
latest TS; the intent is to delete the entire row) and is in the preDelete 
hook. No row locking has happened at this point. As part of the covering 
permission check, it does a Get. But as said above, the put is not complete 
and the MVCC advance has not happened, so the Get won't return the new cell; 
it will return the old cells, and the check passes for the old cells. Now 
suppose the new cell's ACL does not match for the deleting user. Because that 
cell was not read, it was never checked, so the ACL check will allow the user 
to delete the row. The flow later reaches HRegion#doMiniBatchMutate(), tries 
to acquire the row lock, and by that time the Thread 1 op is over. So it will 
get the lock and add the delete tombstone. As a result, the cell for which 
the deleting user has no ACL right also gets deleted.
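The race can be condensed into a toy single-threaded simulation (all names, the map-based "row", and the substring ACL check below are invented for illustration; this is not HBase code): the covering-permission check reads a snapshot that predates the in-flight put, so the new cell's ACL is never consulted.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: a row is a map of cell name -> comma-separated ACL string.
public class CoveringCheckRace {
    // Covering check: the user must appear in every visible cell's ACL.
    // (Toy substring check; real ACL matching is more involved.)
    public static boolean deleteAllowed(Map<String, String> visibleCells,
                                        String user) {
        return visibleCells.values().stream().allMatch(acl -> acl.contains(user));
    }

    public static void main(String[] args) {
        Map<String, String> row = new HashMap<>();
        row.put("cf:q1", "userA,userB");
        // Thread 1's put is written but not yet MVCC-visible, so Thread 2's
        // Get sees only this stale snapshot:
        Map<String, String> snapshotBeforePut = new HashMap<>(row);
        row.put("cf:q2", "userA");  // new cell that userB may not touch

        // Thread 2's preDelete check against the stale snapshot passes,
        boolean staleCheck = deleteAllowed(snapshotBeforePut, "userB");
        // but the same check against the row's current state would deny it.
        boolean lockedCheck = deleteAllowed(row, "userB");
        System.out.println(staleCheck + " " + lockedCheck);  // prints "true false"
    }
}
```

The gap between the two booleans is exactly the window described above: only a check performed under the row lock, after the competing put is visible, sees the cell whose ACL should have blocked the delete.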






Re: On coprocessor API evolution

2014-05-16 Thread Stack
On Wed, May 14, 2014 at 6:13 PM, Andrew Purtell  wrote:

> Because coprocessor APIs are so tightly bound with internals, if we apply
> suggested rules like as mentioned on HBASE-11054:
>
>   I'd say policy should be no changes to method apis across minor
> versions
>
> This will lock coprocessor based components to the limitations of the API
> as we encounter them. Core code does not suffer this limitation, we are
> otherwise free to refactor and change internal methods. For example, if we
> apply this policy to the 0.98 branch, then we will have to abandon further
> security feature development there and move to trunk only. This is because
> we already are aware that coprocessor APIs as they stand are insufficient
> still.
>
>
The above quote is mine.

I had just read a user mail on the phoenix list where someone thought that
phoenix had been broken going from 0.98.1 to 0.98.2 (apparently it's fine).

Let's write up an agreement.  We've talked about this topic a bunch.

1. No guarantees minor version to minor version?  APIs can be broken any
time.
2. API may change across minor versions but SHOULD not break existing users
(No guarantees!)?
3. API may change across minor versions but WILL not break existing users?

Any other permutations? (I'm good w/ #2 and #3.  #1 will just make it so no
one will use CPs).



> Coprocessor APIs are a special class of internal method. We have had a
> tension between allowing freedom of movement for developing them out and
> providing some measure of stability for implementors for a while.
>
> It is my belief that the way forward is something like HBASE-11125. Perhaps
> we can take this discussion to that JIRA and have this long overdue
> conversation.
>
>
Sounds good.



> Regarding security features specifically, I would also like to call your
> attention to HBASE-11127. I think security has been an optional feature
> long enough, it is becoming a core requirement for the project, so should
> be moved into core. Sure, we can therefore sidestep any issues with
> coprocessor API sufficiency for hosting security features. However, in my
> opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> provide the relative stability long asked for by coprocessor API users, the
> latter to cleanly solve emerging issues with concurrency and versioning.
>

+1 on HBASE-11127.
St.Ack


[jira] [Created] (HBASE-11193) hbase web UI shows wrong Catalog Table Description

2014-05-16 Thread Guo Ruijing (JIRA)
Guo Ruijing created HBASE-11193:
---

 Summary: hbase web UI shows wrong Catalog Table Description
 Key: HBASE-11193
 URL: https://issues.apache.org/jira/browse/HBASE-11193
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Guo Ruijing


On a secure cluster, check the HBase master web page and look at 'Catalog 
Tables' in the 'Tables' section; the Description shown for the 'hbase:acl' 
table is not what is expected:

Table Name Description
*hbase:acl The .NAMESPACE. table holds information about namespaces.*
hbase:meta The hbase:meta table holds references to all User Table regions
hbase:namespace The .NAMESPACE. table holds information about namespaces





Re: On coprocessor API evolution

2014-05-16 Thread Nicolas Liochon
Hi,

(With Apache still lagging on mails, it may be difficult to have a
discussion...)

For 1.0+, I think that registering observers as proposed in 11125 works well.
For 0.98, could we do something like this?
 - new coprocessor hooks can be added between minor releases
 - existing coprocessor hooks are not removed between minor releases
 - a coprocessor can extend the default implementation; binary
compatibility when migrating to a newer minor release is ensured.
 - a coprocessor can implement the interface directly, but in this case the
application needs to be updated and recompiled between minor releases.
 - new hooks are always tagged with @since. This helps the coprocessor
developer if he needs to support multiple minor versions.
 - between major releases, anything can happen.

fwiw, Java 8 supports default implementations in interfaces:
http://docs.oracle.com/javase/tutorial/java/IandI/defaultmethods.html
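As a sketch of the compatibility pattern described above (the interface and hook names below are invented, not actual HBase API): a hook added in a minor release gets a no-op default body, so a coprocessor compiled against the older interface keeps compiling and linking without change.

```java
// Hypothetical observer interface; names are illustrative only.
interface RegionObserverV2 {
    void prePut();  // hook present since the first release

    // New hook added in a minor release. The Java 8 default body means
    // implementors written against the old interface need no source or
    // binary change; they simply inherit the no-op behavior.
    default boolean preBatchMutate() {
        return false;  // no-op default: do not veto the batch
    }
}

class LegacyCoprocessor implements RegionObserverV2 {
    // Written against the old, single-hook interface; still a valid
    // RegionObserverV2 because the new hook has a default implementation.
    @Override
    public void prePut() { }
}
```

Pre-Java-8 branches like 0.98 would get the same effect from a no-op abstract base class that coprocessors extend, which is exactly the "extend the default implementation" rule above.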

Cheers,

Nicolas





On Thu, May 15, 2014 at 3:13 AM, Andrew Purtell  wrote:

> Because coprocessor APIs are so tightly bound with internals, if we apply
> suggested rules like as mentioned on HBASE-11054:
>
>   I'd say policy should be no changes to method apis across minor
> versions
>
> This will lock coprocessor based components to the limitations of the API
> as we encounter them. Core code does not suffer this limitation, we are
> otherwise free to refactor and change internal methods. For example, if we
> apply this policy to the 0.98 branch, then we will have to abandon further
> security feature development there and move to trunk only. This is because
> we already are aware that coprocessor APIs as they stand are insufficient
> still.
>
> Coprocessor APIs are a special class of internal method. We have had a
> tension between allowing freedom of movement for developing them out and
> providing some measure of stability for implementors for a while.
>
> It is my belief that the way forward is something like HBASE-11125. Perhaps
> we can take this discussion to that JIRA and have this long overdue
> conversation.
>
> Regarding security features specifically, I would also like to call your
> attention to HBASE-11127. I think security has been an optional feature
> long enough, it is becoming a core requirement for the project, so should
> be moved into core. Sure, we can therefore sidestep any issues with
> coprocessor API sufficiency for hosting security features. However, in my
> opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> provide the relative stability long asked for by coprocessor API users, the
> latter to cleanly solve emerging issues with concurrency and versioning.
>
>
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>


[jira] [Created] (HBASE-11192) HBase ClusterId File Empty Check Logic

2014-05-16 Thread sunjingtao (JIRA)
sunjingtao created HBASE-11192:
--

 Summary: HBase ClusterId File Empty Check Logic
 Key: HBASE-11192
 URL: https://issues.apache.org/jira/browse/HBASE-11192
 Project: HBase
  Issue Type: Bug
 Environment: HBase 0.94+Hadoop2.2.0+Zookeeper3.4.5
Reporter: sunjingtao


If the clusterid file exists but is empty, then the following check logic in 
MasterFileSystem.java has no effect:
{code}
if (!FSUtils.checkClusterIdExists(fs, rd, c.getInt(
    HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000))) {
  FSUtils.setClusterId(fs, rd, UUID.randomUUID().toString(), c.getInt(
      HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000));
}
clusterId = FSUtils.getClusterId(fs, rd);
{code}
because the checkClusterIdExists method only checks that the path exists:
{code}
Path filePath = new Path(rootdir, HConstants.CLUSTER_ID_FILE_NAME);
return fs.exists(filePath);
{code}

In my case the file exists but is empty, so the cluster id read back is null, 
which causes a NullPointerException:

{noformat}
java.lang.NullPointerException
at org.apache.hadoop.hbase.util.Bytes.toBytes(Bytes.java:441)
at 
org.apache.hadoop.hbase.zookeeper.ClusterId.setClusterId(ClusterId.java:72)
at 
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:581)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:433)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Is this a bug? Please confirm!
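The stricter check the report implies would look roughly like this (a sketch only: plain java.nio stands in for the Hadoop FileSystem API, and the method name is made up): treat a zero-length cluster-id file the same as a missing one, so a fresh id gets written in both cases.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClusterIdCheck {
    // Hypothetical replacement for the exists-only check: the file must
    // both exist and contain at least one byte to be considered usable.
    public static boolean clusterIdUsable(Path file) throws IOException {
        return Files.exists(file) && Files.size(file) > 0;
    }
}
```

If the master used a check of this shape before reading the id, the empty-file case would fall into the same "write a new UUID" branch as the missing-file case, avoiding the NPE above.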





[jira] [Created] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2014-05-16 Thread churro morales (JIRA)
churro morales created HBASE-11195:
--

 Summary: Potentially improve block locality during major 
compaction for old regions
 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.19
Reporter: churro morales


This might be a specific use case, but we have some regions which are no 
longer written to (due to the key). Those regions have one store file, and 
they are very old; they haven't been written to in a while. We still read 
from these regions, so locality would be nice.

I propose adding a configuration option: something like
hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]

such that you can decide whether or not to skip major compaction for an old 
region with a single store file.

I'll attach a patch, let me know what you guys think.
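The proposed gate could look roughly like this (a sketch under assumptions: method and parameter names are made up, and in a real 0.94 patch the decision would live in the compaction-selection code): skip the major compaction only when the region is down to a single old store file whose current HDFS locality already meets the configured threshold.

```java
public class MajorCompactionGate {
    // localityFraction is the store file's current HDFS block locality in
    // [0, 1]; minLocalityToSkip corresponds to the proposed
    // hbase.hstore.min.locality.to.skip.major.compact setting.
    public static boolean skipMajorCompaction(int storeFileCount,
                                              double localityFraction,
                                              double minLocalityToSkip) {
        // Only single-file regions qualify: re-compacting one file rewrites
        // the same data, so it is only worthwhile when it would actually
        // improve locality (i.e. locality is below the threshold).
        return storeFileCount == 1 && localityFraction >= minLocalityToSkip;
    }
}
```

Setting the threshold to 0 would reproduce today's behavior of always skipping eligible single-file regions, while 1 would force a rewrite unless locality is perfect.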





[jira] [Created] (HBASE-11144) Filter to support scan multiple row key ranges

2014-05-16 Thread Li Jiajia (JIRA)
Li Jiajia created HBASE-11144:
-

 Summary: Filter to support scan multiple row key ranges
 Key: HBASE-11144
 URL: https://issues.apache.org/jira/browse/HBASE-11144
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Reporter: Li Jiajia
 Attachments: MultiRowRangeFilter.patch

A filter to support scanning multiple row key ranges. It constructs the row 
key ranges from the passed list, which can be accessed by each region server. 






[jira] [Resolved] (HBASE-11184) why initialize the thread pool with one thread for closing store files

2014-05-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-11184.
---

Resolution: Invalid

Resolving as invalid.

This kind of question best belongs on the user or dev lists.

Yeah, the notion is that you can ask to run with more closers.  We don't list 
every config in the hbase-default.xml because it would overwhelm and confuse.  
Some configs are in code only.  If you have gone to the trouble of reading the 
code, then you'll likely understand what the particular minor config can do for 
you.

> why initialize the thread pool with one thread for closing store files
> --
>
> Key: HBASE-11184
> URL: https://issues.apache.org/jira/browse/HBASE-11184
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Affects Versions: 0.94.17
>Reporter: Zhang Jingpeng
>Priority: Trivial
>
> When I read the Store close() method, I found that the thread pool for 
> closing store files has only one thread.
> Why initialize the thread pool with only one thread for closing store files?





hbase type encodings discussion part 2.

2014-05-16 Thread Jonathan Hsieh
Below is a summary of a follow-up conversation to the previous pow-wow
[1] at the post-hbasecon 2014 hackathon about an interoperable proposed
encoding scheme for storing typed data in hbase. [2]  Raw notes available
here [3].

Thanks,
Jon.



5/14/14
Attendees via phone: Nick Dimiduk, Ryan Blue, Jon Hsieh, Michael Stack,
Enis Soztutar, and James Taylor.

The group decided to first define requirements for the encoding. The group
recommends these as requirements for the chosen value encoding.

1) must have a memcomparable rowkey.
2) must have a null value distinct from the empty string in the row key
3) must be able to add nullable fields to the end of the primary key
4) must either have
  a) indexable fields must be nullable, or
  b) any type that doesn't support nullability must be translatable to a
type that does without data loss (e.g. a fixed-width int translates to a
nullable numeric)
5) all char types will be stored as variable-length binary (to support chars
that are >1 byte)
6) fixed-length binary values (e.g. md5's) should be a special case but
supported.  Caveat emptor: if you lose your schema, it's your fault
(won't be able to decipher without the schema).

Discussion: varbinary in-key encoding options:
1) single \0 byte terminator with no \0 allowed (phoenix style)
2) run length encoded \0's with two byte terminator (proposed)
3) 8 bytes for every 7 bytes "varblob" encoding (ordered bytes style)
Recommendation: run-length encoded \0 with a two-byte terminator (handles
nulls, easily human readable, likely low overhead in the common case).
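For context, here is a sketch of a well-known memcomparable escaping scheme from the same family as the options above (it is not necessarily the exact run-length variant in option 2): each embedded 0x00 is escaped as 0x00 0xFF, and the field ends with the two-byte terminator 0x00 0x00, so embedded nulls are allowed and unsigned byte-wise comparison of encoded fields matches comparison of the raw values.

```java
import java.io.ByteArrayOutputStream;

public class VarbinaryKeyEncoding {
    // Escape embedded 0x00 as 0x00 0xFF; terminate with 0x00 0x00. The
    // terminator (00 00) sorts below any escaped null (00 FF) and below any
    // nonzero continuation byte, so a value that is a prefix of another
    // still encodes strictly smaller, preserving lexicographic order.
    public static byte[] encode(byte[] value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte b : value) {
            out.write(b);
            if (b == 0x00) {
                out.write(0xFF);  // escape the embedded null
            }
        }
        out.write(0x00);  // field terminator, byte 1
        out.write(0x00);  // field terminator, byte 2
        return out.toByteArray();
    }
}
```

The empty field encodes to just the terminator (00 00), which keeps "empty" representable and leaves room to encode null distinctly, per requirement 2 above.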

Discussion: multipart key encodings
1) tagged bytes - includes field position + type tag
2) type ordinals - ordered bytes encodings.
Recommendation: use the position + type tag approach.
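On the wire, the recommended position + type tag approach could look roughly like this (a sketch only: the layout and tag values are invented for illustration, and a real proposal would need to keep the tagged form memcomparable):

```java
import java.io.ByteArrayOutputStream;

public class TaggedKeyField {
    // Prefix each key field with its ordinal position and a type tag, so a
    // decoder can walk a multipart key without consulting external schema
    // for field order or per-field type.
    public static byte[] encodeField(int position, byte typeTag, byte[] payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(position);  // field ordinal, 0-based, assumed < 256
        out.write(typeTag);   // e.g. a hypothetical 0x01 = varlength string
        out.write(payload, 0, payload.length);
        return out.toByteArray();
    }
}
```

A full multipart key would then be the concatenation of such fields, with each payload itself in a self-delimiting (e.g. terminated varbinary) encoding so field boundaries are recoverable.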


Discussion: data type api:
- we like the goal of the data type api (HBASE-8693), will try to use api
for proposed key encoding api.
- the arbitrary precision numeric type provides advantages.
- will try to implement encodings by modding or patterning off of the
OrderedBytes implementation.
- jon to try plumbing a type through the data types api and into phoenix to
enable existing phoenix queries to see end-to-end perf impact.

Remaining topics and followup:
- how to handle "complex primitives" such as datetime, decimal and bigint.
- more discussion on list.
- plan to present updates and continue discussion at hadoop summit hbase
bof session thursday 6/5 in san jose [4]

[1]
http://mail-archives.us.apache.org/mod_mbox/hbase-dev/201405.mbox/%3CCAAha9a3WQm7cbSAMHifb_e25fSrwYHutqtxnKi9rxOrn5w%2BMVA%40mail.gmail.com%3E
[2]
https://docs.google.com/a/cloudera.com/document/d/15INOaxyifycpFxvB6xNdoj96JbOpslEywjkaoG8MURE/edit#heading=h.o1cgqtsqgqyg
[3]
https://docs.google.com/document/d/1BJooXphOduuPJHEfd3dAeF9Md3Zp2kRG31WExI4l88E/edit#
[4]
http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/?fromJoin=179081342

-- 
// Jonathan Hsieh (shay)
// HBase Tech Lead, Software Engineer, Cloudera
// j...@cloudera.com // @jmhsieh


[jira] [Created] (HBASE-11190) Fix easy typos in documentation

2014-05-16 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-11190:
---

 Summary: Fix easy typos in documentation
 Key: HBASE-11190
 URL: https://issues.apache.org/jira/browse/HBASE-11190
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
Priority: Trivial
 Attachments: HBASE-11190.patch







[jira] [Created] (HBASE-11196) Update description of -ROOT- in ref guide

2014-05-16 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-11196:
---

 Summary: Update description of -ROOT- in ref guide
 Key: HBASE-11196
 URL: https://issues.apache.org/jira/browse/HBASE-11196
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Dima Spivak


Since the resolution of 
[HBASE-3171|https://issues.apache.org/jira/browse/HBASE-3171], -ROOT- is no 
longer used to store the location(s) of .META. . Unfortunately, not all of [our 
documentation|http://hbase.apache.org/book/arch.catalog.html] has been updated 
to reflect this change in architecture.





Re: On coprocessor API evolution

2014-05-16 Thread Michael Segel
Until you move the coprocessor out of the RS space and into its own sandbox… 
saying security and coprocessor in the same sentence is a joke. 
Oh wait… you were serious… :-(

I’d say there’s a significant rethink on coprocessors that’s required.

Anyone running a secure (kerberos) cluster will want to allow system 
coprocessors but then write a coprocessor that rejects user coprocessors. 

Just putting it out there… 

On May 15, 2014, at 2:13 AM, Andrew Purtell  wrote:

> Because coprocessor APIs are so tightly bound with internals, if we apply
> suggested rules like as mentioned on HBASE-11054:
> 
>  I'd say policy should be no changes to method apis across minor
> versions
> 
> This will lock coprocessor based components to the limitations of the API
> as we encounter them. Core code does not suffer this limitation, we are
> otherwise free to refactor and change internal methods. For example, if we
> apply this policy to the 0.98 branch, then we will have to abandon further
> security feature development there and move to trunk only. This is because
> we already are aware that coprocessor APIs as they stand are insufficient
> still.
> 
> Coprocessor APIs are a special class of internal method. We have had a
> tension between allowing freedom of movement for developing them out and
> providing some measure of stability for implementors for a while.
> 
> It is my belief that the way forward is something like HBASE-11125. Perhaps
> we can take this discussion to that JIRA and have this long overdue
> conversation.
> 
> Regarding security features specifically, I would also like to call your
> attention to HBASE-11127. I think security has been an optional feature
> long enough, it is becoming a core requirement for the project, so should
> be moved into core. Sure, we can therefore sidestep any issues with
> coprocessor API sufficiency for hosting security features. However, in my
> opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> provide the relative stability long asked for by coprocessor API users, the
> latter to cleanly solve emerging issues with concurrency and versioning.
> 
> 
> -- 
> Best regards,
> 
>   - Andy
> 
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)



[jira] [Resolved] (HBASE-11188) "Inconsistent configuration" for SchemaMetrics is always shown

2014-05-16 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-11188.


  Resolution: Fixed
Release Note: 
Region servers with the default value for hbase.metrics.showTableName will stop 
showing the error message "ERROR 
org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics: Inconsistent 
configuration. Previous configuration for using table name in metrics: true, 
new configuration: false".
Region servers configured with hbase.metrics.showTableName=false should now get 
a message like this one: "ERROR 
org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics: Inconsistent 
configuration. Previous configuration for using table name in metrics: false, 
new configuration: true", and it's nothing to be concerned about.
Hadoop Flags: Reviewed

Grazie Matteo, I committed the patch to 0.94

> "Inconsistent configuration" for SchemaMetrics is always shown
> --
>
> Key: HBASE-11188
> URL: https://issues.apache.org/jira/browse/HBASE-11188
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.94.19
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
> Fix For: 0.94.20
>
> Attachments: HBASE-11188-0.94-v2.patch, HBASE-11188-0.94.patch
>
>
> Some users have been complaining about this message:
> {noformat}
> ERROR org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics: 
> Inconsistent configuration. Previous configuration for using table name in 
> metrics: true, new configuration: false
> {noformat}
> The interesting thing is that we see it with default configurations, which 
> made me think that some code path must have been passing the wrong thing. I 
> found that if SchemaConfigured is passed a null Configuration in its 
> constructor that it will then pass null to SchemaMetrics#configureGlobally 
> which will interpret useTableName as being false:
> {code}
>   public static void configureGlobally(Configuration conf) {
> if (conf != null) {
>   final boolean useTableNameNew =
>   conf.getBoolean(SHOW_TABLE_NAME_CONF_KEY, false);
>   setUseTableName(useTableNameNew);
> } else {
>   setUseTableName(false);
> }
>   }
> {code}
> It should be set to true since that's the new default, meaning we missed it 
> in HBASE-5671.
> I found one code path that passes a null configuration, StoreFile.Reader 
> extends SchemaConfigured and uses the constructor that only passes a Path, so 
> the Configuration is set to null.
> I'm planning on just passing true instead of false, fixing the problem for 
> almost everyone (those that disable this feature will get the error message). 
> IMO it's not worth more efforts since it's a 0.94-only problem and it's not 
> actually doing anything bad.
> I'm closing both HBASE-10990 and HBASE-10946 as duplicates.





Re: On coprocessor API evolution

2014-05-16 Thread James Taylor
+1 to HBASE-11125. The current incarnation of coprocessors exposes too much
of the guts of the implementation.


On Wed, May 14, 2014 at 6:13 PM, Andrew Purtell  wrote:

> Because coprocessor APIs are so tightly bound with internals, if we apply
> suggested rules like as mentioned on HBASE-11054:
>
>   I'd say policy should be no changes to method apis across minor
> versions
>
> This will lock coprocessor based components to the limitations of the API
> as we encounter them. Core code does not suffer this limitation, we are
> otherwise free to refactor and change internal methods. For example, if we
> apply this policy to the 0.98 branch, then we will have to abandon further
> security feature development there and move to trunk only. This is because
> we already are aware that coprocessor APIs as they stand are insufficient
> still.
>
> Coprocessor APIs are a special class of internal method. We have had a
> tension between allowing freedom of movement for developing them out and
> providing some measure of stability for implementors for a while.
>
> It is my belief that the way forward is something like HBASE-11125. Perhaps
> we can take this discussion to that JIRA and have this long overdue
> conversation.
>
> Regarding security features specifically, I would also like to call your
> attention to HBASE-11127. I think security has been an optional feature
> long enough, it is becoming a core requirement for the project, so should
> be moved into core. Sure, we can therefore sidestep any issues with
> coprocessor API sufficiency for hosting security features. However, in my
> opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> provide the relative stability long asked for by coprocessor API users, the
> latter to cleanly solve emerging issues with concurrency and versioning.
>
>
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>


[jira] [Resolved] (HBASE-10513) Provide user documentation for region replicas

2014-05-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-10513.
---

   Resolution: Fixed
Fix Version/s: (was: 0.99.0)
   hbase-10070
 Hadoop Flags: Reviewed

I've committed this. Thanks Stack and Devaraj for review. 

> Provide user documentation for region replicas
> --
>
> Key: HBASE-10513
> URL: https://issues.apache.org/jira/browse/HBASE-10513
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: UserdocumentationforHBASE-10070.pdf, 
> hbase-10513_v1.patch, timeline_consitency.png
>
>
> We need some documentation for the feature introduced in HBASE-10070. 





Re: On coprocessor API evolution

2014-05-16 Thread Ted Yu
Nicolas:
Can you give an example of using @since to tag new hooks?

I searched the hadoop and hbase codebases but didn't seem to find such an
annotation.

Cheers
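For illustration, @since tagging of a new hook combined with a Java 8 default method (per the proposal quoted below) could look like this sketch; the interface and method names are invented for the example, not real HBase APIs:

```java
// Sketch: a hook interface where a later-added hook carries a Javadoc
// @since tag and a default implementation, so observers compiled against
// the older interface keep working across minor releases.
public class SinceTagSketch {
    public interface RegionObserverExample {
        /** Present since the first release. */
        String preGet();

        /**
         * Hypothetical hook added later in a minor release.
         *
         * @since 0.98.3
         */
        default String prePut() {
            return "default"; // no-op default keeps old implementors working
        }
    }

    public static class OldObserver implements RegionObserverExample {
        // Implements only the original hook; inherits the prePut() default.
        public String preGet() { return "preGet"; }
    }

    public static void main(String[] args) {
        RegionObserverExample o = new OldObserver();
        System.out.println(o.preGet()); // preGet
        System.out.println(o.prePut()); // default
    }
}
```

A coprocessor supporting multiple minor versions could grep the interface's @since tags to see which hooks are safe to rely on.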


On Fri, May 16, 2014 at 1:18 AM, Nicolas Liochon  wrote:

> Hi,
>
> (With Apache still lagging on mails, it may be difficult to have a
> discussion...)
>
> For 1.0+, I think that registering observer as proposed in 11125 works
> well.
> For 0.98, could we do something like this?
>  - new coprocessor hooks can be added between minor releases
>  - existing coprocessors hooks are not removed between minor releases
>  - a coprocessor can extend the default implementation. Binary
> compatibility when migrating to a newer minor release is ensured.
>  - a coprocessor can implement directly the interface, but in this case the
> application needs to be updated and recompiled between minor releases.
>  - new hooks are always tagged with @since. This helps the coprocessor
> developer if he needs to support multiple minor versions.
>  - between major releases, everything can happen.
>
> fwiw, Java 8 supports default implementations in interfaces:
> http://docs.oracle.com/javase/tutorial/java/IandI/defaultmethods.html
>
> Cheers,
>
> Nicolas
>
>
>
>
>
> On Thu, May 15, 2014 at 3:13 AM, Andrew Purtell 
> wrote:
>
> > Because coprocessor APIs are so tightly bound with internals, if we apply
> > suggested rules like the one mentioned on HBASE-11054:
> >
> >   I'd say policy should be no changes to method apis across minor
> > versions
> >
> > This will lock coprocessor based components to the limitations of the API
> > as we encounter them. Core code does not suffer this limitation, we are
> > otherwise free to refactor and change internal methods. For example, if
> we
> > apply this policy to the 0.98 branch, then we will have to abandon
> further
> > security feature development there and move to trunk only. This is
> because
> > we already are aware that coprocessor APIs as they stand are insufficient
> > still.
> >
> > Coprocessor APIs are a special class of internal method. We have had a
> > tension between allowing freedom of movement for developing them out and
> > providing some measure of stability for implementors for a while.
> >
> > It is my belief that the way forward is something like HBASE-11125.
> Perhaps
> > we can take this discussion to that JIRA and have this long overdue
> > conversation.
> >
> > Regarding security features specifically, I would also like to call your
> > attention to HBASE-11127. I think security has been an optional feature
> > long enough, it is becoming a core requirement for the project, so should
> > be moved into core. Sure, we can therefore sidestep any issues with
> > coprocessor API sufficiency for hosting security features. However, in my
> > opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> > provide the relative stability long asked for by coprocessor API users,
> the
> > latter to cleanly solve emerging issues with concurrency and versioning.
> >
> >
> > --
> > Best regards,
> >
> >- Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >
>


[jira] [Resolved] (HBASE-10946) hbase.metrics.showTableName cause run “hbase xxx.Hfile” report Inconsistent configuration

2014-05-16 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-10946.


Resolution: Duplicate

I'm fixing this in HBASE-11188.

> hbase.metrics.showTableName  cause run “hbase xxx.Hfile”  report Inconsistent 
> configuration
> ---
>
> Key: HBASE-10946
> URL: https://issues.apache.org/jira/browse/HBASE-10946
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.17
> Environment:  hadoop 1.2.1  hbase0.94.17
>Reporter: Zhang Jingpeng
>Priority: Minor
>
> When I run hbase org.apache.hadoop.hbase.io.hfile.HFile, it prints the error
> "ERROR metrics.SchemaMetrics: Inconsistent configuration. Previous 
> configuration for using table name in metrics: true, new configuration: false"
> I found the reporting code in HFile's setUseTableName method, which is called
> via:
>  final boolean useTableNameNew =
>   conf.getBoolean(SHOW_TABLE_NAME_CONF_KEY, false);
>   setUseTableName(useTableNameNew);
> but hbase-default.xml sets hbase.metrics.showTableName to
> true
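The reported inconsistency can be sketched with a simplified stand-in for SchemaMetrics (not the real class): the first caller pins the flag, and a later caller that reads the key with a false default then conflicts with it:

```java
// Sketch of the "Inconsistent configuration" check described above.
public class SchemaMetricsSketch {
    private static Boolean useTableName; // first caller wins; later callers must agree

    // Simplified stand-in for SchemaMetrics.setUseTableName: returns an
    // error string on mismatch, null when consistent.
    public static String setUseTableName(boolean newValue) {
        if (useTableName != null && useTableName.booleanValue() != newValue) {
            return "Inconsistent configuration. Previous configuration for using "
                + "table name in metrics: " + useTableName
                + ", new configuration: " + newValue;
        }
        useTableName = Boolean.valueOf(newValue);
        return null;
    }

    public static void main(String[] args) {
        // First caller reads hbase-default.xml, where
        // hbase.metrics.showTableName is true.
        System.out.println(setUseTableName(true));  // null: consistent

        // The HFile tool path reads conf.getBoolean(SHOW_TABLE_NAME_CONF_KEY,
        // false); when the key is absent from its conf, the false default
        // conflicts with the earlier true.
        System.out.println(setUseTableName(false)); // the error message
    }
}
```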





[jira] [Created] (HBASE-11186) Improve TestExportSnapshot verifications

2014-05-16 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-11186:
---

 Summary: Improve TestExportSnapshot verifications
 Key: HBASE-11186
 URL: https://issues.apache.org/jira/browse/HBASE-11186
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0
 Attachments: HBASE-11186-v0.patch

* Remove some code by using the utils that we already have in 
SnapshotTestingUtil
* Add an Export with references for both the v1 and v2 formats
* Add verification of the actual number of files exported





[jira] [Created] (HBASE-11187) [89-fb] Limit the number of client threads per regionserver

2014-05-16 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-11187:
-

 Summary: [89-fb] Limit the number of client threads per 
regionserver
 Key: HBASE-11187
 URL: https://issues.apache.org/jira/browse/HBASE-11187
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.89-fb
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.89-fb


In the client each HTable can create one or more threads per region server.  
When there are lots of HTables and a region server is slow, this can result in 
an explosion of blocked threads.





[jira] [Resolved] (HBASE-10810) LoadTestTool should share the connection and connection pool

2014-05-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-10810.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed this to branch. Thanks Nick and Devaraj for review. 

> LoadTestTool should share the connection and connection pool
> 
>
> Key: HBASE-10810
> URL: https://issues.apache.org/jira/browse/HBASE-10810
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: hbase-10810_v1.patch
>
>
> While running the IT test from HBASE-10572, we've noticed that the number of 
> threads jumps to 4K's when CM actions are going on. 
> Our [~ndimiduk] summarizes the problem quite well: 
> MultiThreadedReader creates this pool for each HTable:
> {code}
> ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 
> keepAliveTime, TimeUnit.SECONDS,
> new SynchronousQueue<Runnable>(), 
> Threads.newDaemonThreadFactory("htable"));
> {code}
> This comes from the HTable creation
> {code}  
> public HTable(Configuration conf, final TableName tableName)
> {code}
> The javadoc even says Recommended.
> This is wrong.
> In this issue we can change the LTT sub classes to use the shared connection 
> object and initialize their tables using HConnection.getTable() rather than 
> new HTable(). 
> This is relevant to trunk as well, but there, since there is only one 
> outstanding RPC per thread, it is not as big a problem. 
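The proposed direction can be sketched as follows; Connection and Table here are simplified stand-ins for the pattern (one pool owned by the connection, shared by every table handle), not the real HConnection API:

```java
// Sketch: instead of each table handle building its own ThreadPoolExecutor
// (HTable's behavior described above), all handles obtained from one
// connection share the connection's single ExecutorService.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedPoolSketch {
    public static class Connection {
        public final ExecutorService pool = Executors.newCachedThreadPool();
        public Table getTable(String name) { return new Table(name, pool); }
    }

    public static class Table {
        public final String name;
        public final ExecutorService pool; // shared, not per-instance
        Table(String name, ExecutorService pool) {
            this.name = name;
            this.pool = pool;
        }
    }

    public static void main(String[] args) {
        Connection conn = new Connection();
        Table a = conn.getTable("t1");
        Table b = conn.getTable("t2");
        // Thread count stays bounded no matter how many tables are opened.
        System.out.println(a.pool == b.pool); // true: one pool for all tables
        conn.pool.shutdown();
    }
}
```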





Re: On coprocessor API evolution

2014-05-16 Thread Vladimir Rodionov
1) Have default implementations (abstract classes) for every interface from
Coprocessor API.
2) Advise coprocessor users not to implement interface directly but sub
class default impl.
3) Preserve backward compatibility by adding only new hooks/methods
4) DO NOT CHANGE existing API (no method renaming, method parameter type
changes etc)
5) Have regression tests to check backward compatibility.

-Vladimir



On Fri, May 16, 2014 at 9:13 AM, Michael Segel wrote:

> Until you move the coprocessor out of the RS space and into its own
> sandbox… saying security and coprocessor in the same sentence is a joke.
> Oh wait… you were serious… :-(
>
> I’d say there’s a significant rethink on coprocessors that’s required.
>
> Anyone running a secure (kerberos) cluster, will want to allow system
> coprocessors but then write a coprocessor that rejects user coprocessors.
>
> Just putting it out there…
>
> On May 15, 2014, at 2:13 AM, Andrew Purtell  wrote:
>
> > Because coprocessor APIs are so tightly bound with internals, if we apply
> > suggested rules like the one mentioned on HBASE-11054:
> >
> >  I'd say policy should be no changes to method apis across minor
> > versions
> >
> > This will lock coprocessor based components to the limitations of the API
> > as we encounter them. Core code does not suffer this limitation, we are
> > otherwise free to refactor and change internal methods. For example, if
> we
> > apply this policy to the 0.98 branch, then we will have to abandon
> further
> > security feature development there and move to trunk only. This is
> because
> > we already are aware that coprocessor APIs as they stand are insufficient
> > still.
> >
> > Coprocessor APIs are a special class of internal method. We have had a
> > tension between allowing freedom of movement for developing them out and
> > providing some measure of stability for implementors for a while.
> >
> > It is my belief that the way forward is something like HBASE-11125.
> Perhaps
> > we can take this discussion to that JIRA and have this long overdue
> > conversation.
> >
> > Regarding security features specifically, I would also like to call your
> > attention to HBASE-11127. I think security has been an optional feature
> > long enough, it is becoming a core requirement for the project, so should
> > be moved into core. Sure, we can therefore sidestep any issues with
> > coprocessor API sufficiency for hosting security features. However, in my
> > opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> > provide the relative stability long asked for by coprocessor API users,
> the
> > latter to cleanly solve emerging issues with concurrency and versioning.
> >
> >
> > --
> > Best regards,
> >
> >   - Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
>
>


[jira] [Resolved] (HBASE-4259) Investigate different memory allocation models for off heap caching.

2014-05-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-4259.
--

Resolution: Later

Resolving later.  No movement in years.

> Investigate different memory allocation models for off heap caching.
> 
>
> Key: HBASE-4259
> URL: https://issues.apache.org/jira/browse/HBASE-4259
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Li Pi
>Assignee: Li Pi
>Priority: Minor
>
> Currently, the off heap cache uses Memcached's allocation model, which works 
> reasonably well, but other memory allocation models, such as fragmented 
> writes, or buddy allocation, may be better suited to the task, and require 
> less configuration from the user's perspective.





[jira] [Created] (HBASE-11138) [0.89-fb] Check for an empty split key before creating a table

2014-05-16 Thread Gaurav Menghani (JIRA)
Gaurav Menghani created HBASE-11138:
---

 Summary: [0.89-fb] Check for an empty split key before creating a 
table
 Key: HBASE-11138
 URL: https://issues.apache.org/jira/browse/HBASE-11138
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.89-fb
Reporter: Gaurav Menghani
Assignee: Gaurav Menghani
Priority: Minor
 Fix For: 0.89-fb


HBaseAdmin#checkSplitKeys() doesn't check for empty split keys, which can cause 
multiple regions with the same start key (i.e., ""). This diff fixes that.
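A minimal sketch of the described check (illustrative, not the actual HBaseAdmin code): an empty split key must be rejected because it equals the implicit "" start key of the first region.

```java
// Sketch of an empty-split-key guard run before table creation.
public class SplitKeyCheckSketch {
    public static void checkSplitKeys(byte[][] splitKeys) {
        for (byte[] key : splitKeys) {
            if (key == null || key.length == 0) {
                throw new IllegalArgumentException(
                    "Empty split key is not allowed: it collides with the "
                    + "implicit \"\" start key of the first region");
            }
        }
    }

    public static void main(String[] args) {
        // Valid split keys pass through silently.
        checkSplitKeys(new byte[][] { "a".getBytes(), "m".getBytes() });
        try {
            checkSplitKeys(new byte[][] { "a".getBytes(), new byte[0] });
        } catch (IllegalArgumentException e) {
            System.out.println("rejected empty split key");
        }
    }
}
```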





Re: [common type encoding breakout] Re: HBase Hackathon @ Salesforce 05/06/2014 notes

2014-05-16 Thread Ryan Blue

On 05/15/2014 09:32 AM, James Taylor wrote:

@Ryan & Jon - thanks again for pursuing this - I think it'll be a big
improvement.

IMHO, it'd be good to add a Requirements section to the doc. If the
current Phoenix type system meets those requirements, then why not just
go with that?


Good idea. Part of the problem has been that we don't all have a clear 
picture of goals. Places where I think we need to come up with answers:


1. Are we targeting a backward-compatible encoding that can be used on 
existing tables?


  My answer: No, because this would dramatically increase the required 
size of implementations. Supporting existing Phoenix tables (and the 
UNSIGNED types) should be a separate issue. Also: as the experts in 
using the current Phoenix encoding, what would you like to fix?


2. Are we going to include choices for encoding for specific types, or 
are we going to choose one?


  My answer: Choose one. This is what the DataType (or similar) APIs 
are for. This is just one encoding spec and there can be more.


Let's talk about these today, as well as some of the trade-offs of the 
Phoenix encoding to figure out those requirements. It is very similar to 
the proposed encoding, except that VARCHAR and BINARY are treated 
differently and the additional tracking bytes in the key are type 
ordinals and not field position-based tags. Basically, can we live with 
variable-length binary only at the end of the key, or do we need a 
requirement that it can be any field?



I think we need a binary serialization spec that includes compound keys
in the row key plus all the SQL primitive data types that we want to


I'm not sure I understand. What does the current spec not support that 
it should?



support (minimally all the SQL types that Phoenix currently supports).


I agree. The current spec supports all of the current Phoenix types, 
minus the backward-compatible types based on Bytes. If there are types 
missing from the list at the end of the doc, please add them or tell me 
which ones so that I can.


I also clarified in the doc why there are few memcmp encodings, but this 
does not limit the types in the spec. Is this clear enough?


For the UNSIGNED Bytes types, I'm fine adding them if we need to for 
backward-compatibility. This comes down to whether this encoding is 
going to be used along-side existing data in the same table or if it 
will be a new table format.


rb

--
Ryan Blue
Software Engineer
Cloudera, Inc.


[jira] [Created] (HBASE-11189) Subprocedure should be marked as complete upon failure

2014-05-16 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11189:
--

 Summary: Subprocedure should be marked as complete upon failure
 Key: HBASE-11189
 URL: https://issues.apache.org/jira/browse/HBASE-11189
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
 Attachments: 11189-v1.txt

ProcedureMember#submitSubprocedure() uses the following check:
{code}
  if (!rsub.isComplete()) {
LOG.error("Subproc '" + procName + "' is already running. Bailing out");
return false;
  }
{code}
If a subprocedure of that name previously ran but failed, its complete field 
would stay false, leading to early bailout.

A failed subprocedure should mark itself complete.
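A simplified sketch of the suggested fix, with stand-ins for Subprocedure and the submit guard (not the real ProcedureMember classes):

```java
// Sketch: a subprocedure marks itself complete even when it fails, so a
// later submission under the same name is not mistaken for a still-running
// instance and bailed out.
public class SubprocedureSketch {
    public static class Subprocedure {
        private boolean complete = false;
        public boolean isComplete() { return complete; }

        public void run(boolean fail) {
            try {
                if (fail) throw new RuntimeException("subprocedure failed");
                // ... real work would happen here ...
            } finally {
                complete = true; // the fix: mark complete on success AND failure
            }
        }
    }

    // Mirrors the guard quoted above from ProcedureMember#submitSubprocedure.
    public static boolean submit(Subprocedure previous) {
        if (previous != null && !previous.isComplete()) {
            System.out.println("already running, bailing out");
            return false;
        }
        return true; // safe to start a new instance under the same name
    }

    public static void main(String[] args) {
        Subprocedure failed = new Subprocedure();
        try {
            failed.run(true);
        } catch (RuntimeException expected) { }
        // Without the finally block, complete would still be false here
        // and this submit would bail out.
        System.out.println(submit(failed)); // true
    }
}
```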





[jira] [Created] (HBASE-11185) Parallelize Snapshot operations

2014-05-16 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-11185:
---

 Summary: Parallelize Snapshot operations
 Key: HBASE-11185
 URL: https://issues.apache.org/jira/browse/HBASE-11185
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0
 Attachments: HBASE-11185-v0.patch

When SnapshotInfo or snapshot verification is executed against a remote path, 
it may take a while, since the code is mainly composed of sequential calls 
to the fs.
This patch will parallelize all the snapshot operations using a thread pool to 
dispatch requests. The size of the pool is tunable by using  
"hbase.snapshot.thread.pool.max"





[jira] [Created] (HBASE-11142) Taking snapshots can leave sockets on the master stuck in CLOSE_WAIT state

2014-05-16 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-11142:
--

 Summary: Taking snapshots can leave sockets on the master stuck in 
CLOSE_WAIT state
 Key: HBASE-11142
 URL: https://issues.apache.org/jira/browse/HBASE-11142
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.2
Reporter: Andrew Purtell


As reported by Hansi Klose on user@. 
{quote}
we use a script to take snapshots on a regular basis and delete old ones.
We noticed that the web interface of the hbase master was not working any 
more because of too many open files.
The master reached its open file limit of 32768.
When I ran lsof I saw that there were a lot of TCP CLOSE_WAIT handles open 
with the regionserver as target.
On the regionserver there is just one connection to the hbase master.
I can see that the count of the CLOSE_WAIT handles grows each time
I take a snapshot. When I delete one, nothing changes.
Each time I take a snapshot there are 20 - 30 new CLOSE_WAIT handles.
{quote}





Re: jetty 9 on hbase? (HADOOP-10075)

2014-05-16 Thread Ted Yu
HADOOP-10075 is not integrated yet.
It doesn't have Fix Version(s) either.

For HBase, you can change the version strings in pom.xml to match the ones
from your hadoop build :
6.1.26
6.1.14

Cheers


On Fri, May 16, 2014 at 2:49 PM, Demai Ni  wrote:

> hi, folks,
>
> wondering anyone already began to use Jetty 9 with HBase. We are trying to
> use Jetty9 in hadoop, and got the following exceptions when start HBase:
> ...
> java.lang.ClassCastException:
> org.eclipse.jetty.servlet.ServletContextHandler incompatible with
> org.mortbay.jetty.servlet.Context
> at
>
> org.apache.hadoop.hbase.util.InfoServer.fixupLogsServletLocation(InfoServer.java:74)
> ...
>
> caused by the mis-match of Jetty9 and Jetty6(current HBase dependency).
>
> On hadoop side HADOOP-10075 is pending. I am thinking about changing hbase
> code(0.96) to bypass the problem.
>
> Many thanks.
>
> Demai
>


Re: [common type encoding breakout] Re: HBase Hackathon @ Salesforce 05/06/2014 notes

2014-05-16 Thread James Taylor
@Ryan & Jon - thanks again for pursuing this - I think it'll be a big
improvement.

IMHO, it'd be good to add a Requirements section to the doc. If the current
Phoenix type system meets those requirements, then why not just go with
that?

I think we need a binary serialization spec that includes compound keys in
the row key plus all the SQL primitive data types that we want to support
(minimally all the SQL types that Phoenix currently supports).

@Nick - I like the abstraction of the DataType, but that doesn't solve the
problem for non-Java usage. I'm also a bit worried that it might become a
bottleneck for implementors of the serialization spec, as there are many
different platform-specific operations that will likely be done on the row
key. We can try to get everything necessary into the DataType interface, but
I suspect that implementors will need to go under the covers at times
rather than waiting for another release of the module that defines the
DataType interface.

Thanks,
James


On Wed, May 14, 2014 at 5:17 PM, Nick Dimiduk  wrote:

> On Tue, May 13, 2014 at 3:35 PM, Ryan Blue  wrote:
>
>
> > I think there's a little confusion in what we are trying to accomplish.
> > What I want to do is to write a minimal specification for how to store a
> > set of types. I'm not trying to leave much flexibility, what I want is
> > clarity and simplicity.
> >
>
> This is admirable and was my initial goal as well. The trouble is, you
> cannot please everyone, current users and new. So, we decided it was better
> to provide a pluggable framework for extension + some basic implementations
> than to implement a closed system.
>
> This is similar to OrderedBytes work, but a subset of it. A good example is
> > that while it's possible to use different encodings (avro, protobuf,
> > thrift, ...) it isn't practical for an application to support all of
> those
> > encodings. So for interoperability between Kite, Phoenix, and others, I
> > want a set of requirements that is as small as possible.
> >
>
> Minimal is good. The surface area of o.a.h.h.types is as large as it is
> because there was always "just one more" type to support or encoding to
> provide.
>
> To make the requirements small, I used off-the-shelf protobuf [1] plus a
> > small set of memcmp encodings: ints, floats, and binary. That way, we
> don't
> > have to talk about how to make a memcmp Date in bytes, for example. A
> Date
> > is an int, which we know how to encode, and we can agree separately on
> how
> > to a Date is represented (e.g., Julian vs unix epoch). [2] The same
> applies
> > to binary, where the encoding handles sorting and nulls, but not
> charsets.
> >
>
> I think you should focus on the primitives you want to support. The
> compound type stuff (ie, "rowkey encodings") is a can of worms because you
> need to support existing users, new users, novice users, and advanced
> users. Hence the interop between the DataType interface and the Struct
> classes. These work together to support all of these use-cases with the
> same basic code. For example, the protobuf encoding of position|wire-type +
> encoded value is easily implemented using Struct.
>
> I firmly believe that we cannot dictate rowkey composition. Applications,
> however, are free to implement their own. By using the common DataType
> interface, they can all interoperate.
>
> This is the largest reason why I didn't include OrderedBytes directly in
> > the spec. For example, OB includes a varint that I don't think is
> needed. I
> > don't object to its inclusion in OB, but I think it isn't a necessary
> > requirement for implementing this spec.
> >
>
> Again, the surface area is as it is because of community consensus during
> the first phase of implementation. That consensus disagrees with you.
>
> I think there are 3 things to clear up:
> > 1. What types from OB are not included, and why?
> > 2. Why not use OB-style structs?
> > 3. Why choose protobuf for complex records?
> >
> > Does that sound like a reasonable direction to head with this discussion?
> >
>
> Yes, sounds great!
>
> As far as the DataType API, I think that works great with what I'm trying
> > to do. We'd build a DataType implementation for the encoding and the API
> > will let applications handle the underlying encoding. And other encoding
> > strategies can be swapped in as well, if we want to address shortcomings
> in
> > this one, or have another for a different use case.
> >
>
> I'm quite pleased to hear that. Applications like Kite, Phoenix, Kiji are
> the target audience of the DataType API.
>
> Thank you for picking back up this baton. It's sat for too long.
>
> -n
>
> On 05/13/2014 02:33 PM, Nick Dimiduk wrote:
> >
> >> Breaking off hackathon thread.
> >>
> >> The conversation around HBASE-8089 concluded with two points:
> >>   - HBase should provide support for order-preserving encodings while
> >> not dropping support for the existing encoding formats.
> >>   - HBase is not in the bu

jetty 9 on hbase? (HADOOP-10075)

2014-05-16 Thread Demai Ni
hi, folks,

wondering anyone already began to use Jetty 9 with HBase. We are trying to
use Jetty9 in hadoop, and got the following exceptions when start HBase:
...
java.lang.ClassCastException:
org.eclipse.jetty.servlet.ServletContextHandler incompatible with
org.mortbay.jetty.servlet.Context
at
org.apache.hadoop.hbase.util.InfoServer.fixupLogsServletLocation(InfoServer.java:74)
...

caused by the mis-match of Jetty9 and Jetty6(current HBase dependency).

On hadoop side HADOOP-10075 is pending. I am thinking about changing hbase
code(0.96) to bypass the problem.

Many thanks.

Demai


0.94.20 soon

2014-05-16 Thread lars hofhansl
I'd like to do an RC for 0.94.20 soon. Looks like it'd be a nice and small 
release, only 7 fixes so far. 0.94 is maturing. :)
Anything that should get pulled in?

Thanks.

-- Lars

Hadoop 1.0, 1.1, short circuit reading, and replication

2014-05-16 Thread lars hofhansl
Doing some local testing I noticed that replication does not work with Hadoop 
1.0.x or Hadoop 1.1.x when SSR is enabled.
I was banging my head against it for a while until I realized what the issue was, 
then I also came across this: https://issues.apache.org/jira/browse/HDFS-2757 
(which J-D fixed).

We should document that replication will not work with Hadoop 1.0 when SSR is 
enabled.

Or am I missing something?

-- Lars

[jira] [Created] (HBASE-11147) Avoid creating List of KeyValue in FilterBase#filterRowCells(List)

2014-05-16 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11147:
--

 Summary: Avoid creating List of KeyValue in 
FilterBase#filterRowCells(List)
 Key: HBASE-11147
 URL: https://issues.apache.org/jira/browse/HBASE-11147
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Priority: Minor


Currently a List of KeyValue's is always created:
{code}
List<KeyValue> kvs = new ArrayList<KeyValue>(ignored.size());
{code}
When the passed ignored List is already of KeyValue (which is the only 
implementation of Cell at the moment), the above step should be avoided.

This would reduce creation of short-lived objects.
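The proposed optimization can be sketched generically (Cell and KeyValue below are minimal stand-ins, not the HBase classes): return the caller's list untouched when every element already has the needed type, and allocate a copy only when a conversion is genuinely required.

```java
// Sketch: skip the defensive ArrayList allocation when the incoming list
// already holds the element type the filter needs.
import java.util.ArrayList;
import java.util.List;

public class AvoidCopySketch {
    public interface Cell { }
    public static class KeyValue implements Cell { }

    // Returns the original list when every element is already a KeyValue,
    // copying only when a conversion is genuinely required.
    @SuppressWarnings("unchecked")
    public static List<KeyValue> asKeyValues(List<? extends Cell> cells) {
        for (Cell c : cells) {
            if (!(c instanceof KeyValue)) {
                List<KeyValue> copy = new ArrayList<KeyValue>(cells.size());
                // the real code would convert each Cell to a KeyValue here
                return copy;
            }
        }
        return (List<KeyValue>) cells; // no short-lived copy allocated
    }

    public static void main(String[] args) {
        List<KeyValue> cells = new ArrayList<KeyValue>();
        cells.add(new KeyValue());
        // Same list comes back: no garbage created on the common path.
        System.out.println(asKeyValues(cells) == cells); // true
    }
}
```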





[jira] [Resolved] (HBASE-10990) Running CompressionTest yields unrelated schema metrics configuration errors.

2014-05-16 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-10990.


Resolution: Duplicate

I'm fixing this in HBASE-11188.

> Running CompressionTest yields unrelated schema metrics configuration errors.
> -
>
> Key: HBASE-10990
> URL: https://issues.apache.org/jira/browse/HBASE-10990
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.18
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-10990.patch
>
>
> In a vanilla configuration, running CompressionTest yields the following 
> error:
> sudo -u hdfs hbase org.apache.hadoop.hbase.util.CompressionTest 
> /path/to/hfile gz
> Output:
> 13/03/07 14:49:40 ERROR metrics.SchemaMetrics: Inconsistent configuration. 
> Previous configuration for using table name in metrics: true, new 
> configuration: false





[jira] [Resolved] (HBASE-10357) Failover RPC's for scans

2014-05-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-10357.
---

   Resolution: Fixed
Fix Version/s: (was: 0.99.0)
 Hadoop Flags: Reviewed

Ok, checked the latest version as well and committed to branch on behalf of 
Devaraj (Still having some infra issues). Thanks Stack and Nicolas for looking. 
 

> Failover RPC's for scans
> 
>
> Key: HBASE-10357
> URL: https://issues.apache.org/jira/browse/HBASE-10357
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: hbase-10070
>
> Attachments: 10357-1.txt, 10357-2.txt, 10357-3.2.txt, 10357-3.txt, 
> 10357-4.2.txt, 10357-4.3.1.txt, 10357-4.3.2.txt, 10357-4.3.txt, 
> 10357-4.4.txt, 10357-4.txt
>
>
> This is extension of HBASE-10355 to add failover support for scans. 





[jira] [Resolved] (HBASE-11193) hbase web UI show wrong Catalog Table Description

2014-05-16 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari resolved HBASE-11193.
-

Resolution: Duplicate

Like Ted said, duplicate of HBASE-10611. Closing as duplicate.

> hbase web UI show wrong Catalog Table Description
> -
>
> Key: HBASE-11193
> URL: https://issues.apache.org/jira/browse/HBASE-11193
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Guo Ruijing
>
> On security cluster, check the hbase master web page and look into 'Catalog 
> Tables' on 'Tables' Section, the Description for 'hbase:acl' table is not 
> expected:
> –
> Table Name Description
> *hbase:acl The .NAMESPACE. table holds information about namespaces.*
> hbase:meta The hbase:meta table holds references to all User Table regions
> hbase:namespace The .NAMESPACE. table holds information about namespaces





[jira] [Created] (HBASE-11197) Region could remain unassigned if regionserver crashes

2014-05-16 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-11197:
---

 Summary: Region could remain unassigned if regionserver crashes
 Key: HBASE-11197
 URL: https://issues.apache.org/jira/browse/HBASE-11197
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


While looking into the test failure 
testVisibilityLabelsOnKillingOfRSContainingLabelsTable

I found this is what happened:

1. try to assign a region to a region server;
2. master creates a znode, and sends an openRegion request to the rs;
3. rs gets the request and sends back a response, then crashes;
4. try to assign the region again with forceNewPlan = true;
5. since the region is in transition, master tries to close it and gets a region 
server stopped exception;
6. master offlines the region and removes it from transition; but can't assign 
the region since the dead server is not processed;
7. now SSH finally kicks in, tries to assign this region again;
8. SSH will fail to assign it since the znode is there already.

We should clean up the znode when force-offlining a region.






Re: jetty 9 on hbase? (HADOOP-10075)

2014-05-16 Thread Demai Ni
Ted, 

Thanks. I am changing the poms (both the top-level one and the one under 
hbase-server). Due to the incompatibility between Jetty 6 and 9, the info server 
and REST server need code changes too. Well, something to play with this weekend.

Demai on the run

On May 16, 2014, at 4:11 PM, Ted Yu  wrote:

> HADOOP-10075 is not integrated yet.
> It doesn't have Fix Version(s) either.
> 
> For HBase, you can change the version strings in pom.xml to match the ones
> from your hadoop build :
>6.1.26
>6.1.14
> 
> Cheers
> 
> 
> On Fri, May 16, 2014 at 2:49 PM, Demai Ni  wrote:
> 
>> hi, folks,
>> 
>> wondering whether anyone has already begun to use Jetty 9 with HBase. We are trying
>> to use Jetty 9 in Hadoop, and got the following exceptions when starting HBase:
>> ...
>> java.lang.ClassCastException:
>> org.eclipse.jetty.servlet.ServletContextHandler incompatible with
>> org.mortbay.jetty.servlet.Context
>>at
>> 
>> org.apache.hadoop.hbase.util.InfoServer.fixupLogsServletLocation(InfoServer.java:74)
>> ...
>> 
>> caused by the mismatch between Jetty 9 and Jetty 6 (the current HBase dependency).
>> 
>> On the hadoop side, HADOOP-10075 is pending. I am thinking about changing hbase
>> code (0.96) to bypass the problem.
>> 
>> Many thanks.
>> 
>> Demai
>> 
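Ted's version-string suggestion amounts to overriding the Jetty properties in the top-level pom.xml. A hypothetical sketch follows; the exact property names and the correct Jetty 9 artifact version need to be checked against the actual pom, and Jetty 9 also moved from `org.mortbay.jetty` to `org.eclipse.jetty` coordinates, which is why the code changes mentioned above are needed as well:

```xml
<!-- Hypothetical sketch only: property names and versions are assumptions. -->
<properties>
  <!-- the defaults quoted in the thread above were 6.1.26 / 6.1.14 -->
  <jetty.version>9.1.5.v20140505</jetty.version>
  <jetty.jspapi.version>9.1.5.v20140505</jetty.jspapi.version>
</properties>
```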


Re: On coprocessor API evolution

2014-05-16 Thread Andrew Purtell
Thanks Vladimir, I added your points to the discussion on HBASE-11125.


On Sat, May 17, 2014 at 1:59 AM, Vladimir Rodionov
wrote:

> 1) Have default implementations (abstract classes) for every interface in the
> Coprocessor API.
> 2) Advise coprocessor users not to implement interfaces directly but to
> subclass the default impl.
> 3) Preserve backward compatibility by only adding new hooks/methods.
> 4) DO NOT CHANGE existing APIs (no method renaming, method parameter type
> changes, etc.)
> 5) Have regression tests to check backward compatibility.
>
> -Vladimir
>
>
>
> > On Fri, May 16, 2014 at 9:13 AM, Michael Segel wrote:
>
> > Until you move the coprocessor out of the RS space and into its own
> > sandbox… saying security and coprocessor in the same sentence is a joke.
> > Oh wait… you were serious… :-(
> >
> > I’d say there’s a significant rethink on coprocessors that’s required.
> >
> > Anyone running a secure (kerberos) cluster will want to allow system
> > coprocessors but then write a coprocessor that rejects user coprocessors.
> >
> > Just putting it out there…
> >
> > On May 15, 2014, at 2:13 AM, Andrew Purtell  wrote:
> >
> > > Because coprocessor APIs are so tightly bound with internals, if we apply
> > > suggested rules like the one mentioned on HBASE-11054:
> > >
> > >  I'd say policy should be no changes to method apis across minor
> > > versions
> > >
> > > This will lock coprocessor based components to the limitations of the
> API
> > > as we encounter them. Core code does not suffer this limitation, we are
> > > otherwise free to refactor and change internal methods. For example, if
> > we
> > > apply this policy to the 0.98 branch, then we will have to abandon
> > further
> > > security feature development there and move to trunk only. This is
> > because
> > > we already are aware that coprocessor APIs as they stand are
> insufficient
> > > still.
> > >
> > > Coprocessor APIs are a special class of internal method. We have had a
> > > tension between allowing freedom of movement for developing them out
> and
> > > providing some measure of stability for implementors for a while.
> > >
> > > It is my belief that the way forward is something like HBASE-11125.
> > Perhaps
> > > we can take this discussion to that JIRA and have this long overdue
> > > conversation.
> > >
> > > Regarding security features specifically, I would also like to call
> your
> > > attention to HBASE-11127. I think security has been an optional feature
> > > long enough, it is becoming a core requirement for the project, so
> should
> > > be moved into core. Sure, we can therefore sidestep any issues with
> > > coprocessor API sufficiency for hosting security features. However, in
> my
> > > opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> > > provide the relative stability long asked for by coprocessor API users,
> > the
> > > latter to cleanly solve emerging issues with concurrency and
> versioning.
> > >
> > >
> > > --
> > > Best regards,
> > >
> > >   - Andy
> > >
> > > Problems worthy of attack prove their worth by hitting back. - Piet
> Hein
> > > (via Tom White)
> >
> >
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: jetty 9 on hbase? (HADOOP-10075)

2014-05-16 Thread Ted Yu
You may want to open a JIRA with your findings.

Cheers


On Fri, May 16, 2014 at 7:49 PM, Demai Ni  wrote:

> Ted,
>
> Thanks. I am changing the poms (both the top-level one and the one under hbase-server).
> Due to the incompatibilities between Jetty 6 and 9, the InfoServer and REST server need
> code changes too. Well, something to play with this weekend.
>
> Demai on the run
>
> On May 16, 2014, at 4:11 PM, Ted Yu  wrote:
>
> > HADOOP-10075 is not integrated yet.
> > It doesn't have Fix Version(s) either.
> >
> > For HBase, you can change the version strings in pom.xml to match the
> ones
> > from your hadoop build :
> >6.1.26
> >6.1.14
> >
> > Cheers
> >
> >
> > On Fri, May 16, 2014 at 2:49 PM, Demai Ni  wrote:
> >
> >> hi, folks,
> >>
> >> wondering whether anyone has already begun to use Jetty 9 with HBase. We are
> >> trying to use Jetty 9 in Hadoop, and got the following exceptions when starting
> >> HBase:
> >> ...
> >> java.lang.ClassCastException:
> >> org.eclipse.jetty.servlet.ServletContextHandler incompatible with
> >> org.mortbay.jetty.servlet.Context
> >>at
> >>
> >>
> org.apache.hadoop.hbase.util.InfoServer.fixupLogsServletLocation(InfoServer.java:74)
> >> ...
> >>
> >> caused by the mismatch between Jetty 9 and Jetty 6 (the current HBase dependency).
> >>
> >> On hadoop side HADOOP-10075 is pending. I am thinking about changing
> hbase
> >> code(0.96) to bypass the problem.
> >>
> >> Many thanks.
> >>
> >> Demai
> >>
>


Re: On coprocessor API evolution

2014-05-16 Thread Andrew Purtell
Michael,

As you know, we have implemented security features with coprocessors
precisely because they can be interposed on internal actions to make
authoritative decisions in-process. Coprocessors are a way to have
composable internal extensions. They don't have and probably never will
have magic fairy security dust. We do trust the security coprocessor code
because it was developed by the project. That is not the same thing as
saying you can have 'security' and execute arbitrary user code in-process
as a coprocessor. Just want to clear that up for you.

> will want to allow system coprocessors but then write a coprocessor that
> rejects user coprocessors.

That's a reasonable point.




On Sat, May 17, 2014 at 12:13 AM, Michael Segel
wrote:

> Until you move the coprocessor out of the RS space and into its own
> sandbox… saying security and coprocessor in the same sentence is a joke.
> Oh wait… you were serious… :-(
>
> I’d say there’s a significant rethink on coprocessors that’s required.
>
> Anyone running a secure (kerberos) cluster will want to allow system
> coprocessors but then write a coprocessor that rejects user coprocessors.
>
> Just putting it out there…
>
> On May 15, 2014, at 2:13 AM, Andrew Purtell  wrote:
>
> > Because coprocessor APIs are so tightly bound with internals, if we apply
> > suggested rules like the one mentioned on HBASE-11054:
> >
> >  I'd say policy should be no changes to method apis across minor
> > versions
> >
> > This will lock coprocessor based components to the limitations of the API
> > as we encounter them. Core code does not suffer this limitation, we are
> > otherwise free to refactor and change internal methods. For example, if
> we
> > apply this policy to the 0.98 branch, then we will have to abandon
> further
> > security feature development there and move to trunk only. This is
> because
> > we already are aware that coprocessor APIs as they stand are insufficient
> > still.
> >
> > Coprocessor APIs are a special class of internal method. We have had a
> > tension between allowing freedom of movement for developing them out and
> > providing some measure of stability for implementors for a while.
> >
> > It is my belief that the way forward is something like HBASE-11125.
> Perhaps
> > we can take this discussion to that JIRA and have this long overdue
> > conversation.
> >
> > Regarding security features specifically, I would also like to call your
> > attention to HBASE-11127. I think security has been an optional feature
> > long enough, it is becoming a core requirement for the project, so should
> > be moved into core. Sure, we can therefore sidestep any issues with
> > coprocessor API sufficiency for hosting security features. However, in my
> > opinion we should pursue both HBASE-11125 and HBASE-11127; the first to
> > provide the relative stability long asked for by coprocessor API users,
> the
> > latter to cleanly solve emerging issues with concurrency and versioning.
> >
> >
> > --
> > Best regards,
> >
> >   - Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
>
>


-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


[jira] [Created] (HBASE-11191) HBase ClusterId File Empty Check Logic

2014-05-16 Thread sunjingtao (JIRA)
sunjingtao created HBASE-11191:
--

 Summary: HBase ClusterId File Empty Check Logic
 Key: HBASE-11191
 URL: https://issues.apache.org/jira/browse/HBASE-11191
 Project: HBase
  Issue Type: Bug
 Environment: HBase 0.94+Hadoop2.2.0+Zookeeper3.4.5
Reporter: sunjingtao


if the clusterid file exists but is empty, then the following check logic in 
MasterFileSystem.java has no effect:
if (!FSUtils.checkClusterIdExists(fs, rd, c.getInt(
HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000))) {
  FSUtils.setClusterId(fs, rd, UUID.randomUUID().toString(), c.getInt(
  HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000));
}
clusterId = FSUtils.getClusterId(fs, rd);
because the checkClusterIdExists method only checks that the path exists:
Path filePath = new Path(rootdir, HConstants.CLUSTER_ID_FILE_NAME);
return fs.exists(filePath);

in my case, the file exists but is empty, so the cluster id read back is null, 
which causes a NullPointerException:

java.lang.NullPointerException
at org.apache.hadoop.hbase.util.Bytes.toBytes(Bytes.java:441)
at 
org.apache.hadoop.hbase.zookeeper.ClusterId.setClusterId(ClusterId.java:72)
at 
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:581)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:433)
at java.lang.Thread.run(Thread.java:745)

Is this a bug? Please confirm!
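The report boils down to: an existence check alone cannot tell an empty clusterid file from a valid one. A self-contained sketch of the distinction, using plain `java.nio.file` in place of the Hadoop FileSystem API (method names are hypothetical illustrations, not the HBase ones):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClusterIdCheck {
    // Mirrors the flawed check: existence alone does not prove the file has content.
    static boolean checkClusterIdExists(Path filePath) throws IOException {
        return Files.exists(filePath);
    }

    // A stricter check that also rejects an empty file, so a fresh id would be written.
    static boolean checkClusterIdReadable(Path filePath) throws IOException {
        return Files.exists(filePath) && Files.size(filePath) > 0;
    }

    public static void main(String[] args) throws IOException {
        // An empty file, like the hbase.id file in the report above.
        Path id = Files.createTempFile("hbase.id", null);
        System.out.println("exists check passes: " + checkClusterIdExists(id));
        System.out.println("non-empty check passes: " + checkClusterIdReadable(id));
        Files.delete(id);
    }
}
```

With the stricter check, the master would regenerate the cluster id instead of reading back null and hitting the NullPointerException.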



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: On coprocessor API evolution

2014-05-16 Thread Andrew Purtell
On Fri, May 16, 2014 at 1:47 AM, Stack  wrote:

> I had just read a user mail on the phoenix list where someone thought that
> phoenix had been broken going from 0.98.1 to 0.98.2 (apparently its fine).
>
> Let's write up an agreement.  We've talked about this topic a bunch.
>
> 1. No guarantees minor version to minor version?  APIs can be broken any
> time.
> 2. API may change across minor versions but SHOULD not break existing users
> (No guarantees!)?
> 3. API may change across minor versions but WILL not break existing users?
>
> Any other permutations? (I'm good w/ #2 and #3.  #1 will just make it so no
> one will use CPs).
>

I would say #3 (WILL NOT), but with the very important caveat that the
coprocessor API user must extend one of the abstract classes rather than
implement the interface directly. As Vladimir points out, we must have an
abstract base class for every interface to make that possible.
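The abstract-base-class pattern can be sketched in a few lines. This is a toy model, not the real coprocessor API; the interface and hook names here are hypothetical:

```java
// Toy model of the compatibility pattern: users subclass the abstract base,
// never implement the interface directly, so new hooks can be added without
// breaking existing coprocessors.
interface RegionObserver {           // hypothetical coprocessor interface
    void preGet();
    void postGet();                  // imagine this hook was added in a minor release
}

abstract class BaseRegionObserver implements RegionObserver {
    @Override public void preGet() {}   // no-op defaults absorb newly added methods
    @Override public void postGet() {}
}

// A user coprocessor written before postGet() existed still compiles and runs.
class AuditObserver extends BaseRegionObserver {
    @Override public void preGet() { System.out.println("audit: preGet"); }
}

public class CompatSketch {
    public static void main(String[] args) {
        RegionObserver cp = new AuditObserver();
        cp.preGet();   // user hook fires
        cp.postGet();  // inherited no-op: no breakage from the new hook
    }
}
```

Had AuditObserver implemented RegionObserver directly, adding postGet() to the interface would have broken its compilation; subclassing the base class makes the addition invisible.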

For 1.0, or at least as soon as possible (it could be a major undertaking -
what say you all to getting it done?), I'd like to layer an API like the one
proposed in HBASE-11125, where we can provide strong compatibility
guarantees at that layer, have the actual hooks implemented in a layer
below, and get away with quite weak guarantees in the lower layer to afford
us the freedom to refactor.

-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


[jira] [Resolved] (HBASE-10768) hbase/bin/hbase-cleanup.sh has wrong usage string

2014-05-16 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez resolved HBASE-10768.
---

Resolution: Duplicate

Dup of HBASE-10769

> hbase/bin/hbase-cleanup.sh has wrong usage string
> -
>
> Key: HBASE-10768
> URL: https://issues.apache.org/jira/browse/HBASE-10768
> Project: HBase
>  Issue Type: Improvement
>  Components: Usability
>Affects Versions: 0.96.1, 0.98.1
>Reporter: Vamsee Yarlagadda
>Priority: Trivial
>
> Looks like hbase-cleanup.sh has a wrong Usage string.
> https://github.com/apache/hbase/blob/trunk/bin/hbase-cleanup.sh#L34
> Current Usage string:
> {code}
> [systest@search-testing-c5-ncm-1 ~]$ echo 
> `/usr/lib/hbase/bin/hbase-cleanup.sh`
> Usage: hbase-cleanup.sh (zk|hdfs|all)
> {code}
> But ideally, digging into the logic of hbase-cleanup.sh, it should be modified 
> to
> {code}
> [systest@search-testing-c5-ncm-1 ~]$ echo 
> `/usr/lib/hbase/bin/hbase-cleanup.sh`
> Usage: hbase-cleanup.sh (--cleanZk|--cleanHdfs|--cleanAll)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)