Re: ABORTING region server and following HBase cluster "crash"

2018-11-02 Thread Vincent Poon
Indexes in Phoenix should not, in theory, cause any cluster outage.  An index
write failure should just disable the index, not cause a crash.
In practice, there have been some bugs around race conditions, the most
dangerous of which accidentally triggers a KillServerOnFailurePolicy, which
can then cascade.
That policy is there for legacy reasons, I believe because at the time that
was the only way to keep indexes consistent - kill the RS and replay from
WAL.
There is now a partial rebuilder which detects when an index has been
disabled due to a write failure, and asynchronously attempts to rebuild the
index.  Killing the RS is supposed to be a last-ditch effort, used only if the
index could not be disabled (because otherwise your index is out of sync
but still active, and your queries will return incorrect results).
PHOENIX-4977 has now made the policy configurable.  If you would rather, in the
worst case, have your index potentially get out of sync instead of having
RSs killed, you can set the policy to LeaveIndexActiveFailurePolicy.
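
If you do relax the policy, it is worth watching for indexes that get disabled
by a write failure.  As a rough sketch only (hypothetical PQS endpoint; the
SYSTEM.CATALOG query and the one-character state codes should be double-checked
against your Phoenix version), you can poll index state through PQS with the
python phoenixdb adapter discussed elsewhere in this thread:

    import phoenixdb

    # Hypothetical PQS endpoint; adjust for your deployment.
    conn = phoenixdb.connect('http://pqs-host:8765/', autocommit=True)
    cur = conn.cursor()
    # SYSTEM.CATALOG keeps index metadata; rows with TABLE_TYPE = 'i' are index
    # header rows and INDEX_STATE holds a one-character state code (active,
    # disabled, building, ...).  The exact codes are version-dependent.
    cur.execute(
        "SELECT TABLE_SCHEM, TABLE_NAME, INDEX_STATE "
        "FROM SYSTEM.CATALOG "
        "WHERE TABLE_TYPE = 'i' AND INDEX_STATE IS NOT NULL")
    for schema, name, state in cur.fetchall():
        print(schema, name, state)
    cur.close()
    conn.close()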

On Fri, Nov 2, 2018 at 5:14 PM Neelesh  wrote:

> By no means am I judging Phoenix based on this. This is simply a design
> trade-off (scylladb goes the same route and builds global indexes). I
> appreciate all the effort that has gone into Phoenix, and it was indeed a
> life saver. But the technical point remains that single node failures have
> the potential to cascade to the entire cluster. That's the nature of global
> indexes, not specific to Phoenix.
>
> I apologize if my response came off as dismissing phoenix altogether.
> FWIW, I'm a big advocate of phoenix at my org internally, albeit for the
> newer version.
>
>
> On Fri, Nov 2, 2018, 4:09 PM Josh Elser  wrote:
>
>> I would strongly disagree with the assertion that this is some
>> unavoidable problem. Yes, an inverted index is a data structure which,
>> by design, creates a hotspot (phrased another way, this is "data
>> locality").
>>
>> Lots of extremely smart individuals have spent a significant amount of
>> time and effort in stabilizing secondary indexes in the past 1-2 years,
>> not to mention others spending time on a local index implementation.
>> Judging Phoenix in its entirety based off of an arbitrarily old version
>> of Phoenix is disingenuous.
>>
>> On 11/2/18 2:00 PM, Neelesh wrote:
>> > I think this is an unavoidable problem in some sense, if global indexes
>> > are used. Essentially global indexes create a  graph of dependent
>> region
>> > servers due to index rpc calls from one RS to another. Any single
>> > failure is bound to affect the entire graph, which under reasonable
>> load
>> > becomes the entire HBase cluster. We had to drop global indexes just to
>> > keep the cluster running for more than a few days.
>> >
>> > I think Cassandra has local secondary indexes precisely because of this
>> > issue. Last I checked there were significant pending improvements
>> > required for Phoenix local indexes, especially around read paths ( not
>> > utilizing primary key prefixes in secondary index reads where possible,
>> > for example)
>> >
>> >
>> > On Thu, Sep 13, 2018, 8:12 PM Jonathan Leech wrote:
>> >
>> > This seems similar to a failure scenario I’ve seen a couple times. I
>> > believe after multiple restarts you got lucky and tables were
>> > brought up by Hbase in the correct order.
>> >
>> > What happens is some kind of semi-catastrophic failure where 1 or
>> > more region servers go down with edits that weren’t flushed, and are
>> > only in the WAL. These edits belong to regions whose tables have
>> > secondary indexes. Hbase wants to replay the WAL before bringing up
>> > the region server. Phoenix wants to talk to the index region during
>> > this, but can’t. It fails enough times then stops.
>> >
>> > The more region servers / tables / indexes affected, the more likely
>> > that a full restart will get stuck in a classic deadlock. A good
>> > old-fashioned data center outage is a great way to get started with
>> > this kind of problem. You might make some progress and get stuck
>> > again, or restart number N might get those index regions initialized
>> > before the main table.
>> >
>> > The sure fire way to recover a cluster in this condition is to
>> > strategically disable all the tables that are failing to come up.
>> > You can do this from the Hbase shell as long as the master is
>> > running. If I remember right, it’s a pain since the disable command
>> > will hang. You might need to disable a table, kill the shell,
>> > disable the next table, etc. Then restart. You’ll eventually have a
>> > cluster with all the region servers finally started, and a bunch of
>> > disabled regions. If you disabled index tables, enable one, wait for
>> > it to become available; eg its WAL edits will be replayed, then
>> > enable the associated main table and wait for it to come online. If
>

Re: ABORTING region server and following HBase cluster "crash"

2018-11-02 Thread Neelesh
By no means am I judging Phoenix based on this. This is simply a design
trade-off (scylladb goes the same route and builds global indexes). I
appreciate all the effort that has gone into Phoenix, and it was indeed a
life saver. But the technical point remains that single node failures have
the potential to cascade to the entire cluster. That's the nature of global
indexes, not specific to Phoenix.

I apologize if my response came off as dismissing phoenix altogether. FWIW,
I'm a big advocate of phoenix at my org internally, albeit for the newer
version.


On Fri, Nov 2, 2018, 4:09 PM Josh Elser  wrote:

> I would strongly disagree with the assertion that this is some
> unavoidable problem. Yes, an inverted index is a data structure which,
> by design, creates a hotspot (phrased another way, this is "data
> locality").
>
> Lots of extremely smart individuals have spent a significant amount of
> time and effort in stabilizing secondary indexes in the past 1-2 years,
> not to mention others spending time on a local index implementation.
> Judging Phoenix in its entirety based off of an arbitrarily old version
> of Phoenix is disingenuous.
>
> On 11/2/18 2:00 PM, Neelesh wrote:
> > I think this is an unavoidable problem in some sense, if global indexes
> > are used. Essentially global indexes create a  graph of dependent region
> > servers due to index rpc calls from one RS to another. Any single
> > failure is bound to affect the entire graph, which under reasonable load
> > becomes the entire HBase cluster. We had to drop global indexes just to
> > keep the cluster running for more than a few days.
> >
> > I think Cassandra has local secondary indexes precisely because of this
> > issue. Last I checked there were significant pending improvements
> > required for Phoenix local indexes, especially around read paths ( not
> > utilizing primary key prefixes in secondary index reads where possible,
> > for example)
> >
> >
> > On Thu, Sep 13, 2018, 8:12 PM Jonathan Leech wrote:
> >
> > This seems similar to a failure scenario I’ve seen a couple times. I
> > believe after multiple restarts you got lucky and tables were
> > brought up by Hbase in the correct order.
> >
> > What happens is some kind of semi-catastrophic failure where 1 or
> > more region servers go down with edits that weren’t flushed, and are
> > only in the WAL. These edits belong to regions whose tables have
> > secondary indexes. Hbase wants to replay the WAL before bringing up
> > the region server. Phoenix wants to talk to the index region during
> > this, but can’t. It fails enough times then stops.
> >
> > The more region servers / tables / indexes affected, the more likely
> > that a full restart will get stuck in a classic deadlock. A good
> > old-fashioned data center outage is a great way to get started with
> > this kind of problem. You might make some progress and get stuck
> > again, or restart number N might get those index regions initialized
> > before the main table.
> >
> > The sure fire way to recover a cluster in this condition is to
> > strategically disable all the tables that are failing to come up.
> > You can do this from the Hbase shell as long as the master is
> > running. If I remember right, it’s a pain since the disable command
> > will hang. You might need to disable a table, kill the shell,
> > disable the next table, etc. Then restart. You’ll eventually have a
> > cluster with all the region servers finally started, and a bunch of
> > disabled regions. If you disabled index tables, enable one, wait for
> > it to become available; eg its WAL edits will be replayed, then
> > enable the associated main table and wait for it to come online. If
> > Hbase did its job without error, and your failure didn’t include
> > losing 4 disks at once, order will be restored. Lather, rinse,
> > repeat until everything is enabled and online.
> >
> >  A big enough failure sprinkled with a little bit of bad luck
> > and what seems to be a Phoenix flaw == deadlock trying to get HBASE
> > to start up. Fix by forcing the order that Hbase brings regions
> > online. Finally, never go full restart. 
> >
> >  > On Sep 10, 2018, at 7:30 PM, Batyrshin Alexander
> > <0x62...@gmail.com> wrote:
> >  >
> >  > After the update, the Master web interface shows that every region
> > server is now on 1.4.7 and there are no RITs.
> >  >
> >  > The cluster recovered only after we restarted all region servers 4
> > times...
> >  >
> >  >> On 11 Sep 2018, at 04:08, Josh Elser wrote:
> >  >>
> >  >> Did you update the HBase jars on all RegionServers?
> >  >>
> >  >> Make sure that you have all of the Regions assigned (no RITs).
> > There could be a pretty simple explanation as to why the index can't
> > be written to.

Re: Python phoenixdb adapter and JSON serialization on PQS

2018-11-02 Thread Manoj Ganesan
Thanks Josh for the response!

I would definitely like to use protobuf serialization, but I'm observing
performance issues trying to run queries with a large number of results.
One problem is that I observe PQS running out of memory when it's trying to
(what looks to me like) serialize the results in Avatica. The other is that
the phoenixdb python adapter itself spends a large amount of time in the
logic where it's converting the protobuf rows to Python objects.

Interestingly, when we use sqlline-thin.py instead of python phoenixdb, the
protobuf serialization works fine and responses are fast. It's not clear to
me why PQS would have problems when using the python adapter and not when
using sqlline-thin. Do they follow different code paths (especially around
serialization)?
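
For reference, here's a minimal sketch of the kind of client loop involved
(hypothetical endpoint and table name; the adapter defaults to protobuf
serialization).  Using the standard DB-API fetchmany() at least keeps the
client from materializing the entire result set in one go:

    import phoenixdb

    # Hypothetical PQS endpoint and query.
    conn = phoenixdb.connect('http://pqs-host:8765/', autocommit=True)
    cur = conn.cursor()
    cur.execute("SELECT col1, col2 FROM some_table")
    # Pull rows in chunks via fetchmany() instead of fetchall(), so the
    # client does not build one huge list of row objects.
    while True:
        batch = cur.fetchmany(1000)
        if not batch:
            break
        for row in batch:
            pass  # process row
    cur.close()
    conn.close()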

Thanks again,
Manoj

On Fri, Nov 2, 2018 at 4:05 PM Josh Elser  wrote:

> I would strongly suggest you do not use the JSON serialization.
>
> The JSON support is implemented via Jackson which has no means to make
> backwards compatibility "easy". By contrast, protobuf makes this
> extremely easy and we have multiple examples over the past years where
> we've been able to fix bugs in a backwards compatible manner.
>
> If you want the thin client to continue to work across versions, stick
> with protobuf.
>
> On 11/2/18 5:27 PM, Manoj Ganesan wrote:
> > Hey everyone,
> >
> > I'm trying to use the Python phoenixdb adapter to work with JSON
> > serialization on PQS.
> >
> > I'm using Phoenix 4.14 and the adapter works fine with protobuf, but
> > when I try making it work with an older version of phoenixdb (before the
> > JSON to protobuf switch was introduced), it just returns 0 rows. I don't
> > see anything in particular wrong with the HTTP requests themselves, and they
> > seem to conform to the Avatica JSON spec
> > (http://calcite.apache.org/avatica/docs/json_reference.html).
> >
> > Here's the result (with some debug statements) that returns 0 rows.
> > Notice the *"firstFrame":{"offset":0,"done":true,"rows":[]* below:
> >
> > request body =  {"maxRowCount": -2, "connectionId":
> > "68c05d12-5770-47d6-b3e4-dba556db4790", "request": "prepareAndExecute",
> > "statementId": 3, "sql": "SELECT col1, col2 from table limit 20"}
> > request headers =  {'content-type': 'application/json'}
> > _post_request: got response {'fp': <... at 0x7f858330b9d0>, 'status': 200, 'will_close': False, 'chunk_left':
> > 'UNKNOWN', 'length': 1395, 'strict': 0, 'reason': 'OK', 'version': 11,
> > 'debuglevel': 0, 'msg': <... at 0x7f84fb50be18>, 'chunked': 0, '_method': 'POST'}
> > response.read(): body =
> >
> {"response":"executeResults","missingStatement":false,"rpcMetadata":{"response":"rpcMetadata","serverAddress":"ip-10-55-6-247:8765"},"results":[{"response":"resultSet","connectionId":"68c05d12-5770-47d6-b3e4-dba556db4790","statementId":3,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable
> >
> ":0,"signed":true,"displaySize":40,"label":"COL1","columnName":"COL1","schemaName":"","precision":0,"scale":0,"tableName":"TABLE","catalogName":"","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"},{"ordinal":1,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":0,"signed":true,"displaySize":40,"label":"COL2","columnName":"COL2","schemaName":"","precision":0,"scale":0,"tableName":"TABLE","catalogName":"","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"}],"sql":null,"parameters":[],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":null},*"firstFrame":{"offset":0,"done":true,"rows":[]*},"updateCount":-1,"rpcMetadata":{"response":"rpcMetadata","serverAddress":"ip-10-55-6-247:8765"}}]}
>
> >
> >
> > The same query issued against a PQS started with PROTOBUF serialization
> > and using a newer phoenixdb adapter returns the correct number of rows.
> >
> > Has anyone had luck making this work?
> >
> > Thanks,
> > Manoj
> >
>


Re: ABORTING region server and following HBase cluster "crash"

2018-11-02 Thread Josh Elser
I would strongly disagree with the assertion that this is some 
unavoidable problem. Yes, an inverted index is a data structure which, 
by design, creates a hotspot (phrased another way, this is "data locality").


Lots of extremely smart individuals have spent a significant amount of 
time and effort in stabilizing secondary indexes in the past 1-2 years, 
not to mention others spending time on a local index implementation. 
Judging Phoenix in its entirety based off of an arbitrarily old version 
of Phoenix is disingenuous.


On 11/2/18 2:00 PM, Neelesh wrote:
I think this is an unavoidable problem in some sense, if global indexes 
are used. Essentially global indexes create a  graph of dependent region 
servers due to index rpc calls from one RS to another. Any single 
failure is bound to affect the entire graph, which under reasonable load 
becomes the entire HBase cluster. We had to drop global indexes just to 
keep the cluster running for more than a few days.


I think Cassandra has local secondary indexes precisely because of this 
issue. Last I checked there were significant pending improvements 
required for Phoenix local indexes, especially around read paths ( not 
utilizing primary key prefixes in secondary index reads where possible, 
for example)



On Thu, Sep 13, 2018, 8:12 PM Jonathan Leech wrote:


This seems similar to a failure scenario I’ve seen a couple times. I
believe after multiple restarts you got lucky and tables were
brought up by Hbase in the correct order.

What happens is some kind of semi-catastrophic failure where 1 or
more region servers go down with edits that weren’t flushed, and are
only in the WAL. These edits belong to regions whose tables have
secondary indexes. Hbase wants to replay the WAL before bringing up
the region server. Phoenix wants to talk to the index region during
this, but can’t. It fails enough times then stops.

The more region servers / tables / indexes affected, the more likely
that a full restart will get stuck in a classic deadlock. A good
old-fashioned data center outage is a great way to get started with
this kind of problem. You might make some progress and get stuck
again, or restart number N might get those index regions initialized
before the main table.

The sure fire way to recover a cluster in this condition is to
strategically disable all the tables that are failing to come up.
You can do this from the Hbase shell as long as the master is
running. If I remember right, it’s a pain since the disable command
will hang. You might need to disable a table, kill the shell,
disable the next table, etc. Then restart. You’ll eventually have a
cluster with all the region servers finally started, and a bunch of
disabled regions. If you disabled index tables, enable one, wait for
it to become available; eg its WAL edits will be replayed, then
enable the associated main table and wait for it to come online. If
Hbase did its job without error, and your failure didn’t include
losing 4 disks at once, order will be restored. Lather, rinse,
repeat until everything is enabled and online.
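
A rough scripted sketch of that disable loop (hypothetical table names; it
assumes piping commands into hbase shell on stdin is acceptable in your
environment, with the timeout standing in for manually killing a hung shell):

    import subprocess

    # Hypothetical list of tables that are stuck failing to come up.
    stuck_tables = ["MY_TABLE", "MY_TABLE_IDX"]

    for table in stuck_tables:
        # Run one disable per shell invocation; if it hangs, kill that shell
        # after a timeout and move on to the next table.
        proc = subprocess.Popen(["hbase", "shell"],
                                stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        try:
            out, _ = proc.communicate(
                input=("disable '%s'\n" % table).encode(), timeout=120)
            print(out.decode())
        except subprocess.TimeoutExpired:
            proc.kill()
            proc.communicate()
            print("disable of %s timed out; shell killed, continuing" % table)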

 A big enough failure sprinkled with a little bit of bad luck
and what seems to be a Phoenix flaw == deadlock trying to get HBASE
to start up. Fix by forcing the order that Hbase brings regions
online. Finally, never go full restart. 

 > On Sep 10, 2018, at 7:30 PM, Batyrshin Alexander
<0x62...@gmail.com> wrote:
 >
 > After the update, the Master web interface shows that every region
server is now on 1.4.7 and there are no RITs.
 >
 > The cluster recovered only after we restarted all region servers 4 times...
 >
 >> On 11 Sep 2018, at 04:08, Josh Elser <els...@apache.org> wrote:
 >>
 >> Did you update the HBase jars on all RegionServers?
 >>
 >> Make sure that you have all of the Regions assigned (no RITs).
There could be a pretty simple explanation as to why the index can't
be written to.
 >>
 >>> On 9/9/18 3:46 PM, Batyrshin Alexander wrote:
 >>> Correct me if I'm wrong.
 >>> But it looks like if you have region servers A and B that host index
and primary table regions, then a situation like this is possible.
 >>> A and B are under writes on a table with indexes
 >>> A - crashes
 >>> B fails on an index update because A is not operating, then B
starts aborting
 >>> A after restart tries to rebuild the index from the WAL, but B at this
time is aborting, so A starts aborting too
 >>> From this moment nothing happens (0 requests to region servers)
and A and B are not responsive in the Master-status web interface
  On 9 Sep 2018, at 04:38, Batyrshin Alexander
 <0x62...@gmail.com> wrote:
 
  After update we still can't recover HBase c

Re: Python phoenixdb adapter and JSON serialization on PQS

2018-11-02 Thread Josh Elser

I would strongly suggest you do not use the JSON serialization.

The JSON support is implemented via Jackson which has no means to make 
backwards compatibility "easy". By contrast, protobuf makes this 
extremely easy and we have multiple examples over the past years where 
we've been able to fix bugs in a backwards compatible manner.


If you want the thin client to continue to work across versions, stick 
with protobuf.


On 11/2/18 5:27 PM, Manoj Ganesan wrote:

Hey everyone,

I'm trying to use the Python phoenixdb adapter to work with JSON 
serialization on PQS.


I'm using Phoenix 4.14 and the adapter works fine with protobuf, but 
when I try making it work with an older version of phoenixdb (before the 
JSON to protobuf switch was introduced), it just returns 0 rows. I don't 
see anything in particular wrong with the HTTP requests themselves, and they 
seem to conform to the Avatica JSON spec 
(http://calcite.apache.org/avatica/docs/json_reference.html).


Here's the result (with some debug statements) that returns 0 rows. 
Notice the *"firstFrame":{"offset":0,"done":true,"rows":[]* below:


request body =  {"maxRowCount": -2, "connectionId": 
"68c05d12-5770-47d6-b3e4-dba556db4790", "request": "prepareAndExecute", 
"statementId": 3, "sql": "SELECT col1, col2 from table limit 20"}

request headers =  {'content-type': 'application/json'}
_post_request: got response {'fp': <... at 0x7f858330b9d0>, 'status': 200, 'will_close': False, 'chunk_left':
'UNKNOWN', 'length': 1395, 'strict': 0, 'reason': 'OK', 'version': 11,
'debuglevel': 0, 'msg': <... at 0x7f84fb50be18>, 'chunked': 0, '_method': 'POST'}
response.read(): body =  
{"response":"executeResults","missingStatement":false,"rpcMetadata":{"response":"rpcMetadata","serverAddress":"ip-10-55-6-247:8765"},"results":[{"response":"resultSet","connectionId":"68c05d12-5770-47d6-b3e4-dba556db4790","statementId":3,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable
":0,"signed":true,"displaySize":40,"label":"COL1","columnName":"COL1","schemaName":"","precision":0,"scale":0,"tableName":"TABLE","catalogName":"","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"},{"ordinal":1,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":0,"signed":true,"displaySize":40,"label":"COL2","columnName":"COL2","schemaName":"","precision":0,"scale":0,"tableName":"TABLE","catalogName":"","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"}],"sql":null,"parameters":[],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":null},*"firstFrame":{"offset":0,"done":true,"rows":[]*},"updateCount":-1,"rpcMetadata":{"response":"rpcMetadata","serverAddress":"ip-10-55-6-247:8765"}}]} 



The same query issued against a PQS started with PROTOBUF serialization 
and using a newer phoenixdb adapter returns the correct number of rows.


Has anyone had luck making this work?

Thanks,
Manoj



Python phoenixdb adapter and JSON serialization on PQS

2018-11-02 Thread Manoj Ganesan
Hey everyone,

I'm trying to use the Python phoenixdb adapter to work with JSON serialization
on PQS.

I'm using Phoenix 4.14 and the adapter works fine with protobuf, but when I
try making it work with an older version of phoenixdb (before the JSON to
protobuf switch was introduced), it just returns 0 rows. I don't see
anything in particular wrong with the HTTP requests themselves, and they seem
to conform to the Avatica JSON spec (
http://calcite.apache.org/avatica/docs/json_reference.html).

Here's the result (with some debug statements) that returns 0 rows.
Notice the *"firstFrame":{"offset":0,"done":true,"rows":[]* below:

request body =  {"maxRowCount": -2, "connectionId":
"68c05d12-5770-47d6-b3e4-dba556db4790", "request": "prepareAndExecute",
"statementId": 3, "sql": "SELECT col1, col2 from table limit 20"}
request headers =  {'content-type': 'application/json'}
_post_request: got response {'fp': , 'status': 200, 'will_close': False, 'chunk_left':
'UNKNOWN', 'length': 1395, 'strict': 0, 'reason': 'OK', 'version': 11,
'debuglevel': 0, 'msg': ,
'chunked': 0, '_method': 'POST'}
response.read(): body =
{"response":"executeResults","missingStatement":false,"rpcMetadata":{"response":"rpcMetadata","serverAddress":"ip-10-55-6-247:8765"},"results":[{"response":"resultSet","connectionId":"68c05d12-5770-47d6-b3e4-dba556db4790","statementId":3,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable
":0,"signed":true,"displaySize":40,"label":"COL1","columnName":"COL1","schemaName":"","precision":0,"scale":0,"tableName":"TABLE","catalogName":"","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"},{"ordinal":1,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":0,"signed":true,"displaySize":40,"label":"COL2","columnName":"COL2","schemaName":"","precision":0,"scale":0,"tableName":"TABLE","catalogName":"","type":{"type":"scalar","id":4,"name":"INTEGER","rep":"PRIMITIVE_INT"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Integer"}],"sql":null,"parameters":[],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":null},
*"firstFrame":{"offset":0,"done":true,"rows":[]*
},"updateCount":-1,"rpcMetadata":{"response":"rpcMetadata","serverAddress":"ip-10-55-6-247:8765"}}]}

The same query issued against a PQS started with PROTOBUF serialization and
using a newer phoenixdb adapter returns the correct number of rows.

Has anyone had luck making this work?

Thanks,
Manoj


Re: ABORTING region server and following HBase cluster "crash"

2018-11-02 Thread Neelesh
I think this is an unavoidable problem in some sense, if global indexes are
used. Essentially global indexes create a  graph of dependent region
servers due to index rpc calls from one RS to another. Any single failure
is bound to affect the entire graph, which under reasonable load becomes
the entire HBase cluster. We had to drop global indexes just to keep the
cluster running for more than a few days.

I think Cassandra has local secondary indexes precisely because of this
issue. Last I checked there were significant pending improvements required
for Phoenix local indexes, especially around read paths ( not utilizing
primary key prefixes in secondary index reads where possible, for example)


On Thu, Sep 13, 2018, 8:12 PM Jonathan Leech  wrote:

> This seems similar to a failure scenario I’ve seen a couple times. I
> believe after multiple restarts you got lucky and tables were brought up by
> Hbase in the correct order.
>
> What happens is some kind of semi-catastrophic failure where 1 or more
> region servers go down with edits that weren’t flushed, and are only in the
> WAL. These edits belong to regions whose tables have secondary indexes.
> Hbase wants to replay the WAL before bringing up the region server. Phoenix
> wants to talk to the index region during this, but can’t. It fails enough
> times then stops.
>
> The more region servers / tables / indexes affected, the more likely that
> a full restart will get stuck in a classic deadlock. A good old-fashioned
> data center outage is a great way to get started with this kind of problem.
> You might make some progress and get stuck again, or restart number N might
> get those index regions initialized before the main table.
>
> The sure fire way to recover a cluster in this condition is to
> strategically disable all the tables that are failing to come up. You can
> do this from the Hbase shell as long as the master is running. If I
> remember right, it’s a pain since the disable command will hang. You might
> need to disable a table, kill the shell, disable the next table, etc. Then
> restart. You’ll eventually have a cluster with all the region servers
> finally started, and a bunch of disabled regions. If you disabled index
> tables, enable one, wait for it to become available; eg its WAL edits will
> be replayed, then enable the associated main table and wait for it to come
> online. If Hbase did its job without error, and your failure didn’t
> include losing 4 disks at once, order will be restored. Lather, rinse,
> repeat until everything is enabled and online.
>
>  A big enough failure sprinkled with a little bit of bad luck and
> what seems to be a Phoenix flaw == deadlock trying to get HBASE to start
> up. Fix by forcing the order that Hbase brings regions online. Finally,
> never go full restart. 
>
> > On Sep 10, 2018, at 7:30 PM, Batyrshin Alexander <0x62...@gmail.com>
> wrote:
> >
> > After the update, the Master web interface shows that every region server is now
> on 1.4.7 and there are no RITs.
> >
> > The cluster recovered only after we restarted all region servers 4 times...
> >
> >> On 11 Sep 2018, at 04:08, Josh Elser  wrote:
> >>
> >> Did you update the HBase jars on all RegionServers?
> >>
> >> Make sure that you have all of the Regions assigned (no RITs). There
> could be a pretty simple explanation as to why the index can't be written
> to.
> >>
> >>> On 9/9/18 3:46 PM, Batyrshin Alexander wrote:
> >>> Correct me if I'm wrong.
> >>> But it looks like if you have region servers A and B that host index and
> primary table regions, then a situation like this is possible.
> >>> A and B are under writes on a table with indexes
> >>> A - crashes
> >>> B fails on an index update because A is not operating, then B starts
> aborting
> >>> A after restart tries to rebuild the index from the WAL, but B at this time is
> aborting, so A starts aborting too
> >>> From this moment nothing happens (0 requests to region servers) and A
> and B are not responsive in the Master-status web interface
>  On 9 Sep 2018, at 04:38, Batyrshin Alexander <0x62...@gmail.com
> > wrote:
> 
>  After the update we still can't recover the HBase cluster. Our region servers
> are ABORTING over and over:
> 
>  prod003:
>  Sep 09 02:51:27 prod003 hbase[1440]: 2018-09-09 02:51:27,395 FATAL
> [RpcServer.default.FPBQ.Fifo.handler=92,queue=2,port=60020]
> regionserver.HRegionServer: ABORTING region server
> prod003,60020,1536446665703: Could not update the index table, killing
> server region because couldn't write to an index table
>  Sep 09 02:51:27 prod003 hbase[1440]: 2018-09-09 02:51:27,395 FATAL
> [RpcServer.default.FPBQ.Fifo.handler=77,queue=7,port=60020]
> regionserver.HRegionServer: ABORTING region server
> prod003,60020,1536446665703: Could not update the index table, killing
> server region because couldn't write to an index table
>  Sep 09 02:52:19 prod003 hbase[1440]: 2018-09-09 02:52:19,224 FATAL
> [RpcServer.default.FPBQ.Fifo.handler=82,queue=2,port=60020]
> regionserver.HReg

Re: Phoenix Performances & Uses Cases

2018-11-02 Thread Neelesh
Another observation with Phoenix global indexes - at very large volumes of
writes, a single region server failure cascades to the entire cluster very
quickly

On Sat, Oct 27, 2018, 4:50 AM Nicolas Paris 
wrote:

> Hi
>
> I am benchmarking Phoenix to better understand its strengths and
> weaknesses. My baseline is to compare against postgresql for the OLTP workload and
> hive llap for the OLAP workload. I am testing on a 10-computer cluster
> with hive (2.1) and phoenix (4.8), 220 GB RAM / 32 CPUs, versus a
> postgresql (9.6) with 128 GB RAM and 32 CPUs.
>
> Right now, my opinion is:
> - when getting a subset of a large table, phoenix performs the
>   best
> - when getting a subset from multiple large tables, postgres performs
>   the best
> - when getting a subset from a large table joined with one or many small
>   tables, phoenix performs the best
> - when ingesting high-frequency data, phoenix performs the best
> - for group-by queries, hive > postgresql > phoenix
> - when windowing, transforming, grouping, hive performs the best,
>   phoenix the worst
>
> Finally, my conclusion is that phoenix is not intended at all for analytics
> queries such as grouping, windowing, and joining large tables. It suits
> very specific use cases well, like maintaining a very large table, possibly
> with small tables to join against (such as time-series data, or binary
> storage data with hbase MOB enabled).
>
> Am I missing something ?
>
> Thanks,
>
> --
> nicolas
>