Got it, thanks.
> On 10/22/19 5:08 PM, jesse wrote:
> > It is properly restored, we double checked.
> >
> > We worked around the issue by restarting the query server.
> >
> > But it seems a bad bug.
> >
> >
> >
> >
> >
> > On Tue, …
…sequence in the restored table?
>
> On Fri, Oct 4, 2019 at 1:52 PM jesse wrote:
>
Let's say there is a running cluster A, with table:books and
system.sequence current value 5000, cache size 100, incremental is 1, the
latest book with sequence id:4800
Now the cluster A snapshot is backed up & restored into cluster b,
system.sequence and books table are properly restored, when
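For reference, the setup described above can be sketched in Phoenix SQL (table, column, and sequence names here are hypothetical):

```sql
-- Hypothetical names; sketch of the setup described in the thread.
CREATE SEQUENCE book_id_seq START WITH 1 INCREMENT BY 1 CACHE 100;

UPSERT INTO books (id, title)
    VALUES (NEXT VALUE FOR book_id_seq, 'some title');

-- Because of CACHE 100, each client reserves a block of 100 values, so
-- SYSTEM."SEQUENCE" can show a current value of 5000 while the highest
-- id actually written to books is only 4800. A snapshot taken at that
-- point restores both numbers as-is.
```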
Josh, in your sample project pom.xml file, the following build dependency
is not needed:

    <dependency>
      <groupId>org.apache.phoenix</groupId>
      <artifactId>phoenix-server-client</artifactId>
      <version>4.7.0-HBase-1.1-SNAPSHOT</version>
    </dependency>
On Thu, Sep 19, 2019, 10:53 AM jesse wrote:
> A) phoenix-4.14.2-HBase-1.4-thin-client.jar
>
> Just A) is good enough, Josh
On Thu, Sep 19, 2019, 8:54 AM jesse wrote:
> You confused me more. If I write a Java program with an HTTP endpoint to PQS
> for Phoenix read/write functions, should I depend on
>
> A) phoenix-4.14.2-HBase-1.4-thin-client.jar
>
> B) phoenix-queryserver-client-4.14.2-HBase-1.4.jar
>
>
a shaded jar is created, with the
> human-readable name "thin-client" to make it very clear to you that this
> is the jar to use.
>
> The Maven build shows how all of this works.
>
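A minimal sketch of what using the shaded thin-client jar looks like from Java. The host name here is made up, and 8765 is the usual PQS default port; treat both as assumptions:

```java
// Sketch only: assumes a PQS instance at pqs.example.com:8765 and the
// shaded phoenix-*-thin-client.jar on the classpath.
public class ThinClientExample {
    // Builds the Avatica-based thin-client JDBC URL; the serialization
    // setting must match the server (PROTOBUF is the usual default).
    static String thinUrl(String host, int port) {
        return "jdbc:phoenix:thin:url=http://" + host + ":" + port
                + ";serialization=PROTOBUF";
    }

    public static void main(String[] args) {
        String url = thinUrl("pqs.example.com", 8765);
        System.out.println(url);
        // With the jar on the classpath you would then open a connection:
        // try (Connection conn = DriverManager.getConnection(url)) { ... }
    }
}
```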
> On 9/18/19 8:04 PM, jesse wrote:
It seems it is just the sqlline wrapper client, such a confusing name...
On Wed, Sep 18, 2019, 4:46 PM jesse wrote:
> For query via PQS, we are using phoenix-4.14.2-HBase-1.4-thin-client.jar
>
> Then what is the purpose and usage
> of phoenix-queryserver-client-4.14.2-HBase-1.4.jar?
>
> Thanks
>
…out of sync
>
> Below is the SQL to update table stats
> UPDATE STATISTICS <table_name>;
> By default the above executes asynchronously, hence it may take some time
> to complete depending on table size
>
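As a reference point, the statement above in full Phoenix syntax, plus a way to check when stats were last rebuilt. The SYSTEM.STATS column names here are recalled from memory and may differ by version:

```sql
-- Collection runs asynchronously even though the statement returns.
UPDATE STATISTICS my_table ALL;

-- Column names are from memory; verify against your Phoenix version:
SELECT PHYSICAL_NAME, LAST_STATS_UPDATE_TIME
FROM SYSTEM.STATS
WHERE PHYSICAL_NAME = 'MY_TABLE';
```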
> On Tue 20 Aug, 2019, 6:34 AM jesse, wrote:
>
And the table is simple and has no index set up.
On Mon, Aug 19, 2019, 6:03 PM jesse wrote:
> We ran into some trouble; maybe someone could shed some light on this.
>
> Table has primary key c1, c2 and c3.
> Table is set with SALT_BUCKETS=12. Now it has 14 regions.
>
> The table ha
On Mon, Aug 19, 2019, 5:33 PM James Taylor wrote:
> It’ll start with 12 regions, but those regions may split as they’re
> written to.
>
> On Mon, Aug 19, 2019 at 4:34 PM jesse wrote:
>
>> I have a table with SALT_BUCKETS = 12, but it has 14 regions. Is this
>> right?
>>
>> Thanks
>>
>>
>>
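As a reference point, a salted table like the one being discussed would be declared along these lines (table and column types are made up for illustration):

```sql
-- Hypothetical schema illustrating the thread above.
CREATE TABLE my_salted_table (
    c1 VARCHAR NOT NULL,
    c2 VARCHAR NOT NULL,
    c3 VARCHAR NOT NULL,
    v1 BIGINT
    CONSTRAINT pk PRIMARY KEY (c1, c2, c3)
) SALT_BUCKETS = 12;

-- SALT_BUCKETS = 12 pre-splits the table into 12 regions at creation
-- time; HBase is still free to split hot regions afterwards, which is
-- how a 12-bucket table can end up with 14 regions.
```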
On Wed, Aug 7, 2019 at 9:14 PM jesse wrote:
>
>> Thank you all, very helpful information.
>>
>> 1) for server side ELB, what is the PQS health check url path?
>>
>> 2) Does the Phoenix client library support client-side load-balancing, i.e.
>> the client gets a list of PQ
er is using ZK quorum to get everything it needs
> > - If you need to balance traffic across multiple PQSs - then yes, but
> > again - it's up to you. Multiple PQSs are not required just because you
> > have multiple HBase masters.
> >
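Since the thin client itself does not balance across servers, the client-side option discussed above amounts to picking a PQS endpoint per request. A minimal sketch; the host names and the 8765 port are assumptions for illustration:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: naive client-side round-robin over several PQS endpoints.
public class PqsRoundRobin {
    private final List<String> hosts;
    private final AtomicInteger next = new AtomicInteger();

    PqsRoundRobin(List<String> hosts) { this.hosts = hosts; }

    // Returns the thin-client JDBC URL for the next endpoint in rotation.
    String nextUrl() {
        String host = hosts.get(Math.floorMod(next.getAndIncrement(), hosts.size()));
        return "jdbc:phoenix:thin:url=http://" + host + ":8765;serialization=PROTOBUF";
    }

    public static void main(String[] args) {
        PqsRoundRobin rr = new PqsRoundRobin(List.of("pqs1.example.com", "pqs2.example.com"));
        System.out.println(rr.nextUrl());
        System.out.println(rr.nextUrl());
        System.out.println(rr.nextUrl()); // wraps back to the first host
    }
}
```

In production you would more likely put the PQS fleet behind an ELB, but this shows the shape of the client-side alternative.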
> > On Wed, Aug 7, 2019 at 12:58 AM jesse wrote:
Our cluster used to have one HBase master; now a secondary has been added. For
Phoenix, what changes should we make?
- do we have to install new hbase libraries on the new hbase master node?
- do we need to install new query server on the hbase master?
- any configuration changes should we make?
- do
> Conclusion: With the current status of Phoenix, I would never use it again.
>
>
>
> Regards
>
> Martin
>
>
>
>
>
>
>
> *From:* jesse [mailto:chat2je...@gmail.com]
> *Sent:* Saturday, June 22, 2019 8:04 PM
> *To:* user@phoenix.apache.org
I stumbled on this post:
https://medium.com/@vkrava4/fighting-with-apache-phoenix-secondary-indexes-163495bcb361
and the bug:
https://issues.apache.org/jira/browse/PHOENIX-5287
I had a similarly frustrating experience with Phoenix. In addition to
various performance issues, you can
It seems the writes take a long time and the system substantially slows down
under load.
However, the official HBase doc mentions the soft limit is 32 MB.
…SYSTEM.STATS (as there are safeguards to prevent re-creating
> statistics too frequently).
>
> There have been some bugs in the past that resulted from invalid stats
> guideposts.
>
> On 6/19/19 3:25 PM, jesse wrote:
1) hbase clone-snapshot into my_table
2) sqlline.py zk:port console to create my_table.
Very straightforward.
On Wed, Jun 19, 2019, 11:40 AM anil gupta wrote:
> Sounds strange.
> What steps did you follow to restore the snapshot of the Phoenix table?
>
> On Tue, Jun 18, 2019 at 9:34 PM jesse wrote:
hi:
When my table is restored via hbase clone-snapshot,
1) sqlline.py console shows the proper number of records: select count(*)
from my_table.
2) select my_column from my_table limit 1 works fine.
However, select * from my_table limit 1; returns no row.
Do I need to perform some extra steps?
Just have to make sure you don't have a schema change during snapshots.
On Fri, Feb 12, 2016 at 6:24 PM Gaurav Agarwal
wrote:
> We can take or restore a snapshot of the Phoenix tables from HBase.
> HBase also provides an export/import feature.
>
> Thanks
> On Feb
Yes, lots of people do, including folks at Salesforce. You need to set up your
own query tuning infrastructure to make sure it runs OK.
On Tue, Jan 26, 2016, 8:26 AM John Lilley wrote:
> Does anyone use Phoenix on standalone HBase in production? Is it
> advisable?
>
>
>
>
I think he means it's not a terribly expensive process - it is basically
just a fancy query proxy. If you are running a cluster any larger than 3
nodes you should seriously consider running at least a second or third
HMaster. When they are in standby mode they don't do very much - just watch
ZK for
Great post, awesome to see the optimization going in.
Would be cool to see if we could roll in some of the stuff talked about at
the last meetup too :)
On Sun, Nov 8, 2015, 11:27 AM James Taylor wrote:
> Thanks, Juan. I fixed the typo.
>
> On Sun, Nov 8, 2015 at 11:21
So HBase (and by extension, Phoenix) does not do true "streaming" of rows -
rows are copied into memory from the HFiles and then eventually copied
en masse onto the wire. On the client they are pulled off in chunks and
paged through by the client scanner. You can control the batch size (amount
of
Correct. So you have to make sure that you have enough memory to handle the
fetchSize * concurrent requests.
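A back-of-the-envelope version of that sizing rule. All three inputs below are made-up illustrative numbers, not measurements:

```java
// Rough estimate of buffer pressure from fetch batching: memory scales
// with fetchSize * concurrent requests * average row size.
public class FetchSizing {
    public static void main(String[] args) {
        int fetchSize = 1000;      // rows per fetch (stmt.setFetchSize)
        int concurrent = 50;       // simultaneous client requests
        long avgRowBytes = 2048;   // assumed average serialized row size
        long peakBytes = (long) fetchSize * concurrent * avgRowBytes;
        System.out.println(peakBytes); // bytes of row data potentially in flight
    }
}
```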
On Tue, Oct 6, 2015 at 10:34 AM Sumit Nigam <sumit_o...@yahoo.com> wrote:
> Thanks Samarth and Jesse.
>
> So, in effect setting the batch size (say, stmt.setFetch
It looks like you are using two different metrics files on the classpath of
the server. You can only have one (quirk of Hadoop's metrics2 system). The
configurations for the phoenix sinks should be in the
hadoop-metrics2-hbase.properties
file since HBase will load the metrics system before the
And it looks like you already figured that out :)
On Tue, Jan 6, 2015, 9:43 AM Jesse Yates jesse.k.ya...@gmail.com wrote:
You wouldn't even need another table, just a single VARCHAR[] column to
keep the column names. It's ideal to keep it in the same row (possibly in
another cf if you expect
You absolutely can use snapshots with phoenix.
You would need to snapshot both the phoenix metadata table and the table
you want to snapshot.
Then on restore, you restore both those tables to new tables, point phoenix
there and get the data you need.
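In hbase shell terms, the two-snapshot approach above looks roughly like this (snapshot and table names are made up for illustration):

```
# hbase shell (sketch; snapshot and table names are hypothetical)
snapshot 'SYSTEM.CATALOG', 'catalog_snap'
snapshot 'MY_TABLE', 'my_table_snap'

# on restore, clone both to new tables and point Phoenix at them:
clone_snapshot 'catalog_snap', 'SYSTEM.CATALOG_RESTORED'
clone_snapshot 'my_table_snap', 'MY_TABLE_RESTORED'
```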
Missing pieces:
1) I'm not sure there is a
it!
---
Jesse Yates
@jesse_yates
jyates.github.com
On Mon, Sep 15, 2014 at 11:57 AM, Krishna research...@gmail.com wrote:
Hi, Is anyone aware of Phoenix meetups coming up in the next couple of
months in Bay Area?
Thanks
It looks like the connection string that the tracing module is using isn't
configured correctly. Is 2181 the client port on which you are running
zookeeper?
@James Taylor - Phoenix can connect to multiple ZK nodes this way, right?
---
Jesse Yates
@jesse_yates
jyates.github.com
it in
QueryUtil, it gets:
server=rs1.example.com:,rs2.example.com:,rs3.example.com:,
rs4.example.com:,rs5.example.com:
port=
Which will not create a correct connection.
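The malformed server list above is what you get when ports are embedded in the quorum string. A sketch of the intended construction (QueryUtil-style; host names are hypothetical):

```java
// Sketch: building the "host:port,host:port" form the connect string
// should take. If hbase.zookeeper.quorum already contains ":port"
// suffixes, you get the broken "host:,host:," form quoted above, so the
// quorum should list bare host names with the port kept separate.
public class ZkConnect {
    static String connectString(String quorum, int clientPort) {
        StringBuilder sb = new StringBuilder();
        for (String host : quorum.split(",")) {
            if (sb.length() > 0) sb.append(',');
            sb.append(host.trim()).append(':').append(clientPort);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(connectString("rs1.example.com,rs2.example.com", 2181));
    }
}
```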
---
Jesse Yates
@jesse_yates
jyates.github.com
On Wed, Sep 3, 2014 at 1:22 PM, Jeffrey
(really, just a conversion of a span to a Hadoop
metrics2 metric), it will create the table as needed.
Hope that helps!
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Aug 26, 2014 at 7:21 PM, Dan Di Spaltro dan.dispal...@gmail.com
wrote:
I've used the concept of tracing
, means you should probably file a
jira.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Aug 19, 2014 at 11:36 AM, Russell Jurney russell.jur...@gmail.com
wrote:
That's really bad. That means... CDH 5.x can't run Phoenix? How can this be
fixed? I'm not sure what to do. We're
imagine this is also what various distributors are doing for their forks
as well.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Aug 19, 2014 at 3:36 PM, Russell Jurney russell.jur...@gmail.com
wrote:
First of all, I apologize if you feel like I was picking on you. I
/zookeeper/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/zookeeper/lib/*:
This is the result I got for the hbase classpath command... and
/opt/cloudera/parcels/CDH/lib/hbase/lib/ is the path where I executed the code...
On Mon, Aug 11, 2014 at 9:29 PM, Jesse Yates jesse.k.ya
:03 AM, Saravanan A asarava...@alphaworkz.com wrote:
Hi Jesse,
I ran the following code to test the existence of the classes you asked me
to check. I initialized the two constants to the following values.
===
public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME
Setup - Removing Index Deadlocks (0.98.4+)). However, it should still be
fine to have in older versions.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Fri, Aug 8, 2014 at 2:18 AM, Saravanan A asarava...@alphaworkz.com
wrote:
This is my Hbase-site.xml file...
<?xml version