>> 3800 seconds was just for the executeQuery()
>>
>> I've seen it as high as 5900

From: Samarth Jain [samarth.j...@gmail.com]
Sent: Thursday, December 10, 2015 2:31 PM
To: user@phoenix.apache.org
Subject: Re: Help tuning for bursts of high traffic?

Thanks for the additional information, Zack. Looking at the numbers, it
looks like …
From: […@apache.org]
Sent: Wednesday, December 09, 2015 1:59 PM
To: user@phoenix.apache.org
Subject: Re: Help tuning for bursts of high traffic?

Zack,

These stats are collected continuously and at the global client level. So
collecting them only when the query ta…
> …io to make it faster?
>
> Right now, I’m only averaging about 5 queries/second, even though I’m
> querying by the primary key.
>
> Before I upgraded, I was getting a lot closer to 100.
>
> Thanks!
>
> *From:* Samarth Jain [
> …ER: 0, Number of samples: 0 | 0
>
> Number of times query failed | QUERY_FAILED_COUNTER: 0, Number of samples: 0 | 0
>
> Number of spool files created | SPOOL_FILE_COUNTER: 0, Number of samples: 0 | 0
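The metric dump pasted above has a fixed line shape: a description, a metric name and value, a sample count, and a running total, separated by `|` and commas. A small helper can turn each line into a dict for monitoring scripts. This is a sketch that parses the format exactly as it appears in this thread; the layout is inferred from the pasted output, not from any stable Phoenix API.

```python
def parse_metric_line(line):
    """Split '<description> | <NAME>: <v>, Number of samples: <n> | <total>'."""
    description, rest = line.split(" | ", 1)
    name_value, rest = rest.split(", Number of samples: ", 1)
    name, value = name_value.split(": ", 1)
    samples, total = rest.rsplit(" | ", 1)
    return {
        "description": description,
        "name": name,
        "value": int(value),
        "samples": int(samples),
        "total": int(total),
    }

line = ("Number of times query failed | QUERY_FAILED_COUNTER: 0, "
        "Number of samples: 0 | 0")
metric = parse_metric_line(line)
print(metric["name"], metric["value"])
```

With all counters at zero, as in the dump above, the output itself suggests the client metrics were not being captured for the slow window.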
>
>
>
> *From:* James Taylor [mailto:jamestay...@apache.org]
> …y if I run several instances in parallel. However, this is when
> I start to encounter the thread limit issue. So I’ll continue to experiment
> with this and I appreciate any feedback or recommendations this community
> can provide.
>
> Thanks!
>
> *From:*
> …ould be able to handle thousands of threads.
>
> Any ideas?

*From:* Andrew Purtell [mailto:andrew.purt...@gmail.com]
*Sent:* Friday, December 04, 2015 4:24 PM
*To:* user@phoenix.apache.org
*Cc:* Haisty, Geoffrey
*Subject:* Re: Help tuning for bursts of high traffic?

Any chance of stack dumps from the debug servlet? Impossible to get anywhere
with 'pegged the CPU' otherwise. Thanks.
>> …minutes to
>> execute (I’m guessing from the pattern that it’s not actually the query
>> that is slow, but a very long delay between when it gets queued and when it
>> actually gets executed).
>>
>> Oh and the methods you mentioned aren’t in my version of Phoenix…
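The symptom described here (total latency dominated by time between queueing and execution, not by the scan itself) is the classic shape of a saturated bounded thread pool, such as the Phoenix client's query executor. The toy sketch below reproduces that shape with a plain Python thread pool: a small number of workers, more tasks than workers, and per-task measurement of queue wait versus total elapsed time. The pool sizes and sleep durations are illustrative only, not Phoenix defaults.

```python
import time
from concurrent.futures import ThreadPoolExecutor

WORK_S = 0.2  # simulated per-scan work


def scan(submitted_at):
    started_at = time.monotonic()
    queue_wait = started_at - submitted_at  # time spent waiting for a free worker
    time.sleep(WORK_S)                      # the actual "work"
    return queue_wait, time.monotonic() - submitted_at


# 2 workers, 6 tasks: the later tasks spend most of their latency queued,
# not executing -- the same pattern as a slow-looking executeQuery() under load.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(scan, time.monotonic()) for _ in range(6)]
    results = [f.result() for f in futures]

for i, (wait, total) in enumerate(results):
    print(f"task {i}: queue wait {wait:.2f}s, total {total:.2f}s")
```

Under bursty traffic the same effect shows up when submissions outpace the pool: each task's work time stays constant while its measured latency grows with queue depth, which is consistent with "3800 seconds just for executeQuery()".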
Thanks for any further feedback you can provide on this. Hopefully the
conversation is helpful to the whole Phoenix community.

From: Riesland, Zack
Sent: Friday, December 04, 2015 1:36 PM
To: user@phoenix.apache.org
Cc: geoff.hai...@sensus.com
Subject: RE: Help tuning for bursts of high traffic?

Thanks, James

I'll work on gathering…
Subject: Re: Help tuning for bursts of high traffic?
Zack,
Thanks for reporting this and for the detailed description. Here's a bunch of
questions and some things you can try in addition to what Andrew suggested:
1) Is this reproducible in a test environment (perhaps through Pherf:
https://phoenix.…)
> …I’m looking up the history of each widget, which returns
> hundreds-to-thousands of results per widget (per query).
>
> Each query is a range scan, it’s just that I’m performing thousands of
> them.

From: Satish Iyengar [mailto:sat...@gmail.com]
Sent: Friday, December 04, 2015 9:43 AM
To: user@phoenix.apache.org
Subject: Re: Help tuning for bursts of high traffic?

Hi Zack,

Did you consider avoiding hitting hbase for every single row by doing that
step in an offline mode? I was thinking if you could have some kind of
daily export of the hbase table and then use pig to perform the join
(co-group perhaps) to do the same. Obviously this would work only when your
hbase…
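Short of going fully offline as Satish suggests, another way to reduce the cost of "thousands of individual range scans" is to batch many per-widget lookups into far fewer statements, e.g. with an IN-list over the leading primary-key column. The sketch below only builds parameterized SQL strings; the table and column names (`WIDGET_HISTORY`, `WIDGET_ID`) and the batch size are hypothetical, and the statements would be executed through whatever JDBC/phoenixdb connection the application already uses.

```python
def batched_history_queries(widget_ids, batch_size=500):
    """Yield (sql, params) pairs: one SELECT per batch instead of one per widget.

    WIDGET_HISTORY / WIDGET_ID are hypothetical names; '?' placeholders keep
    the statement safe to hand to a JDBC-style driver.
    """
    for start in range(0, len(widget_ids), batch_size):
        batch = widget_ids[start:start + batch_size]
        placeholders = ", ".join("?" for _ in batch)
        sql = ("SELECT * FROM WIDGET_HISTORY "
               f"WHERE WIDGET_ID IN ({placeholders})")
        yield sql, batch

ids = [f"w{i}" for i in range(1200)]
statements = list(batched_history_queries(ids))
print(len(statements))  # 3 batched statements instead of 1200 single lookups
```

Fewer round trips means fewer tasks queued on the client thread pool at once, which directly targets the queue-wait symptom discussed earlier in the thread.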
SHORT EXPLANATION: a much higher percentage of queries to Phoenix return
exceptionally slowly after querying very heavily for several minutes.

LONGER EXPLANATION:

I've been using Phoenix for about a year as a data store for web-based
reporting tools and it works well.

Now, I'm trying to use the…