Thank you James. I have filed https://issues.apache.org/jira/browse/PHOENIX-2514
Best regards,
Sumit
From: James Taylor
To: "user@phoenix.apache.org" ; Sumit Nigam
Sent: Friday, December 11, 2015 9:06 AM
Subject: Re: Help with LIMIT clause
Thanks - most helpful would be a complete test case that reproduces it.
It would also be helpful if you tried against 4.6 and/or master.
On Thursday, December 10, 2015, Sumit Nigam wrote:
Thank you James.
I am using Phoenix 4.5.1 with HBase-0.98.14.
I am also noticing that if WHERE clause returns a fewer number of records, then
ORDER BY with LIMIT works fine. Does this input help in any way?
I will file a CR.
Thanks again,
Sumit
From: James Taylor
To: user ; Sumit Nigam
A slight correction: a couple of additional steps are needed after
truncating the stats table:
1) DELETE FROM SYSTEM.STATS; -- ensure autocommit is on which is the
default in sqlline
2) bounce your cluster (as we cache stats on server-side)
3) restart client (as we cache on client side as well)
4) reissue your query
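The steps above, in sqlline form (the cluster bounce itself depends on your deployment, so it appears here only as a comment):

```sql
-- 1) clear cached stats; autocommit is on by default in sqlline
DELETE FROM SYSTEM.STATS;
-- sanity check: should return 0 rows afterwards
SELECT COUNT(*) FROM SYSTEM.STATS;
-- 2) bounce the cluster (stats are cached server-side)
-- 3) restart long-running clients (stats are cached client-side too)
-- 4) reissue the query
```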
Zack - have you tried tuning the stats-related parameters yet? You can
start by just by truncating the stats table:
DELETE FROM SYSTEM.STATS;
If that solves the problem, then here's what you can do to prevent stats
from being generated:
- set phoenix.stats.guidepost.per.region to 1 on all region servers
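If truncating the stats table helps, the setting above would go into hbase-site.xml on each region server, along the lines of this fragment (verify the property name and semantics against the Phoenix tuning docs for your version):

```xml
<!-- hbase-site.xml on each region server -->
<property>
  <name>phoenix.stats.guidepost.per.region</name>
  <value>1</value>
</property>
```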
Thanks Jain,
The 3800 seconds was just for the executeQuery().
I've seen it as high as 5900.
From: Samarth Jain [samarth.j...@gmail.com]
Sent: Thursday, December 10, 2015 2:31 PM
To: user@phoenix.apache.org
Subject: Re: Help tuning for bursts of high traffic
Thanks for the additional information, Zack. Looking at the numbers, it
looks like the bottleneck is probably not coming from the Phoenix thread
pool.
For request level metrics:
TASK_QUEUE_WAIT_TIME - represents the length of time (wall clock) that
Phoenix scans had to wait in the thread pool's queue.
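To illustrate what a queue-wait metric like TASK_QUEUE_WAIT_TIME measures, here is a plain-Python analogue using a generic thread pool (this is not Phoenix's internal executor, just the same idea of measuring submit-to-start latency):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(submitted_at, work_s):
    queued_for = time.monotonic() - submitted_at  # time spent waiting in the queue
    time.sleep(work_s)                            # the actual "scan" work
    return queued_for

# One worker: the second task must sit in the queue while the first runs,
# which is exactly what a large TASK_QUEUE_WAIT_TIME indicates in Phoenix.
with ThreadPoolExecutor(max_workers=1) as pool:
    f1 = pool.submit(task, time.monotonic(), 0.2)
    f2 = pool.submit(task, time.monotonic(), 0.0)
    wait1, wait2 = f1.result(), f2.result()

print(wait2 >= 0.15)  # → True: the second task waited roughly the first task's work time
```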
Hi Venu,
Do you mean that you'd connect to zookeeper and read from zNode from a
Phoenix UDF? That sounds dangerous as a UDF gets executed for every row
when used in a WHERE clause during scanning and filtering from a region
server.
Thanks,
James
On Thu, Dec 10, 2015 at 9:45 AM, Venu Madhav wrote:
Hey Nick,
I think this used to work, and will again once PHOENIX-2503 gets resolved.
With the Spark DataFrame support, all the necessary glue is there for
Phoenix and pyspark to play nice. With that client JAR (or by overriding
the com.fasterxml.jackson JARS), you can do something like:
df = sqlC
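Once the jackson conflict is resolved, the pyspark side might look like the sketch below. The table name and zkUrl are placeholders, and the function assumes pyspark plus the Phoenix client JAR are available at runtime; it only wires up the documented phoenix-spark read path:

```python
def load_phoenix_table(table, zk_url):
    """Sketch: load a Phoenix table as a Spark DataFrame via phoenix-spark.

    Assumes pyspark is installed and the Phoenix client JAR (or the
    com.fasterxml.jackson override mentioned above) is on the classpath.
    """
    from pyspark import SparkContext      # deferred: needs a Spark installation
    from pyspark.sql import SQLContext

    sc = SparkContext.getOrCreate()
    sqlContext = SQLContext(sc)
    return (sqlContext.read
                .format("org.apache.phoenix.spark")
                .option("table", table)    # e.g. "TABLE1" (placeholder)
                .option("zkUrl", zk_url)   # e.g. "zkhost:2181" (placeholder)
                .load())
```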
Hi Sumit,
I agree, these two queries should return the same result, as long as you
have the ORDER BY clause. What version of Phoenix are you using? What does
your DDL look like? Please file a JIRA that ideally includes a way of
reproducing the issue.
select current_timestamp from TBL order by curr
Hi guys,
I need to connect to zookeeper quorum and read data from zNode from a user
defined function.
I am able to connect by adding the zookeeper quorum and path of the zNode
in hbase-site.xml (client side). Is there a way to load the zookeeper
quorum properties from hbase-site.xml (hbase configura
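Since hbase-site.xml is plain Hadoop-style XML, the quorum can be read out of it directly with stdlib tooling, and doing so once at startup avoids the per-row cost James warns about above. A minimal sketch (file contents and property name are illustrative):

```python
import xml.etree.ElementTree as ET

def read_hbase_property(xml_text, name):
    """Return the <value> for the given <name> in a Hadoop-style config file."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Illustrative hbase-site.xml content; in practice, read the file from
# the client's configuration directory once, not per row.
sample = """
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1,zk2,zk3</value>
  </property>
</configuration>
"""
quorum = read_hbase_property(sample, "hbase.zookeeper.quorum")
print(quorum)  # → zk1,zk2,zk3
```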
Thanks Jonathan,
I'm making some headway on getting the client library working again. I
thought I saw a mention that you were using pyspark as well, via the
DataFrame support. Are you able to confirm this works as well?
Thanks!
Josh
On Wed, Dec 9, 2015 at 7:51 PM, Cox, Jonathan A wrote:
>
Thinking about it a bit more, this should be a bug in Phoenix. Even with the
LIMIT clause I have an ORDER BY timestamp DESC, which means that the column
values must have been sorted before the LIMIT clause is applied. The
LIMIT should then return the MAX value in such a case. Also, surprisingl
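Sumit's reasoning can be stated as an invariant: over any bag of values, ORDER BY ... DESC LIMIT 1 and MAX must agree. A plain-Python rendering of that invariant on illustrative data:

```python
import random

timestamps = [random.randint(0, 10**9) for _ in range(1000)]

# equivalent of: SELECT ts FROM t ORDER BY ts DESC LIMIT 1
order_by_desc_limit_1 = sorted(timestamps, reverse=True)[0]

# equivalent of: SELECT MAX(ts) FROM t
max_value = max(timestamps)

print(order_by_desc_limit_1 == max_value)  # → True, for any input
```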
Hi
We're in the process of upgrading from Phoenix 2.2.3 / HBase 0.96 to
Phoenix 4.4.0 / HBase 1.1.2 and wanted to know the simplest/easiest
way to copy data from old-to-new table.
The tables contain only a few hundred million rows so it's OK to
export locally and then upsert.
Cheers,
-Kristoffer
Hi,
The link for salted tables https://phoenix.apache.org/salted.html mentions
"Since salting table would not store the data sequentially, a strict sequential
scan would not return all the data in the natural sorted fashion. Clauses that
currently would force a sequential scan, for example, clau