Another good contribution would be to add this question to our FAQ.
On Tue, Dec 15, 2015 at 2:20 PM, Samarth Jain wrote:
Kannan,
See my response here:
https://mail-archives.apache.org/mod_mbox/phoenix-user/201509.mbox/%3CCAMfSBK+WKzd5EscXLJcn9nVpDYd66dH=nL=devdc9n_skww...@mail.gmail.com%3E
There is a JIRA in place, https://issues.apache.org/jira/browse/PHOENIX-2388,
to help with pooling of Phoenix connections. Would be a …
Hi community,
Does Phoenix Spark support arbitrary SELECT statements for generating a
DataFrame or RDD?
From this reading, https://phoenix.apache.org/phoenix_spark.html, I am not
sure how to do that.
Thanks,
Li
Hi Li,
When using the DataFrame integration, it supports arbitrary SELECT
statements. Column pruning and predicate filtering are pushed down to
Phoenix, and aggregate functions are executed within Spark.
When using RDDs directly, you can specify a table name, columns and an
optional WHERE clause.
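A minimal sketch of the DataFrame path in Scala, following the phoenix_spark documentation; the table name TEST_TABLE, the column names, and the ZooKeeper URL are assumptions for illustration:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Sketch only: assumes a running Phoenix/HBase cluster reachable at
// localhost:2181 and an existing Phoenix table named TEST_TABLE.
val sc = new SparkContext("local", "phoenix-df-sketch")
val sqlContext = new SQLContext(sc)

val df = sqlContext.read
  .format("org.apache.phoenix.spark")
  .options(Map("table" -> "TEST_TABLE", "zkUrl" -> "localhost:2181"))
  .load()

// The column pruning and the filter below are pushed down to Phoenix;
// any aggregation you add afterwards runs inside Spark.
df.select("ID", "COL1")
  .filter(df("COL1") === "some_value")
  .show()
```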
Thanks Josh, I will take a look at the link.
- Li
On Tue, Dec 15, 2015 at 3:07 PM, Josh Mahonin wrote:
We have a Phoenix table; all of its data was created through Phoenix SQL.
Normally, we use Phoenix SQL directly. But sometimes, we also need to
export data incrementally by its creation time.
Currently, we use the HBase API directly, filtering by cell timestamps.
The new feature 'row_timestamp' seems to be more suitable for …
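A hedged sketch of what the ROW_TIMESTAMP approach could look like in sqlline; the table and column names here are made up for illustration, and ROW_TIMESTAMP can only be declared at table creation time, on a leading primary-key column of a date/time or long type:

```
0: jdbc:phoenix:localhost> CREATE TABLE event_log (
    created_time DATE NOT NULL,
    event_id VARCHAR NOT NULL,
    payload VARCHAR
    CONSTRAINT pk PRIMARY KEY (created_time ROW_TIMESTAMP, event_id));
0: jdbc:phoenix:localhost> SELECT * FROM event_log
    WHERE created_time > TO_DATE('2015-12-01 00:00:00');
```

With the column declared as ROW_TIMESTAMP, Phoenix maps its value onto the underlying HBase cell timestamp, so a time-range query like the one above can replace the manual timestamp-based scans through the HBase API.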
Hi,
I recently started testing HBase/Phoenix as a storage solution for our
data. The problem is that I am not able to execute some "simple" queries. I
am using Phoenix 4.5.2 and HBase 1.0.0-cdh5.4.0. After creating a table and
running some selects (full scans), I started using local indexes to …
Hi,
Should we pool Phoenix JDBC connections like any other JDBC connections
(probably using DBCP or similar)? If not:
1. What is the reason not to pool?
2. What should the access pattern be?
a. Create the connection once and use the cached connection till the …
Hi Nacho,
One solution is to include the columns that you would like to query over in
your index creation statement.
This works for me:
0: jdbc:phoenix:localhost> create local index idx2 on test_table (col2) include
(col1);
2 rows affected (0.642 seconds)
0: jdbc:phoenix:localhost> explain select * …
Hi Afshin,
I appreciate the answer, but I already took that into account in my first
message,
> I tried to include col1 in the index, and it works, but I expect to have
many columns, and the overhead is unaffordable.
and I would also like to know whether this limitation is by design or a
bug.