The table not being on HBase is due to the fact that I already had a local
index before.
I'm running it now using Async and the IndexTool.
Cheers
On Tue, Jan 19, 2016 at 12:51 AM, Pedro Gandola
wrote:
Hi,
I'm trying to create a new local index on *Phoenix 4.4* over an existing
table containing ~2 billion rows. During the process I get some warnings,
and then after ~1 hour I get an error from Phoenix.
My table has TTL defined and I'm assuming the warnings might be caused by
expired data.
*War
(in case you meant splits as salt size)
On Mon, Jan 18, 2016 at 2:19 PM, Matt Kowalczyk
wrote:
For splits, I've used,
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java#L267
To get a reference to a PTable,
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java#L367
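For context, the `PTable` line linked above is `getBucketNum()`, i.e. the salt bucket count. A minimal sketch of how a row key maps to a salt bucket — this mirrors my understanding of Phoenix's `SaltingUtil` (the 31-multiplier hash and the modulo are stated here as assumptions about the implementation, not verified against the source):

```java
public class SaltSketch {
    // Sketch: derive the leading salt byte for a row key, assuming a
    // 31-multiplier byte hash followed by modulo over the bucket count
    // (believed to mirror org.apache.phoenix.schema.SaltingUtil).
    static byte saltByte(byte[] rowKey, int bucketNum) {
        int hash = 1;
        for (byte b : rowKey) {
            hash = 31 * hash + b;
        }
        // Result is always in [0, bucketNum) for hash != Integer.MIN_VALUE.
        return (byte) Math.abs(hash % bucketNum);
    }

    public static void main(String[] args) {
        byte[] key = "user-42".getBytes();
        System.out.println("salt bucket: " + saltByte(key, 8));
    }
}
```

With salting, each bucket corresponds to an initial table split, which is why "splits as salt size" is a reasonable reading of the question.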
Number of regions, I presume? You can get this info using standard HBase
API.
-Vlad
On Mon, Jan 18, 2016 at 4:53 AM, Sumit Nigam wrote:
> Hi,
>
> Is there an easy way to know the number of splits a Phoenix table has?
> Preferably through JDBC metadata API?
>
> Thanks,
> Sumit
>
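To illustrate Vlad's suggestion, a sketch of counting a table's regions with the plain HBase client API (class and method names follow the HBase 1.x client API as I recall it; the table name "T1" is a placeholder — this needs a live cluster and the HBase client on the classpath):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionCount {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator =
                     conn.getRegionLocator(TableName.valueOf("T1"))) {
            // One location per region; start keys mark the split boundaries.
            int regions = locator.getAllRegionLocations().size();
            System.out.println("T1 has " + regions + " regions");
        }
    }
}
```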
I'd recommend trying with and without the index with a representative data
set in Pherf, our performance testing tool:
https://phoenix.apache.org/pherf.html
Do a lot of your queries filter only on b (i.e. without any filter on a or
a1)? If so, that'll end up being a full table scan. One alternativ
Hi Zack,
The limitation of 32 HFiles is due to the configuration property
MAX_FILES_PER_REGION_PER_FAMILY,
which defaults to 32 in LoadIncrementalHFiles.
You can try updating your configuration with a larger value and
see if it works.
https://github.com/apache/hbase/blob/master/hba
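If I remember correctly, the key behind MAX_FILES_PER_REGION_PER_FAMILY is `hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily` — a hedged hbase-site.xml fragment raising it (the value 128 is just an example; verify the key name against your HBase version):

```xml
<property>
  <name>hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily</name>
  <value>128</value>
</property>
```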
Thanks James.
Do we need an index on a pk column, since it's the last element in the row key, to
speed up the query? (Because of this, writes won't be impacted on that table.)
We do leverage the call queue.
Kumar Palaniappan
> On Jan 18, 2016, at 10:07 AM, James Taylor wrote:
See https://phoenix.apache.org/secondary_indexing.html
Hints are not required unless you want Phoenix to join between the index
and data table, because the index isn't fully covered and some of the
non-covered columns are referenced in the query.
bq. Doesn't a single global covered index sufficie
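As an illustration of when a hint is needed (table, index, and column names are hypothetical): if the index covers only some columns, a query referencing an uncovered column won't use the index unless hinted, at which point Phoenix joins back to the data table:

```sql
-- Hypothetical: my_idx is on (b) and does not include column c, so this
-- query is a full table scan by default. The hint forces the index and
-- makes Phoenix join back to the data table for c.
SELECT /*+ INDEX(my_table my_idx) */ c
FROM my_table
WHERE b = 5;
```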
In the past, my struggles with hbase/phoenix have been related to data ingest.
Each night, we ingest lots of data via CsvBulkUpload.
After lots of trial and error trying to get our largest table to cooperate, I
found a primary key that distributes well if I specify the split criteria on
table c
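For readers following along, split criteria can be specified at table-creation time in Phoenix; a hypothetical sketch (schema and split points are made up for illustration):

```sql
-- Hypothetical pre-split table: SPLIT ON sets the initial region
-- boundaries so a well-distributed key spreads bulk-load writes.
CREATE TABLE metrics (
    host VARCHAR NOT NULL,
    ts   BIGINT  NOT NULL
    CONSTRAINT pk PRIMARY KEY (host, ts)
) SPLIT ON ('g', 'n', 't');
```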
Hi Willem,
Use Phoenix bulk load. I guess your source is CSV, so the Phoenix CSV bulk
loader can be used.
How frequently do you want to load these files? If you can wait for a certain
interval, merge the files and a MapReduce job will bulk load them into the
Phoenix table.
Cheers
Pari
On 18-Jan-2016 4:17 pm, "Willem Co
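For reference, the Phoenix CSV MapReduce loader Pari mentions is typically invoked along these lines (jar name, table, and paths are placeholders — check the invocation against your Phoenix distribution's bulk-load docs):

```shell
# Hypothetical invocation of the Phoenix CSV bulk load tool.
HADOOP_CLASSPATH=$(hbase classpath) hadoop jar phoenix-<version>-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table MY_TABLE \
    --input /hdfs/path/to/data.csv
```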
We have a table
*create table t1 ( a bigint not null, a1 bigint not null, b bigint not
null, c varchar, d varchar constraint "t1_pk" primary key (a, a1, b))*
and created a global index as -
*create index id_t1 on t1 (b) include (a,a1,c,d)* - this one is to speed up
filtering on b since it is the last elemen
Hi,
Is there an easy way to know the number of splits a Phoenix table has?
Preferably through JDBC metadata API?
Thanks,
Sumit
Hi Pari,
My comments in blue.
Few notes from my experience :
1. Use bulk load rather than psql.py. Load larger (merged) files instead of
small files.
Are you referring to native HBase bulk load or Phoenix MapReduce bulk load?
Unfortunately we can’t change how the files are received from source. M
Thanks for the prompt reply.
From: Pedro Gandola [mailto:pedro.gand...@gmail.com]
Sent: 15 January 2016 02:19 PM
To: user@phoenix.apache.org
Subject: Re: Telco HBase POC
Hi Willem,
Just to give you my short experience as phoenix user.
I'm using Phoenix 4.4 on top of an HBase cluster where I keep 3