Hi Bryan,
Prior to the 4.2 release, if you wanted to delete rows from a table
declared as immutable, you needed to drop the table (in which case the
index would be dropped as well). With 4.2 and above, the index of an
immutable table will be kept in sync when rows are deleted from the
data table.
After using DELETE FROM TABLE_NAME; to purge data from a table, queries that
"FULL SCAN" TABLE_NAME still return matches against the deleted data, but
queries that scan over indexes do not return the values. Basically, after a
DELETE, the data is in the main table but not in the indexes.
Any ideas?
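An illustrative sketch of the behavior described above (the table name, column
names, and index are placeholders, not taken from this thread):

CREATE TABLE TABLE_NAME (ID BIGINT PRIMARY KEY, V VARCHAR) IMMUTABLE_ROWS=true;
CREATE INDEX IDX_V ON TABLE_NAME (V);

DELETE FROM TABLE_NAME;  -- purge all rows

-- Forcing a full scan of the data table may still show the deleted rows:
SELECT /*+ NO_INDEX */ COUNT(*) FROM TABLE_NAME;
-- Reading via the index does not:
SELECT COUNT(V) FROM TABLE_NAME;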
I think it's to support the case when all columns of the table are part
of the primary key (and are used to construct the rowkey).
On Tue, Mar 31, 2015 at 6:23 AM, Anirudha Khanna
wrote:
> Hi All,
>
> I am creating updatable views on a Phoenix table and inserting data into the
> table through the view
Hi All,
I am creating updatable views on a Phoenix table and inserting data into
the table through the views. When I inspect the table through the HBase
shell, I see an extra column there:
0:_0  timestamp=1427807823285, value=
Just curious, why is this column there?
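For background (my understanding, not stated in this thread): Phoenix writes an
empty marker cell, with qualifier "_0" in the default column family "0", on
every upsert, so that a row still has a KeyValue in HBase even when every
column belongs to the primary key. A sketch (the table name is invented):

-- Every column here is part of the row key, so without the "_0"
-- marker cell HBase would have no KeyValue to store for the row.
CREATE TABLE EVENTS (
    HOST VARCHAR NOT NULL,
    EVENT_TIME DATE NOT NULL
    CONSTRAINT PK PRIMARY KEY (HOST, EVENT_TIME)
);
UPSERT INTO EVENTS VALUES ('web01', CURRENT_DATE());
-- In the HBase shell this row shows a single cell: column=0:_0, value=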
My hbase-site.xml parameters look like this; is that OK?
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600</value>
</property>
Regards,
Ben Liang
> On 2015-03-31, at 17:11, 丁桂涛(桂花) wrote:
>
> Add the following parameter to
Add the following parameter to the hbase-site.xml in the *phoenix bin
directory*:
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600</value>
</property>
On Tue, Mar 31, 2015 at 5:06 PM, 梁鹏程 wrote:
> Could you give me clear steps to follow?
> I have already modified the server-side hbase-site.xml with hbase.rpc.timeout =
> 360, but it still
Could you give me clear steps to follow?
I have already modified the server-side hbase-site.xml with hbase.rpc.timeout = 360,
but it still prints the same exception message.
Regards,
Ben Liang
> On 2015-03-31, at 15:23, Puneet Kumar Ojha wrote:
>
> As per the error below, increase the timeout from 6 to 600,000 in the
Hi all,
I'd like to know the best way to read a salted key with Phoenix.
When I read it during a MapReduce job I see a byte in front of the key
(probably the salt prefix) that I don't know how to handle.
Thanks in advance,
Flavio
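A minimal sketch of how that leading byte can be handled, assuming the table
was created with SALT_BUCKETS (the bucket count and key below are
hypothetical, and the hash used here is NOT Phoenix's real hash function).
Phoenix computes the salt byte from a hash of the row key modulo the bucket
count, so a raw reader can recover the logical key by stripping the first byte:

```python
# Illustration of Phoenix-style row key salting (all names hypothetical).
SALT_BUCKETS = 8  # e.g. from CREATE TABLE ... SALT_BUCKETS = 8

def salt_byte(rowkey: bytes, buckets: int = SALT_BUCKETS) -> int:
    """Toy stand-in for Phoenix's hash-mod-buckets salt computation."""
    return sum(rowkey) % buckets

def strip_salt(salted_rowkey: bytes) -> bytes:
    """Drop the leading salt byte to recover the logical row key."""
    return salted_rowkey[1:]

logical_key = b"user123"
salted_key = bytes([salt_byte(logical_key)]) + logical_key
print(strip_salt(salted_key) == logical_key)  # True
```

Note that range scans over a salted table must fan out across all buckets,
since consecutive logical keys land in different salt prefixes.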
Which version of Phoenix are you trying with?
Is it built from the latest master branch on your own and tried with
hbase-1.0.0?
I think assignment of the SYSTEM.CATALOG table regions might have failed.
You can check the HBase master UI for any regions in transition, and it
would also be good to check the HBase logs.
Hi all,
I tried to install Phoenix on my local laptop, but failed to connect to HBase
via Phoenix; the connection just hung there. Below is the info for HBase and
Hadoop:
Hadoop version: 2.4.6
HBase version: 1.0.0
Both are installed in pseudo-distributed mode.
Below is the debug info.
As per the error below, increase the timeout from 6 to 600,000 in the config
properties.
From: outlook_c086a38934715...@outlook.com
[mailto:outlook_c086a38934715...@outlook.com] On Behalf Of Ben Liang
Sent: Tuesday, March 31, 2015 12:29 PM
To: user@phoenix.apache.org
Subject: phoenix timeout exception
Hi all,
I'm counting the number of rows in HBase tables (dw.DM_T, >= 29000 rows);
sometimes it succeeds and sometimes it fails, with the exception below.
Could you help me solve it?
Thanks.
0: jdbc:phoenix:mvxl0490> select count(sales_id) from dw.DM_T;