Re: Storage Handler for Apache Hive

2018-03-27 Thread Jaanai
Hi, Stepan Migunov! Phoenix 4.12.0-HBase-1.1 depends on Hadoop 2.7.1; you should run it on that or a higher Hadoop version.

Re: Null array elements with joins

2018-06-19 Thread Jaanai Zhang
What is your Phoenix version? Yun Zhang Best regards! 2018-06-20 1:02 GMT+08:00 Tulasi Paradarami : > Hi, > > I'm running few tests against Phoenix array and running into this bug > where array elements return null values when a join is involved.

Re: AW: Duplicate Records Showing in Apache Phoenix

2018-06-20 Thread Jaanai Zhang
Some fields may have been incorrectly reflected after upgrading from 4.8 to 4.12, so it could not print all of the selected data. Yun Zhang Best regards! 2018-06-18 16:41 GMT+08:00 Azharuddin Shaikh : > Hi, > > We have upgraded the phoenix version to 4.12 from 4

Re: How to run Phoenix Secondary Index Coprocessor with Hbase?

2018-07-12 Thread Jaanai Zhang
Use only the Phoenix API (the JDBC API) to access HBase if you want to use secondary indexes. Yun Zhang Best regards! 2018-07-12 20:08 GMT+08:00 alchemist : > I tried using Phoenix JDBC API to access data in a remote EMR server from > another EC2 machine
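A minimal sketch of what "going through Phoenix" means here: secondary indexes stay consistent only when writes flow through Phoenix SQL over JDBC, because the index is maintained by Phoenix's coprocessors on each UPSERT, not by raw HBase puts. Table and column names below are illustrative, not from the thread:

```sql
-- Hypothetical table; all writes must go through Phoenix (JDBC),
-- never the raw HBase API, or the index falls out of sync.
CREATE TABLE orders (
    id       VARCHAR PRIMARY KEY,
    customer VARCHAR,
    total    DECIMAL
);

-- Global secondary index, maintained by Phoenix on every UPSERT.
CREATE INDEX idx_orders_customer ON orders (customer);

-- Phoenix can rewrite this query to read from the index table.
SELECT id FROM orders WHERE customer = 'acme';
```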

Re: How to support UPSERT with WHERE clause

2018-07-16 Thread Jaanai Zhang
Phoenix does not currently support a WHERE clause within UPSERT. I also think this feature is very important and would be used frequently. Maybe the dialect looks like this: UPSERT INTO schema.table_name SET col = 'x' WHERE id = 'x'. This semantic is easy to implement in Phoenix; we just need a single write RPC
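The dialect proposed above does not exist in Phoenix. The closest shipped feature is the atomic ON DUPLICATE KEY clause (Phoenix 4.9+), which conditionally updates an existing row in a single write RPC; the table and column names here are hypothetical:

```sql
-- Phoenix has no "UPSERT ... WHERE", but ON DUPLICATE KEY UPDATE
-- performs a conditional, atomic, single-RPC update when the row exists:
UPSERT INTO my_schema.my_table (id, col) VALUES ('x', 'y')
ON DUPLICATE KEY UPDATE col = 'y';

-- ON DUPLICATE KEY IGNORE writes only if the row does not exist yet:
UPSERT INTO my_schema.my_table (id, col) VALUES ('x', 'y')
ON DUPLICATE KEY IGNORE;
```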

Re: Row Scan In HBase Not Working When Table Created With Phoenix

2018-07-29 Thread Jaanai Zhang
You must use the schema to encode data if you want to use the HBase API, which means you need to use some Phoenix code. This approach is not recommended if you are not a developer; you can use SQL, which is more convenient. Yun Zhang Best regards! 2018-07-29 1:41 G

Re: Spark-Phoenix Plugin

2018-08-05 Thread Jaanai Zhang
You can get the data types from the Phoenix metadata, then encode/decode the data to write/read it. I think this approach is effective, FYI :) Yun Zhang Best regards! 2018-08-04 21:43 GMT+08:00 Brandon Geise : > Good morning, > > > > I’m looking at using a combinatio

Re: Spark-Phoenix Plugin

2018-08-06 Thread Jaanai Zhang
ing) .save() val end = System.currentTimeMillis() print("taken time:" + ((end - start) / 1000) + "s") } Yun Zhang Best regards! 2018-08-06 20:10 GMT+08:00 Brandon Geise : > Thanks for the reply Yun. > > > > I

Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Jaanai Zhang
Please ensure that your Phoenix server was deployed and has been restarted. Yun Zhang Best regards! 2018-08-07 9:10 GMT+08:00 倪项菲 : > > Hi Experts, > I am using HBase 1.2.6,the cluster is working good with HMaster HA,but > when we integrate phoenix with h

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Jaanai Zhang
;t mention the phoenix server > > > > > From: Jaanai Zhang > Date: 2018/08/07 (Tuesday) 09:16 > To: user ; > Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase > 1.2.6 > > Please ensure that your Phoenix server was deployed and has been restarted > > >

Re: Phoenix CsvBulkLoadTool fails with java.sql.SQLException: ERROR 103 (08004): Unable to establish connection

2018-08-20 Thread Jaanai Zhang
Caused by: java.lang.IllegalAccessError: class org.apache.hadoop.hdfs.web.HftpFileSystem cannot access its superinterface org.apache.hadoop.hdfs.web.TokenAspect$TokenManagementDelegator This is the root cause: it seems that HBase 1.2 can't access this interface of Hadoop 3.1, so you should consider d

Re: ABORTING region server and following HBase cluster "crash"

2018-09-10 Thread Jaanai Zhang
Ultimately, the root cause could not be determined from the log information. The index might have been corrupted, and it seems the server keeps aborting due to the index handler failure policy. Yun Zhang Best regards! Batyrshin Alexander <0x62...@gm

Re: Missing content in phoenix after writing from Spark

2018-09-12 Thread Jaanai Zhang
It seems the column data is missing the schema's mapping information. If you want to write the HBase table this way, you can create an HBase table and use Phoenix to map it. Jaanai Zhang Best regards! Thomas D'Silva wrote on Thu, Sep 13, 2018 at 6:03 AM:

Re: Salting based on partial rowkeys

2018-09-13 Thread Jaanai Zhang
age engine can't support hash partition. ---- Jaanai Zhang Best regards! Gerald Sangudi wrote on Thu, Sep 13, 2018 at 11:32 PM: > Hi folks, > > Any thoughts or feedback on this? > > Thanks, > Gerald > > On Mon, Sep 10, 2018 at 1:56 PM, Ge
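For context, Phoenix's built-in salting hashes the entire row key into a fixed number of buckets; salting on only a leading subset of the key columns (what this thread asks for) is not supported. A sketch with illustrative names:

```sql
-- SALT_BUCKETS prepends a one-byte hash of the *whole* row key.
-- There is no option to hash only tenant_id, which is what the
-- original question asked for.
CREATE TABLE events (
    tenant_id VARCHAR NOT NULL,
    event_ts  DATE    NOT NULL,
    payload   VARCHAR,
    CONSTRAINT pk PRIMARY KEY (tenant_id, event_ts)
) SALT_BUCKETS = 16;
```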

Re: Encountering IllegalStateException while querying Phoenix

2018-09-19 Thread Jaanai Zhang
Are you sure you restarted the RS process? You can check whether "phoenix-server.jar" exists in HBase's classpath with the "jinfo" command. ---- Jaanai Zhang Best regards! William Shen wrote on Thu, Sep 20, 2018 at 6:01 AM: > For anyone else

Re: MutationState size is bigger than maximum allowed number of bytes

2018-09-19 Thread Jaanai Zhang
Are you configuring these on the server side? Your “UPSERT SELECT” statement will be executed on the server side. Jaanai Zhang Best regards! Batyrshin Alexander <0x62...@gmail.com> wrote on Thu, Sep 20, 2018 at 7:48 AM: > I've tried to copy one

Re: Phoenix 5.0 could not commit transaction: org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.phoenix.hbase

2018-09-25 Thread Jaanai Zhang
Loader.defineClass(ClassLoader.java:763) > at > It looks like HBase's jars are incompatible. -------- Jaanai Zhang Best regards! Francis Chuang wrote on Tue, Sep 25, 2018 at 8:06 PM: > Hi All, > > I recently updated one of my Go apps to use Phoenix

Re: Phoenix 5.0 could not commit transaction: org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: org.apache.phoenix.hbase

2018-09-25 Thread Jaanai Zhang
> > > Is my method of installing HBase and Phoenix correct? > Did you check which versions of HBase exist in your classpath? Is this a compatibility issue with Guava? It isn't an exception caused by a Guava incompatibility. -------- Jaanai Zhan

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-29 Thread Jaanai Zhang
Did you restart the cluster? You should set 'hbase.hregion.max.filesize' to a safeguard value that is less than the RS's capabilities. ---- Jaanai Zhang Best regards! Batyrshin Alexander <0x62...@gmail.com> wrote on Sat, Sep 29, 2018 at 5:28 PM: &g
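As a sketch, the property mentioned above is a server-side HBase setting in hbase-site.xml; the 10 GB value below is only an example, not a recommendation from the thread:

```xml
<!-- hbase-site.xml (HBase server side); example value only -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- a region splits once it exceeds this size, in bytes -->
  <value>10737418240</value> <!-- 10 GB -->
</property>
```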

Re: Concurrent phoenix queries throw unable to create new native thread error

2018-10-10 Thread Jaanai Zhang
ops.setProperty("phoenix.query.querySize", "4") Did you try decreasing the values of the above configurations? Jaanai Zhang Best regards! Hemal Parekh wrote on Thu, Oct 11, 2018 at 1:18 AM: > limits.conf has following which I thou

Re: Encountering BufferUnderflowException when querying from Phoenix

2018-10-14 Thread Jaanai Zhang
It looks like a bug: the remaining part is greater than the length retrieved from the ByteBuffer. Maybe there is a problem with the ByteBuffer's position or the length of the target byte array. Jaanai Zhang Best regards! William Shen wrote on Fri, Oct 12, 2018 at 11:53 PM

Re: Issue in upgrading phoenix : java.lang.ArrayIndexOutOfBoundsException: SYSTEM:CATALOG 63

2018-10-17 Thread Jaanai Zhang
. Jaanai Zhang Best regards! Tanvi Bhandari wrote on Wed, Oct 17, 2018 at 3:48 PM: > @Shamvenk > > Yes I did check the STATS table from hbase shell, it's not empty. > > After dropping all SYSTEM tables and mapping hbase-tables to phoenix > tables by executing all DDLs

Re: Query logging - PHOENIX-2715

2018-11-20 Thread Jaanai Zhang
We can't capture detailed information about DDL/DML operations from the TRACE log. I suggest printing logs of these operations in your logic layer. ---- Jaanai Zhang Best regards! Curtis Howard wrote on Tue, Nov 20, 2018 at 11:20 AM: > Hi, &g

Re: Query logging - PHOENIX-2715

2018-11-20 Thread Jaanai Zhang
Yep! Those configuration options do not exist. Jaanai Zhang Best regards! Curtis Howard wrote on Tue, Nov 20, 2018 at 11:42 PM: > Hi Jaanai, > > Thanks for your suggestion. Just confirming then - it sounds like this > would involve adding cus

Re: HBase Compaction Fails for Phoenix Table

2018-11-21 Thread Jaanai Zhang
Can you please attach the CREATE TABLE/INDEX SQL? Which Phoenix version is this? Are you sure that many rows of data have been corrupted, or only this one row? ---- Jaanai Zhang Best regards! William Shen wrote on Wed, Nov 21, 2018 at 10:20 AM: > Hi there, &

Re: Hbase vs Phoenix column names

2018-12-10 Thread Jaanai Zhang
t( id varchar primary key, col varchar )COLUMN_ENCODED_BYTES =0 ; ---- Jaanai Zhang Best regards! Anil wrote on Tue, Dec 11, 2018 at 1:24 PM: > HI, > > We have upgraded phoenix to Phoenix-4.11.0-cdh5.11.2 from phoenix 4.7. > > Problem - When
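The truncated DDL above disables column mapping. A fuller sketch of the idea: with COLUMN_ENCODED_BYTES = 0, Phoenix stores the literal column names as HBase qualifiers, so they remain readable from the HBase shell (Phoenix 4.10+ defaults to encoded numeric qualifiers, which is why upgraded tables show unreadable column names):

```sql
-- With encoding disabled, an HBase scan shows the qualifier "col"
-- instead of an encoded byte qualifier.
CREATE TABLE t (
    id  VARCHAR PRIMARY KEY,
    col VARCHAR
) COLUMN_ENCODED_BYTES = 0;
```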

Re: "upsert select" with "limit" clause

2018-12-12 Thread Jaanai Zhang
Shawn, the UPSERT without LIMIT reads the source table and writes the target table on the server side. I think the higher memory usage is caused by the scan cache and memstore under the higher throughput. Jaanai Zhang Best

Re: "upsert select" with "limit" clause

2018-12-13 Thread Jaanai Zhang
eased. ---- Jaanai Zhang Best regards! Shawn Li wrote on Thu, Dec 13, 2018 at 12:10 PM: > Hi Jaanai, > > Thanks for putting your thought. The behavior you describe is correct on > the Hbase region server side. The memory usage for blockcache and memstore > will be high under such high

Re: "upsert select" with "limit" clause

2018-12-19 Thread Jaanai Zhang
hoenix's version? @Vincent FYI -------- Jaanai Zhang Best regards! Vincent Poon wrote on Thu, Dec 20, 2018 at 6:04 AM: > Shawn, > > Took a quick look, I think what is happening is the UPSERT is done > serially when you have LIMIT. > Parallel scans
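For reference, the statement under discussion looks like the following sketch (table names hypothetical); per Vincent's observation, adding LIMIT forces the work to run serially rather than as parallel server-side scans:

```sql
-- Without LIMIT: executed server-side, with parallel scans.
UPSERT INTO target_table SELECT * FROM source_table;

-- With LIMIT: Phoenix runs the upsert serially, which is slower
-- but bounds the number of rows written.
UPSERT INTO target_table SELECT * FROM source_table LIMIT 1000;
```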

Re: Phoenix perform full scan and ignore covered global index

2018-12-23 Thread Jaanai Zhang
Could you please show your CREATE TABLE/INDEX SQL? Jaanai Zhang Best regards! Batyrshin Alexander <0x62...@gmail.com> wrote on Sun, Dec 23, 2018 at 9:38 PM: > Examples: > > 1. Ignoring indexes if "*" used for select even index i
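A sketch of the behavior being reported, with hypothetical names: a covered global index is used only when every referenced column is present in the index, so `SELECT *` falls back to a full-table plan unless the query is hinted:

```sql
CREATE TABLE t (
    id VARCHAR PRIMARY KEY,
    a  VARCHAR,
    b  VARCHAR
);

-- Covered global index: b is stored in the index rows.
CREATE INDEX idx_a ON t (a) INCLUDE (b);

-- Uses idx_a: every referenced column is covered.
SELECT id, b FROM t WHERE a = 'x';

-- Full scan: "*" references columns the index does not cover.
SELECT * FROM t WHERE a = 'x';

-- An index hint can force the planner to use the index anyway.
SELECT /*+ INDEX(t idx_a) */ * FROM t WHERE a = 'x';
```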

Re: column mapping schema decoding

2018-12-26 Thread Jaanai Zhang
scenario, maybe using the original column names is better. Jaanai Zhang Best regards! Shawn Li wrote on Thu, Dec 27, 2018 at 7:17 AM: > Hi Pedro, > > Thanks for reply. Can you explain a little bit more? For example, if we > use COLUMN_ENCODED_BYTES =

Re: Phoenix JDBC Connection Warmup

2019-01-30 Thread Jaanai Zhang
to get region information for the table Jaanai Zhang Best regards! William Shen wrote on Thu, Jan 31, 2019 at 1:37 PM: > Hi there, > > I have a component that makes Phoenix queries via the Phoenix JDBC > Connection. I noticed that consistently,

Re: Phoenix JDBC Connection Warmup

2019-02-01 Thread Jaanai Zhang
workload. ---- Jaanai Zhang Best regards! William Shen wrote on Fri, Feb 1, 2019 at 2:09 AM: > Thanks Jaanai. Do you know if that is expected only on the first query > against a table? For us, we experimented with issuing the same query > repeatedly, and we ob

Re: Arithmetic Error in a select query

2019-02-13 Thread Jaanai Zhang
sable the debug level. 2. Primary key on that table had a not null constraint and not sure why the > error was stating null? > This is a server-side error; perhaps you can find some exceptions in the log files. -------- Jaanai Zhang Bes

Re: Arithmetic Error in a select query

2019-02-13 Thread Jaanai Zhang
sides). Jaanai Zhang Best regards! talluri abhishek wrote on Thu, Feb 14, 2019 at 11:23 AM: > Thanks, Jaanai. > > For the second question, did you mean the region server logs? > > Also, I see that Phoenix has tracing features that we can e

Re: Query logging - PHOENIX-2715

2019-04-23 Thread Jaanai Zhang
The log level has four values: OFF, INFO, DEBUG, and TRACE; the default is OFF. You can configure logging of each query on the client side by setting "phoenix.log.level". ---- Jaanai Zhang Best regards! M. Aaron Bossert wrote on Apr 20, 2019
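A sketch of the client-side configuration, assuming the property is set in the client's hbase-site.xml (with query logging enabled, Phoenix records queries in the SYSTEM.LOG table):

```xml
<!-- Client-side hbase-site.xml; enables per-query logging,
     written to the SYSTEM.LOG table. -->
<property>
  <name>phoenix.log.level</name>
  <value>DEBUG</value> <!-- OFF | INFO | DEBUG | TRACE -->
</property>
```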

Re: COALESCE Function Not Working With NULL Values

2019-05-14 Thread Jaanai Zhang
Hi, Jestan. Phoenix 5.0.0 is currently not compatible with HBase 2.0.5; see https://issues.apache.org/jira/browse/PHOENIX-5268 Jaanai Zhang Best regards! Jestan Nirojan wrote on Wed, May 15, 2019 at 5:04 AM: > Hi William, > > Thanks, It is working with

Re: is Apache phoenix reliable enough?

2019-06-24 Thread Jaanai Zhang
functions, but some analytics functions are inefficient in massive-data scenarios. We have almost 200 clusters running Phoenix and have solved many business requirements with it. Jaanai Zhang Best regards! Flavio Pompermaier wrote on Mon, Jun 24, 2019 at 5:30 PM: >

Re: Is 200k data in a column a big concern?

2019-06-24 Thread Jaanai Zhang
Could you please show your SQL? Which kind of requests do you mean? Jaanai Zhang Best regards! jesse wrote on Fri, Jun 21, 2019 at 9:37 AM: > It seems the write takes a long time and the system substantially slows > down with requests. > > ho