Hi, Brady,
Did you build phoenix against your cloudera cluster? I ask because I cannot
build the phoenix 4.3 release against hbase version 0.98.6, due to some compile
issues with the local index splitter. So maybe you want to upgrade to hbase
version 0.98.9+.
Thanks,
Sun.
CertusNet
From: Brady,
Hi, all
With the latest 4.3 release, I got a strange error about incompatible jars for
client and server, as follows:
Exception in thread main java.sql.SQLException: ERROR 2006 (INT08):
Incompatible jars detected between client and server. Ensure that phoenix.jar
is put on the classpath of
Many thanks, James. Will scan that ASAP.
Regards,
Sun.
CertusNet
From: James Taylor
Date: 2015-02-26 09:42
To: su...@certusnet.com.cn; user
Subject: Re: Re: [ANNOUNCE] Apache Phoenix meetup in SF on Tue, Feb 24th
Hi Sun,
Take a look at the files tab of the Meetup page and you'll find our
Hi, Mike
You are connecting to a remote hbase cluster, aren't you? Can you ping the
master node from localhost?
It seems like some network connection exception, and I think you should check that.
Thanks,
Sun.
CertusNet
From: Mike Friedman
Date: 2015-02-06 03:53
To: user
Subject: testing problem
Hi,
Hi, Kevin,
I think that should work, since this parameter is a client-side config. After
you changed the default to 90, did you see the exception info change accordingly?
Thanks,
Sun.
CertusNet
From: Kevin Verhoeven
Date: 2015-02-05 09:10
To: user@phoenix.apache.org
Subject:
Hi,
Referencing this link for the Date type:
http://phoenix.apache.org/language/datatypes.html#date_type
Thanks,
Sun.
CertusNet
From: Siva
Date: 2015-02-05 07:49
To: user
Subject: Fwd: Bulk loading error
Hi Everyone,
Encountered the error below while bulk loading the data. Can you let me know
number of slots, so a LIMIT 10 query might run but others won't.
Hope this helps.
--Jan
On Wed, Jan 28, 2015 at 6:44 PM, su...@certusnet.com.cn
su...@certusnet.com.cn wrote:
Hi, all
I got strange exception hints when doing a query like SELECT COUNT(*) FROM
SOME_TABLE; through sqlline
://issues.apache.org/jira/browse/PHOENIX-1596
Thanks,
Samarth
On Mon, Jan 26, 2015 at 12:19 AM, su...@certusnet.com.cn
su...@certusnet.com.cn wrote:
Hi, all
Recently we hit something super weird with phoenix tracing. With phoenix 4.2.2
on hbase 0.98.6-cdh5.2.0, we set phoenix.trace.frequency
Hi, all
I got strange exception hints when doing a query like SELECT COUNT(*) FROM
SOME_TABLE; through sqlline, and the info is
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException:
Task org.apache.phoenix.job.JobManager$JobFutureTask@f55dfff rejected from
Hi, all
Recently we hit something super weird with phoenix tracing. With phoenix 4.2.2
on hbase 0.98.6-cdh5.2.0, we set phoenix.trace.frequency to always and executed
queries.
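For reference, that tracing setting lives in the client-side hbase-site.xml; a minimal fragment (where exactly the file sits is an assumption about your deployment):

```xml
<!-- Enable Phoenix tracing for every query (client-side hbase-site.xml) -->
<property>
  <name>phoenix.trace.frequency</name>
  <value>always</value>
</property>
```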
We are loading data into phoenix tables via the mapreduce framework and got a
jvm memory leak with a higher rate of full gc. The
Hi, all
When counting on a large table, we got the following exception:
org.apache.hadoop.hbase.ipc.RpcClient$CallTimeoutException: Call id=,
waitTime=69714 rpcTimeout=6
How would that be resolved? Table size comes to 17.3G per hdfs dfs -du.
The table has 90+ columns and only one
if I am wrong.
Regards,
Sun.
CertusNet
From: Nick Dimiduk
Date: 2015-01-14 02:50
To: user
Subject: Re: MapReduce bulk load into Phoenix table
On Tue, Jan 13, 2015 at 1:29 AM, su...@certusnet.com.cn
su...@certusnet.com.cn wrote:
As far as I know, bulk loading into phoenix or hbase may
Hi, Constantin
You can try using Apache Spark to complete the mapreduce bulkload job. As far
as I know, bulk loading into phoenix or hbase may be affected by several
conditions, like whether the wal is enabled or the number of split regions. And
your hbase or phoenix configuration parameters may also influence the
is split into multiple regions.
Thanks, Kunal
On Tue, Jan 13, 2015 at 6:40 AM, su...@certusnet.com.cn
su...@certusnet.com.cn wrote:
Hi, Kunal
If you want to know the disk usage of a table in Phoenix, you can definitely
check the size of the backing hbase table stored on HDFS. So you can issue the
command
are
you scanning? Are you using multiple column families? We should be able to help
tune things to improve #1.
Thanks,
James
On Monday, January 5, 2015, su...@certusnet.com.cn su...@certusnet.com.cn
wrote:
We had first done the test using #1 and the result did not satisfy our
expectations
Hi,
spark-phoenix integration would be great, as the Spark community is greatly
active now and more and more developers are using Apache Spark.
Thanks,
Sun.
From: James Taylor
Date: 2015-01-07 16:10
To: su...@certusnet.com.cn
Subject: Re: Fwd: Phoenix in production
This is great, Sun! Thank
Hi,
Glad to share our experience of using Phoenix in Production. I believe that
Siddharth had done
sufficient tests and practices about Phoenix performance. Here are some tips
about how we are using
Phoenix for our projects:
1. We use Phoenix to provide convenience for both RD and QA
7, 2015 at 12:17 AM, su...@certusnet.com.cn
su...@certusnet.com.cn wrote:
Hi,
spark-phoenix integration would be great, as the Spark community is greatly
active now and more and more developers are using Apache Spark.
Thanks,
Sun.
From: James Taylor
Date: 2015-01-07 16:10
To: su
Hi, all
Currently we are using Phoenix to store and query large datasets of KPIs for
our projects. Note that we definitely need to do full table scans of the
phoenix KPI tables for data statistics and summary collection, e.g. from the
five-minute data table to the hour-based summary data table, and to day
Hi, all
When trying to complete the mapreduce job over a phoenix table using Apache
Spark, we got the following error. We guess it is caused by an hbase client
scanner timeout exception? Do we need to configure something in hbase-site.xml?
Thanks for any available advice.
By the way,
Hi,
You may need to add hbase-protocol-*.jar to the HADOOP_CLASSPATH.
Just follow the commands below:
HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hbase/hbase-protocol.jar:/etc/hbase/conf
hadoop jar ...
Thanks,
Sun
CertusNet
From: 乔凯
Sent: 2014-12-19 15:40
Hi, all
Just want to confirm the appropriate phoenix versions for compatibility with
hbase 0.96.1.1-cdh5.0.1.
Is the latest 4.2.2 release working well with that hbase version?
Thanks,
Sun.
CertusNet
, 2014 at 10:31 PM, su...@certusnet.com.cn
su...@certusnet.com.cn wrote:
Hi, all
Just want to confirm the appropriate phoenix versions for compatibility with
hbase 0.96.1.1-cdh5.0.1.
Is the latest 4.2.2 release working well with that hbase version?
Thanks,
Sun
Option A is fine.
Thanks,
Sun.
CertusNet
From: Justin Workman
Date: 2014-12-11 12:00
To: d...@phoenix.apache.org
CC: user
Subject: Re: are you using Phoenix 3.x?
Likewise. +1 for option A.
Justin
Sent from my iPhone
On Dec 10, 2014, at 8:59 PM, Christopher Tarnas
Hi,
Perhaps there are some problems with your hbase cluster. Would you mind
checking that you can smoothly access the hbase meta table? You can check the
web ui status.
Thanks,
Sun.
CertusNet
From: 乔凯
Sent: 2014-12-08 12:56
To: user
Subject: Table undefined.
Hi, all:
When I install the
with derived tables
in from clause
Hi Sun,
Which version of Phoenix are you using? This feature has been supported since
3.1 and 4.1. And there is no such error message in the Phoenix code base now.
Thanks,
Maryann
On Fri, Dec 5, 2014 at 3:16 AM, su...@certusnet.com.cn su...@certusnet.com.cn
wrote:
Hi
...@certusnet.com.cn su...@certusnet.com.cn
wrote:
Hi, all
Notice that PHOENIX-136 has already added support for derived tables in the
from clause; however, aggregate queries would throw an error like the following:
Error: Complex nested queries not supported. (state=,code=0)
The example queries are like: SELECT
Hi,
The main cause is that you put an inappropriate props key/value in the parameter.
What properties would you expect to use at phoenix connection time?
Thanks,
Sun
CertusNet
From: chenwenhui
Sent: 2014-12-04 13:49
To: user
Subject: Phoenix4.2.1 against HBase0.98.6 encountered a
Hi, guys
Just curious about possible support for a UDF grammar like create function...
Thanks,
Sun
CertusNet
of HBase and Phoenix?
Thanks,
Rajeshbabu.
On Mon, Nov 24, 2014 at 12:37 PM, su...@certusnet.com.cn
su...@certusnet.com.cn wrote:
Hi, all
When creating a data table specifying DEFAULT_COLUMN_FAMILY='F', and then
trying to create a local index on the table specifying a column, we got the
following
Hi, all,
I just wonder if phoenix has such support, because we are considering
utilizing phoenix instead of mysql infobright in our projects.
Moreover, does phoenix support user-defined functions? If so, how do we use
them?
Best thanks,
Sun.
CertusNet
doesn't work?
And is my SQL incorrect?
Thank you for your kindness!
xuxc
-- Original Message --
From: su...@certusnet.com.cn; su...@certusnet.com.cn;
Sent: Friday, October 24, 2014, 10:47 AM
To: user user@phoenix.apache.org;
Subject: Re: Reply
query?
-- Original Message --
From: su...@certusnet.com.cn; su...@certusnet.com.cn;
Sent: Friday, October 24, 2014, 9:00 AM
To: user user@phoenix.apache.org;
Subject: Re: Re: how to use phoenix index to query data?
You may need to confirm what kind of index you have: a global index or a local
rows, it will take 10 sec, just as when using a Filter directly in the
hbase shell.
I'd rather know what I have done wrong.
-- Original Message --
From: su...@certusnet.com.cn; su...@certusnet.com.cn;
Sent: Friday, October 24, 2014, 10:09 AM
To: user user@phoenix.apache.org;
Subject: Re: Reply
Maybe running a program to modify your CSV file, replacing any SEMICOLON with a
COMMA, would be more convenient.
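A minimal sketch of that replacement using sed (the file paths and sample rows are made up for illustration):

```shell
# Sample semicolon-delimited input (placeholder data).
printf 'id;name;value\n1;foo;10\n2;bar;20\n' > /tmp/input.csv

# Replace every semicolon with a comma so the default CSV loader accepts it.
sed 's/;/,/g' /tmp/input.csv > /tmp/output.csv

cat /tmp/output.csv
```

Note that a blind substitution also rewrites semicolons inside quoted fields, so if your data can embed semicolons, a real CSV parser is the safer route.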
From: arthur.hk.c...@gmail.com
Date: 2014-10-09 11:26
To: user
CC: arthur.hk.c...@gmail.com
Subject: How to change default field delimiter from COMMA to SEMICOLON
Hi,
My CSV
Hi all,
I am trying to facilitate tracing according to the instructions here. Here are
my several operations:
1. copy the phoenix-hadoop2-compat/bin/ attributes files into my hbase
classpath ($HBASE_HOME/conf)
2. modify hbase-site.xml and add the following properties:
property
and delete it!
From: su...@certusnet.com.cn [su...@certusnet.com.cn]
Sent: Tuesday, September 02, 2014 8:27 AM
To: user
Subject: Re: Re: Unable to find cached index metadata
Hi,
Thanks for your reply. Sorry for not completely describing my job
information.
I had configured the properties
Hi all,
I got the v4.1.0-rc0 phoenix release from here:
https://github.com/apache/phoenix/releases
while trying to facilitate tracing (http://phoenix.apache.org/tracing.html).
However, I got really confused by the instructions. For example, the
configuration here
as the following
Try to use the appropriate phoenix-[version]-client.jar for your hbase cluster.
If sqlline works smoothly, try rebuilding phoenix and getting the client jar.
CertusNet
CertusNet Information Technology Co., Ltd.
Sun Fulin (孙福林)
Add: CertusNet Building, Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, Jiangsu
Mobile: 15850710386
Mail: su...@certusnet.com.cn
Website
)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Hi,
Really appreciate your hints, and it finally worked.
From: Gabriel Reid
Date: 2014-08-12 20:50
To: user
Subject: Re: Using Squirrel Sql