Decrease HTTP chattiness?

2018-05-21 Thread Kevin Minder
Is it possible to decrease the chattiness of the Phoenix JDBC driver operating in HTTP mode? We've tried using stmt.setFetchSize() but this appears to be ignored. As it stands now we appear to be getting about 100 rows per POST, which presents a number of throughput issues when the results can be 10

Re: Async get

2017-10-04 Thread Kevin Liew
Wrapping a thread-blocking call in a Future makes it asynchronous, but does not turn it into a non-blocking call. https://www.google.ca/amp/blog.colinbreck.com/calling-blocking-code-there-is-no-free-lunch/amp/ On Wed, Oct 4, 2017 at 11:36 AM Stan Campbell wrote: > Wrap the call in a Future. Yo
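The distinction above can be shown in a minimal sketch (class and method names are illustrative, with a `Thread.sleep` standing in for any thread-blocking call such as a synchronous JDBC query): the caller gets a `Future` back immediately, but a pool thread is still parked for the full duration. The work has moved to another thread, not disappeared.

```java
import java.util.concurrent.*;

public class BlockingInFuture {
    // Stand-in for a thread-blocking call (e.g. a synchronous JDBC query).
    static String blockingCall() throws InterruptedException {
        Thread.sleep(200);            // the executing thread is parked here
        return "result";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // Wrapping the call in a Future makes it asynchronous for the caller...
        Future<String> f = pool.submit(BlockingInFuture::blockingCall);
        // ...but the pool thread is blocked until blockingCall() returns,
        // so this is not a non-blocking call.
        System.out.println("caller is free while the pool thread waits");
        System.out.println(f.get()); // prints "result"
        pool.shutdown();
    }
}
```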

Unsubscribe

2016-08-17 Thread Kevin Verhoeven
Please unsubscribe me from this list. Thanks!

Unsubscribe

2016-08-16 Thread Kevin Verhoeven
Unsubscribe

Re: how does count(1) work?

2016-07-05 Thread kevin
at sqlline.SqlLine.start(SqlLine.java:398) at sqlline.SqlLine.main(SqlLine.java:292) 2016-07-04 9:56 GMT+08:00 kevin : > hi, all: > I created tables through Phoenix and I load data through Pig > using org.apache.phoenix.pig.PhoenixHBaseStorage. If HBase is running on > hdfs

how does count(1) work?

2016-07-01 Thread kevin
hi, all. I am testing HBase running on top of Alluxio. In my HBase there is a table A, created through Phoenix, with 2880404 rows. I can run count "A" in the HBase shell, but not in Phoenix (select count(1) from A); if I do, the HBase region server crashes. So I want to know how does count(1) work? What is t
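For context on what the query actually does: COUNT(1) in Phoenix is not a metadata lookup. It drives a full scan of the table with aggregation pushed down into the region servers, which is why a misbehaving storage layer underneath HBase can bring a region server down under the load. A hedged sketch of what EXPLAIN typically shows for such a query (plan text varies by Phoenix version; shown only as an illustration):

```sql
EXPLAIN SELECT COUNT(1) FROM A;
-- Illustrative plan shape:
-- CLIENT n-CHUNK PARALLEL n-WAY FULL SCAN OVER A
--     SERVER FILTER BY FIRST KEY ONLY
--     SERVER AGGREGATE INTO SINGLE ROW
```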

RE: CDH 5.7.0

2016-05-16 Thread Kevin Verhoeven
I’ve had the same problem. Any help would be appreciated. Regards, Kevin From: Benjamin Kim [mailto:bbuil...@gmail.com] Sent: Monday, May 16, 2016 9:26 AM To: user@phoenix.apache.org Subject: CDH 5.7.0 Has anyone got Phoenix to work with CDH 5.7.0? I tried manually patching and building the

Re: error on concurrent JDBC query:Could not find hash cache

2016-05-12 Thread kevin
After I increased the hbase.regionserver.handler.count value, the error was gone. 2016-05-12 15:57 GMT+08:00 kevin : > hi, all: > I tried a concurrent query with 30 threads, and two threads failed with this error: > > > WARN :[2016-05-12 > 15:03:50]org.apache.phoenix.ex
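For readers hitting the same "Could not find hash cache" error, the fix described above corresponds to an hbase-site.xml change along these lines. The value 100 is an illustrative assumption (the default is considerably lower); tune it for your cluster and restart the region servers:

```xml
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>100</value>
</property>
```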

error on concurrent JDBC query:Could not find hash cache

2016-05-12 Thread kevin
hi, all: I tried a concurrent query with 30 threads, and two threads failed with this error: WARN :[2016-05-12 15:03:50]org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:361) : Hash plan [2] execution seems too slow. Earlier hash cache(s) might have expired on servers. WAR

Re: error Loading via MapReduce

2016-05-12 Thread kevin
; > hadoop.tmp.dir > /tmp > > > - Gabriel > > > On Wed, May 11, 2016 at 11:37 AM, kevin wrote: > >> *thanks,* >> *the property in hbase-site.xml is:* >> ** >> *hbase.tmp.dir* >> */home/dcos/hbase/tmp* >> ** >> >> *but the error is : f

Re: error Loading via MapReduce

2016-05-11 Thread kevin
take a look at that property. > > Thanks, > Sandeep Nemuri > > On Wed, May 11, 2016 at 1:49 PM, kevin wrote: > >> Thanks, I didn't find the fs.defaultFS property being overwritten. And I have >> changed to using Pig to load table data into Phoenix. >> >>

Re: error Loading via MapReduce

2016-05-11 Thread kevin
omewhere where the fs.defaultFS property is being overwritten. For > example, in hbase-site.xml? > > On Wed, May 11, 2016 at 3:59 AM, kevin wrote: > > I have tried to modify core-site.xml: > > > > hadoop.tmp.dir > > file:/home/dcos/hdfs/tmp ->> > >

Re: error Loading via MapReduce

2016-05-10 Thread kevin
I have tried to modify core-site.xml: hadoop.tmp.dir file:/home/dcos/hdfs/tmp ->> hdfs://master1:9000/tmp. Then the Loading-via-MapReduce command successfully built the MR job, but this change is wrong for Hadoop; the result is that my Hadoop cluster can't work. 2016-05-11 9:49 GMT+08:00 kevin
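The exchange in this thread boils down to: hadoop.tmp.dir must stay a local filesystem path, and only fs.defaultFS should carry the hdfs:// URI; pointing hadoop.tmp.dir at HDFS breaks the cluster, as reported above. A sketch of a consistent core-site.xml, reusing the host and paths mentioned in the thread (substitute your own):

```xml
<!-- fs.defaultFS names the NameNode; hadoop.tmp.dir remains local. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master1:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/dcos/hdfs/tmp</value>
</property>
```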

Re: error Loading via MapReduce

2016-05-10 Thread kevin
ectation is that it is written to > HDFS. > > - Gabriel > > > On Tue, May 10, 2016 at 9:14 AM, kevin wrote: > > core-site.xml : > > > > > > fs.defaultFS > > hdfs://master1:9000 > > > > > > hadoop.tmp.d

Re: error Loading via MapReduce

2016-05-10 Thread kevin
dfs.webhdfs.enabled true dfs.permissions false 2016-05-10 14:32 GMT+08:00 kevin : > thanks, what I use is from Apache, and Hadoop and HBase are in cluster mode > with one master and three slaves > > 2016-05-10 14:17 GMT+08:00 Gabriel Reid : > >> Hi, >> >> It

Re: error Loading via MapReduce

2016-05-09 Thread kevin
be a general > configuration issue. > > Are you running on a real distributed cluster, or a single-node setup? > Is this a vendor-based distribution (i.e. HDP or CDH), or apache > releases of Hadoop and HBase? > > - Gabriel > > On Tue, May 10, 2016 at 5:34 AM, kevin wrot

error Loading via MapReduce

2016-05-09 Thread kevin
hi, all: I use phoenix 4.6.0-hbase0.98 and hadoop 2.7.1. When I try Loading via MapReduce, I get this error: 16/05/10 11:24:00 ERROR mapreduce.MultiHfileOutputFormat: the table logical name is USER 16/05/10 11:24:00 INFO client.HConnectionManager$HConnectionImplementation: Closing master protocol: Mast

how to decode phoenix data under hbase

2016-03-15 Thread kevin
Hi, all. I created a table under Phoenix and upserted some data. I turned to the HBase client and scanned the new table. I got data like: column=0:NAME, timestamp=1458028540810, value=\xE5\xB0\x8F\xE6\x98\x8E I don't know how to decode the value to a normal string. What's the character set?
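For the VARCHAR case shown above, the answer is that Phoenix stores VARCHAR values as plain UTF-8 bytes, so the escaped bytes printed by the HBase shell can be decoded directly (note this applies to VARCHAR only; numeric and date types use Phoenix's own binary encodings). A minimal sketch using the exact bytes from the scan:

```java
import java.nio.charset.StandardCharsets;

public class DecodeValue {
    // Phoenix VARCHAR columns are stored as UTF-8 bytes.
    static String decode(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The value from the scan: \xE5\xB0\x8F\xE6\x98\x8E
        byte[] raw = {(byte) 0xE5, (byte) 0xB0, (byte) 0x8F,
                      (byte) 0xE6, (byte) 0x98, (byte) 0x8E};
        System.out.println(decode(raw)); // the original VARCHAR string
    }
}
```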

RE: Announcing phoenix-for-cloudera 4.6.0

2015-11-03 Thread Kevin Verhoeven
Fantastic! Thank you! Kevin From: James Heather [mailto:james.heat...@mendeley.com] Sent: Tuesday, November 3, 2015 9:13 AM To: user@phoenix.apache.org Subject: Re: Announcing phoenix-for-cloudera 4.6.0 Bravo, Sir! Thanks for doing that. On 03/11/15 17:10, Andrew Purtell wrote: Today I pushed

RE: setting up community repo of Phoenix for CDH5?

2015-10-07 Thread Kevin Verhoeven
JM, If possible, I’d also like to try your 4.5.2 Parcel. Any luck with Cloudera? This would be very useful if made available. Thanks for your time! Kevin From: James Heather [mailto:james.heat...@mendeley.com] Sent: Wednesday, September 30, 2015 5:22 AM To: user@phoenix.apache.org Subject: Re

RE: Query DDL from Phoenix

2015-07-14 Thread Kevin Verhoeven
Thank you for your input, I appreciate your help! Kevin From: Cody Marcel [mailto:cmar...@salesforce.com] Sent: Tuesday, July 14, 2015 9:18 AM To: user@phoenix.apache.org Subject: Re: Query DDL from Phoenix This should do it. DatabaseMetaData dbmd = connection.getMetaData(); ResultSet

Query DDL from Phoenix

2015-07-13 Thread Kevin Verhoeven
Is there a way to query the table structure of user tables from Phoenix? For example, to retrieve the CREATE TABLE syntax? Thank you, Kevin Verhoeven

RE: CDH 5.4 and Phoenix

2015-06-23 Thread Kevin Verhoeven
files, along with some additional metadata that allows Cloudera Manager to understand what it is and how to use it. Kevin From: Serega Sheypak [mailto:serega.shey...@gmail.com] Sent: Tuesday, June 23, 2015 1:27 PM To: user@phoenix.apache.org Subject: Re: CDH 5.4 and Phoenix I read that labs

RE: installing Phoenix on a Cloudera 5.3.1 cluster

2015-02-27 Thread Kevin Verhoeven
Anil, Do you know the name of the configuration in Cloudera Manager where I can add a folder in the classpath of HBase? I’d like to use this configuration because every time I update CDH the jar is deleted. Kevin From: Brady, John [mailto:john.br...@intel.com] Sent: Friday, February 27, 2015

RE: sqllite: Upsert query times out after 10 minutes

2015-02-05 Thread Kevin Verhoeven
I figured this out: there was an old HBASE_CONF_PATH that pointed to a non-existent path, so the hbase-site.xml file was not being used. Setting this value corrected the problem. Simply point to the folder where the hbase-site.xml file is located: export HBASE_CONF_PATH=/phoenix/bin/ Kevin
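The fix above as a shell snippet, using the path from the message (substitute the directory that actually holds your hbase-site.xml):

```shell
# Point Phoenix's client scripts at the directory containing hbase-site.xml.
export HBASE_CONF_PATH=/phoenix/bin/
echo "$HBASE_CONF_PATH"
```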

RE: sqllite: Upsert query times out after 10 minutes

2015-02-05 Thread Kevin Verhoeven
nline I’ve seen suggestions to set the PHOENIX_LIB_DIR, but that has not helped: export PHOENIX_LIB_DIR=/phoenix/bin/ Thanks for your help, Kevin From: su...@certusnet.com.cn [mailto:su...@certusnet.com.cn] Sent: Wednesday, February 4, 2015 5:27 PM To: user Subject: Re: sqllite: Upsert query time

sqllite: Upsert query times out after 10 minutes

2015-02-04 Thread Kevin Verhoeven
ows. There are six HBase RegionServers in the HBase cluster. How can the timeout be increased? Thank you, Kevin Verhoeven

Re: Index not used after successful creation

2015-01-29 Thread Kevin Verhoeven
That will solve my problem, thanks so much! Is there an upgrade path, or do I simply overwrite the 4.1 jars with the newest jars and restart HBase? Thanks, Kevin > On Jan 28, 2015, at 11:46 PM, James Taylor wrote: > > Hi Kevin, > This is a bug that has been fixed in 4.2.2.

Index not used after successful creation

2015-01-28 Thread Kevin Verhoeven
SERVER FILTER BY (TO_LONG(UPC) = 123456 AND NAME = 'A') | ++ 2 rows selected (0.056 seconds) Thank you for your time, Kevin Verhoeven kevin.verhoe...@ds-iq.com<mailto:kevin.verhoe...@ds-iq.com>