Is it possible to decrease the chattiness of the Phoenix JDBC driver
operating in HTTP mode?
We've tried using stmt.setFetchSize() but this appears to be ignored.
As it stands now we appear to be getting about 100 rows per POST, which
presents a number of throughput issues when the results can be 10
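(For reference, a minimal sketch of the fetch-size hint against the
Phoenix Query Server thin client; the URL, port, and table name are
placeholders. Per the JDBC spec setFetchSize() is only a hint, and older
Avatica/PQS releases ignore it, which would match the ~100-rows-per-POST
behaviour described above.)

import java.sql.*;

public class FetchSizeSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(5000); // hint: request larger frames per HTTP round trip
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM LARGE_TABLE")) {
                while (rs.next()) {
                    // consume rows; each new frame may still trigger a POST
                }
            }
        }
    }
}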
Wrapping a thread-blocking call in a Future makes it asynchronous, but does
not turn it into a non-blocking call.
https://www.google.ca/amp/blog.colinbreck.com/calling-blocking-code-there-is-no-free-lunch/amp/
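(A minimal sketch of that distinction, with placeholder JDBC details: the
Future below returns immediately, but the executor thread is still parked
inside the blocking JDBC call until the driver responds, so no thread is
actually freed.)

import java.sql.*;
import java.util.concurrent.*;

public class BlockingInFuture {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<Integer> rowCount = pool.submit(() -> {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM T")) {
                rs.next();
                return rs.getInt(1); // this pool thread blocks here the whole time
            }
        });
        System.out.println(rowCount.get()); // the caller blocks here as well
        pool.shutdown();
    }
}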
On Wed, Oct 4, 2017 at 11:36 AM Stan Campbell
wrote:
> Wrap the call in a Future. Yo
Please unsubscribe me from this list.
Thanks!
Unsubscribe
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
2016-07-04 9:56 GMT+08:00 kevin :
> hi,all:
> I created tables through Phoenix and I load data through Pig using
> org.apache.phoenix.pig.PhoenixHBaseStorage. If HBase is running on
> HDFS
hi, all
I have been testing HBase running on top of Alluxio. In my HBase there is
a table A, created through Phoenix, that has 2,880,404 rows. I can run
count "A" in the HBase shell, but not in Phoenix (select count(1) from A);
if I do, the HBase region server crashes.
So I want to know: how does count(1) work? What is t
I’ve had the same problem. Any help would be appreciated.
Regards,
Kevin
From: Benjamin Kim [mailto:bbuil...@gmail.com]
Sent: Monday, May 16, 2016 9:26 AM
To: user@phoenix.apache.org
Subject: CDH 5.7.0
Has anyone got Phoenix to work with CDH 5.7.0? I tried manually patching and
building the
After I increased the hbase.regionserver.handler.count value, the error
was gone.
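(For reference, that property lives in hbase-site.xml on each region
server; the value below is only an illustration to tune per workload. The
default in recent HBase releases is 30.)

<property>
  <name>hbase.regionserver.handler.count</name>
  <value>100</value>
</property>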
2016-05-12 15:57 GMT+08:00 kevin :
> hi,all:
> I try a concurrent query with 30 threads.and two thread fail with error:
>
>
> WARN :[2016-05-12
> 15:03:50]org.apache.phoenix.ex
hi, all:
I tried a concurrent query with 30 threads, and two threads failed with
this error:
WARN :[2016-05-12
15:03:50]org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:361)
: Hash plan [2] execution seems too slow. Earlier hash cache(s) might have
expired on servers.
WAR
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/tmp</value>
> </property>
> - Gabriel
>
>
> On Wed, May 11, 2016 at 11:37 AM, kevin wrote:
>
>> thanks,
>> the property in hbase-site.xml is:
>> <property>
>>   <name>hbase.tmp.dir</name>
>>   <value>/home/dcos/hbase/tmp</value>
>> </property>
>>
>> but the error is: f
take a look at that property.
>
> Thanks,
> Sandeep Nemuri
> ᐧ
>
> On Wed, May 11, 2016 at 1:49 PM, kevin wrote:
>
>> Thanks. I didn't find the fs.defaultFS property being overwritten, and
>> I have changed to using Pig to load table data into Phoenix.
>>
>>
somewhere where the fs.defaultFS property is being overwritten. For
> example, in hbase-site.xml?
>
> On Wed, May 11, 2016 at 3:59 AM, kevin wrote:
> > I have tried to modify core-site.xml:
> >
> > hadoop.tmp.dir
> > file:/home/dcos/hdfs/tmp ->>
> >
I have tried to modify core-site.xml, changing hadoop.tmp.dir:
file:/home/dcos/hdfs/tmp ->>
hdfs://master1:9000/tmp
The bulk-load MapReduce command then built the MR job successfully, but
this change is wrong for Hadoop: the result is that my Hadoop cluster
can't work.
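(For reference, a sketch of the usual core-site.xml arrangement, using the
host name from this thread: the HDFS address belongs only in fs.defaultFS,
while hadoop.tmp.dir stays a plain local path. Pointing hadoop.tmp.dir at
an hdfs:// URI breaks the local-disk uses of that directory, which is
consistent with the cluster failing.)

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master1:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/dcos/hdfs/tmp</value> <!-- local path, no URI scheme -->
</property>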
2016-05-11 9:49 GMT+08:00 kevin
expectation is that it is written to
> HDFS.
>
> - Gabriel
>
>
> On Tue, May 10, 2016 at 9:14 AM, kevin wrote:
> > core-site.xml :
> > <property>
> >   <name>fs.defaultFS</name>
> >   <value>hdfs://master1:9000</value>
> > </property>
> > <property>
> >   <name>hadoop.tmp.d
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
2016-05-10 14:32 GMT+08:00 kevin :
> thanks, what I use is from Apache, and Hadoop and HBase are in cluster
> mode with one master and three slaves
>
> 2016-05-10 14:17 GMT+08:00 Gabriel Reid :
>
>> Hi,
>>
>> It
be a general
> configuration issue.
>
> Are you running on a real distributed cluster, or a single-node setup?
> Is this a vendor-based distribution (e.g. HDP or CDH), or Apache
> releases of Hadoop and HBase?
>
> - Gabriel
>
> On Tue, May 10, 2016 at 5:34 AM, kevin wrot
hi, all:
I use Phoenix 4.6.0-HBase-0.98 and Hadoop 2.7.1. When I try Loading via
MapReduce, I get this error:
16/05/10 11:24:00 ERROR mapreduce.MultiHfileOutputFormat: the table
logical name is USER
16/05/10 11:24:00 INFO client.HConnectionManager$HConnectionImplementation:
Closing master protocol: Mast
Hi, all
I created a table under Phoenix and upserted some data. Then I turned to
the HBase client and scanned the new table.
I got data like:
column=0:NAME, timestamp=1458028540810, value=\xE5\xB0\x8F\xE6\x98\x8E
I don't know how to decode the value into a normal string. What's the
charset?
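(Phoenix stores VARCHAR columns as UTF-8 bytes, so the escaped bytes that
the HBase shell prints decode as plain UTF-8; the six bytes above decode
to the name 小明. A minimal sketch:)

import java.nio.charset.StandardCharsets;

public class DecodeValue {
    public static void main(String[] args) {
        byte[] raw = {(byte) 0xE5, (byte) 0xB0, (byte) 0x8F,
                      (byte) 0xE6, (byte) 0x98, (byte) 0x8E};
        // Phoenix encodes VARCHAR as UTF-8, so UTF-8 decoding recovers the text
        System.out.println(new String(raw, StandardCharsets.UTF_8)); // prints 小明
    }
}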
Fantastic! Thank you!
Kevin
From: James Heather [mailto:james.heat...@mendeley.com]
Sent: Tuesday, November 3, 2015 9:13 AM
To: user@phoenix.apache.org
Subject: Re: Announcing phoenix-for-cloudera 4.6.0
Bravo, Sir!
Thanks for doing that.
On 03/11/15 17:10, Andrew Purtell wrote:
Today I pushed
JM,
If possible, I’d also like to try your 4.5.2 Parcel. Any luck with Cloudera?
This would be very useful if made available. Thanks for your time!
Kevin
From: James Heather [mailto:james.heat...@mendeley.com]
Sent: Wednesday, September 30, 2015 5:22 AM
To: user@phoenix.apache.org
Subject: Re
Thank you for your input, I appreciate your help!
Kevin
From: Cody Marcel [mailto:cmar...@salesforce.com]
Sent: Tuesday, July 14, 2015 9:18 AM
To: user@phoenix.apache.org
Subject: Re: Query DDL from Phoenix
This should do it.
DatabaseMetaData dbmd = connection.getMetaData();
ResultSet
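(The snippet above is cut off in the archive. A minimal completed sketch
using the standard JDBC metadata API, assuming the goal is to list user
tables; JDBC metadata does not return literal CREATE TABLE text. Phoenix
also keeps its metadata in the SYSTEM.CATALOG table, which can be queried
with ordinary SELECT statements.)

import java.sql.*;

public class ListTables {
    public static void main(String[] args) throws SQLException {
        try (Connection connection =
                 DriverManager.getConnection("jdbc:phoenix:zk-host")) {
            DatabaseMetaData dbmd = connection.getMetaData();
            // Standard JDBC call: list every table visible to this connection
            try (ResultSet rs = dbmd.getTables(null, null, "%",
                                               new String[] {"TABLE"})) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_SCHEM") + "."
                                       + rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}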
Is there a way to query the table structure of user tables from Phoenix? For
example, to retrieve the CREATE TABLE syntax?
Thank you,
Kevin Verhoeven
files, along with
some additional metadata that allows Cloudera Manager to understand what it is
and how to use it.
Kevin
From: Serega Sheypak [mailto:serega.shey...@gmail.com]
Sent: Tuesday, June 23, 2015 1:27 PM
To: user@phoenix.apache.org
Subject: Re: CDH 5.4 and Phoenix
I read that labs
Anil,
Do you know the name of the configuration in Cloudera Manager where I can add a
folder in the classpath of HBase? I’d like to use this configuration because
every time I update CDH the jar is deleted.
Kevin
From: Brady, John [mailto:john.br...@intel.com]
Sent: Friday, February 27, 2015
I figured this out: there was an old HBASE_CONF_PATH that pointed to a
non-existent path, so the hbase-site.xml file was not being used. Setting
this value corrected the problem. Simply point to the folder where the
hbase-site.xml file is located: export HBASE_CONF_PATH=/phoenix/bin/
Kevin
Online I’ve seen suggestions to set the PHOENIX_LIB_DIR, but that has not
helped: export PHOENIX_LIB_DIR=/phoenix/bin/
Thanks for your help,
Kevin
From: su...@certusnet.com.cn [mailto:su...@certusnet.com.cn]
Sent: Wednesday, February 4, 2015 5:27 PM
To: user
Subject: Re: sqllite: Upsert query time
ows. There are six HBase
RegionServers in the HBase cluster. How can the timeout be increased?
Thank you,
Kevin Verhoeven
That will solve my problem, thanks so much! Is there an upgrade path, or do I
simply overwrite the 4.1 jars with the newest jars and restart HBase?
Thanks,
Kevin
> On Jan 28, 2015, at 11:46 PM, James Taylor wrote:
>
> Hi Kevin,
> This is a bug that has been fixed in 4.2.2.
SERVER FILTER BY (TO_LONG(UPC) = 123456 AND NAME = 'A') |
++
2 rows selected (0.056 seconds)
Thank you for your time,
Kevin Verhoeven
kevin.verhoe...@ds-iq.com<mailto:kevin.verhoe...@ds-iq.com>