I recently came across CockroachDB[0] which is a distributed SQL
database. Operationally, it is easy to run and adding a new node to the
cluster is really simple as well. I believe it is targeted towards OLTP
workloads.
Has anyone else had a look at CockroachDB? How does it compare with
Phoenix?
Hi all,
I am cross posting this from the Calcite mailing list, since the phoenix
query server uses Avatica from the Calcite project.
Go 1.8 was released recently and the database/sql package saw a lot of
new features. I just tagged the v1.3.0 release for the Go Avatica
driver[0] which ships
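A couple of the Go 1.8 database/sql additions can be shown without a live Phoenix server. The parameter name below is purely illustrative, and whether the Avatica driver accepts named parameters is an assumption to check against its docs:

```go
package main

import (
	"database/sql"
	"fmt"
)

// namedArg wraps sql.Named, one of the database/sql additions in Go 1.8.
// Drivers that support named parameters receive these instead of
// positional values.
func namedArg(name string, value interface{}) sql.NamedArg {
	return sql.Named(name, value)
}

func main() {
	// Go 1.8 also added context-aware calls on *sql.DB, e.g.
	//   rows, err := db.QueryContext(ctx, "SELECT ...")
	// and multiple result set support via rows.NextResultSet().
	arg := namedArg("min_version", 3)
	fmt.Println(arg.Name, arg.Value)
}
```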
P.S. I meant to say normalizing rather than de-normalizing.
On 21/10/2016 10:36 AM, F21 wrote:
Hey all,
Normally, rather than de-normalizing my data, I prefer to have the
data duplicated in 2 tables. With transactions, it is quite simple to
ensure atomic updates to those 2 tables (especially
Hey all,
Normally, rather than de-normalizing my data, I prefer to have the data
duplicated in 2 tables. With transactions, it is quite simple to ensure
atomic updates to those 2 tables (especially for read-heavy apps). This
also makes things easier to query and avoids the memory limits of has
I am using Phoenix 4.8.1 with HBase 1.2.3 and the Phoenix Query Server.
I want to use a sequence to generate a monotonically increasing id for
each row. Since the documentation states that 100 sequence numbers are
cached by default in the client (in my case, I assume the caching would
happen i
I just ran into the following scenario with Phoenix 4.8.1 and HBase 1.2.3.
1. Create a transactional table: CREATE TABLE schemas(version varchar
not null primary key) TRANSACTIONAL=true
2. Confirm it exists/is created: SELECT * FROM schemas
3. Begin transaction.
4. Insert into schemas: UPSERT
Hey,
You mentioned that you sent a PrepareAndExecuteRequest. However, to do
that, you would need to first:
1. Open a connection:
https://calcite.apache.org/docs/avatica_json_reference.html#openconnectionrequest
2. Create a statement:
https://calcite.apache.org/docs/avatica_json_reference.html#
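The two prerequisite requests above can be sketched as plain JSON bodies; the connection ID is an arbitrary example, and the field names follow the Avatica JSON reference:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// avaticaRequest builds the JSON body for a simple Avatica request that
// only needs a request type and a connection ID.
func avaticaRequest(request, connectionID string) ([]byte, error) {
	return json.Marshal(map[string]string{
		"request":      request,
		"connectionId": connectionID,
	})
}

func main() {
	// Step 1: open a connection, then step 2: create a statement on it.
	open, _ := avaticaRequest("openConnection", "conn-1")
	create, _ := avaticaRequest("createStatement", "conn-1")
	fmt.Println(string(open))
	fmt.Println(string(create))
}
```

The createStatement response carries the statementId needed by the later prepareAndExecute call.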
Glad you got it working! :)
Cheers,
Francis
On 8/09/2016 7:11 PM, zengbaitang wrote:
I found the reason: I had not set the env HADOOP_CONF_DIR.
Once I set the env, the problem was solved.
Thank you F21, thank you very much
*From:* "F21"
*Date:* 8 Sep 2016 (Thu) 2:01
*To:* "user"
*Subject:* Re: Can query server run with hadoop ha mode?
Your logs do not seem to show any errors.
You mentioned that
curl or wget to get
http://your-phoenix-query-server:8765 to see if there's a response?
Cheers,
Francis
On 8/09/2016 3:54 PM, zengbaitang wrote:
Hi F21, I am sure hbase-site.xml was configured properly.
Here is my *hbase-site.xml (hbase side)*:
  hbase.rootdir = hdfs://stage-cl
I have a test cluster running HDFS in HA mode with HBase + Phoenix on
docker running successfully.
Can you check if you have a properly configured hbase-site.xml that is
available to your phoenix query server? Make sure hbase.zookeeper.quorum
and zookeeper.znode.parent are present. If zookeeper
I am not sure what you mean here. The phoenix query server (which is
based on Avatica, a subproject of Apache Calcite) accepts both
Protobufs and JSON depending on the value of
"phoenix.queryserver.serialization".
The server implements readers that will convert the Protobu
JIRA?
Thanks,
James
On Wednesday, August 31, 2016, F21 <f21.gro...@gmail.com> wrote:
Hey Thomas,
Where are the Transaction Manager logs located? I have a
/tmp/tephra-/tephra-service--m9edd51-hmaster1.m9edd51.log, which
was where I got the logs from yesterday. The
Master
190 TransactionServiceMain
Cheers,
Francis
On 1/09/2016 4:01 AM, Thomas D'Silva wrote:
Can you check the Transaction Manager logs and see if there are any
errors? Also, can you do a jps and confirm the Transaction Manager
is running?
On Wed, Aug 31, 2016 at 2:12 AM, F21
, Co.
Chairman
Avapno Assets, LLC
Bethel Town P.O
Westmoreland
Jamaica
Email: cheyenne.osanu.for...@gmail.com
Mobile: 876-881-7889
skype: cheyenne.forbes1
On Wed, Aug 31, 2016 at 5:39 AM, F21 <f21.gro...@gmail.com> wrote:
Di
Did you build the image yourself? If so, you need to make
start-hbase-phoenix.sh executable before building it.
On 31/08/2016 8:02 PM, Cheyenne Forbes wrote:
" ': No such file or directory"
/../conf/:/opt/hbase/phoenix-c
542 root 0:00 /bin/bash
9035 root 0:00 sleep 1
9036 root 0:00 ps
bash-4.3# wget localhost:15165
Connecting to localhost:15165 (127.0.0.1:15165)
wget: error getting response: Connection reset by peer
On 31/08/2016 3:25 PM, F21 wrote:
This only seems
/08/2016 11:21 AM, F21 wrote:
I have HBase 1.2.2 and Phoenix 4.8.0 on my HBase master, which runs
on Alpine Linux with the OpenJDK JRE 8.
This is my hbase-site.xml:
  hbase.rootdir = hdfs://mycluster/hbase
  zookeeper.znode.parent = /hbase
  hbase.cluster.distributed
I have HBase 1.2.2 and Phoenix 4.8.0 on my HBase master, which runs on
Alpine Linux with the OpenJDK JRE 8.
This is my hbase-site.xml:
  hbase.rootdir = hdfs://mycluster/hbase
  zookeeper.znode.parent = /hbase
  hbase.cluster.distributed = true
  hbase.zook
On Tue, Aug 23, 2016 at 7:23 PM, F21 <f21.gro...@gmail.com> wrote:
Try running it with "bin
On Tue, Aug 23, 2016 at 6:56 PM, F21 <ma
On Tue, Aug 23, 2016
On Tue, Aug 23, 2016 at 5:49 PM, F21 <f21.gro...@gmail.com> wrote:
It's possible to run phoenix with
It's possible to run phoenix without hadoop using hbase in standalone
mode. However, you will not be able to do bulk load, etc. The safety of
your data is also not guaranteed without HDFS.
For reference, I have a docker image running hbase standalone with
phoenix for testing purposes:
https:
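For standalone mode, a minimal hbase-site.xml needs very little; the rootdir path below is illustrative, and file:// keeps everything on the local disk, which is why durability isn't guaranteed:

```xml
<configuration>
  <!-- local filesystem instead of HDFS: fine for testing only -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///data/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
</configuration>
```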
I haven't used this driver (I don't write any .NET code), but I used it as
a reference while building (https://github.com/Boostport/avatica), in
particular, setting up the HTTP requests correctly.
Francis
On 28/06/2016 8:19 AM, Josh Elser wrote:
Hi,
I was just made aware of a neat little .NET
Hi all,
I have just open sourced a golang driver for Phoenix and Avatica.
The code is licensed using the Apache 2 License and is available here:
https://github.com/Boostport/avatica
Contributions are very welcome :)
Cheers,
Francis
Awesome! Glad you got it working :)
On 27/04/2016 12:36 AM, Plamen Paskov wrote:
Hey Francis,
You da man :) It's working fine with phoenix 4.7 !
Thanks
On 21.04.2016 11:38, F21 wrote:
Hey Plamen,
Thanks for providing the information. You seem to be running a pretty
old version of Ph
Name": "java.lang.String"
},
{
"ordinal": 1,
"autoIncrement": false,
"caseSensitive": false,
"searchable": true,
"currency": false,
"nullable": 0,
"columnClassName":"java.lang.Long"
}
],
"sql":null,
"parameters":[
],
"cursorFactory":{
"style":"LIST",
"clazz":null,
": 0,
"tableName": "US_POPULATION",
"catalogName": "",
"type": {
"type": "scalar",
"id": 12,
"name": "VARCHAR",
"TABLE_CATALOG",
"schemaName": "",
"precision": 0,
"scale": 0,
"tableName": "SYSTEM.TABLE",
"catalogName": "",
"type": {
"type"
"request": "createStatement",
"connectionId": 5
}
- select all cities
{
"request": "prepareAndExecute",
"connectionId": 5,
"statementId": 13,
"sql": "SELECT * FROM us_population",
"maxRowCount"
lue
at [Source: java.io.StringReader@41709697; line: 5, column: 17]
Powered by Jetty://
On 13.04.2016 19:27, Josh Elser wrote:
For reference materials: definitely check out
https://calcite.apache.org/avatica/
While JSON is easy to get started with, there are zero guarantees on
compatib
to create a php
wrapper library. If there are some books or references where I can
read more about Apache Phoenix, that would be very helpful.
Thanks
On 13.04.2016 13:29, F21 wrote:
Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json
Your PrepareAndExecute request is missing a statementId:
https://calcite.apache.org/docs/avatica_json_reference.html#prepareandexecuterequest
Before calling PrepareAndExecute, you need to send a CreateStatement
request to the server so that it can give you a statementId. Then, use
that stateme
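A minimal sketch of threading the statementId from a createStatement response into the follow-up request; the IDs and SQL here are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// prepareAndExecute builds a prepareAndExecute request body, reusing the
// statementId returned by an earlier createStatement response.
func prepareAndExecute(connectionID string, statementID int, sql string) ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"request":      "prepareAndExecute",
		"connectionId": connectionID,
		"statementId":  statementID,
		"sql":          sql,
		"maxRowCount":  -1,
	})
}

func main() {
	// Suppose the createStatement response carried {"statementId": 13}.
	body, _ := prepareAndExecute("conn-1", 13, "SELECT * FROM us_population")
	fmt.Println(string(body))
}
```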
I am using HBase 1.1.3 with Phoenix 4.8.0-SNAPSHOT. To talk to phoenix,
I am using the phoenix query server with serialization set to JSON.
First, I create a non-transactional table:
CREATE TABLE my_table3 (k BIGINT PRIMARY KEY, v VARCHAR)
TRANSACTIONAL=false;
I then send the following reques
have a work around. Would you mind filing a Calcite bug for
the Avatica component after you finish your testing?
Thanks,
James
On Sat, Apr 2, 2016 at 4:10 AM, F21 <f21.gro...@gmail.com> wrote:
I was able to successfully commit a transaction if I set the
serializati
problems doing commits when using the thin client
and Phoenix 4.6.0.
Hope this helps,
Steve
On Thu, Mar 31, 2016 at 11:25 PM, F21 <f21.gro...@gmail.com> wrote:
As I mentioned about a week ago, I am working on a golang client
using protobuf serialization with the phoe
using the thin client
and Phoenix 4.6.0.
Hope this helps,
Steve
On Thu, Mar 31, 2016 at 11:25 PM, F21 <f21.gro...@gmail.com> wrote:
As I mentioned about a week ago, I am working on a golang client
using protobuf serialization with the phoenix query serve
As I mentioned about a week ago, I am working on a golang client using
protobuf serialization with the phoenix query server. I have
successfully dealt with the serialization of requests and responses.
However, I am trying to commit a transaction and it just doesn't seem
to commit.
Here's what I
been working in our environment. To
verify, can you please try this? Copy only the tephra and tephra-env.sh
files supplied with Phoenix into a new directory with the HBASE_HOME env
variable set, and then run tephra.
Thanks,
Mujtaba
On Wed, Mar 30, 2016 at 9:59 PM, F21 <f21.gro...@gmail.com>
you think this might be a bug?
On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:
I still see you have the following on classpath:
opt/hbase/phoenix-assembly/target/*
On Wed, Mar 30, 2016 at 5:42 PM, F21 <f21.gro...@gmail.com> wrote:
Thanks for the hints.
If I remove t
I think that might be from the tephra start up script.
The folder /opt/hbase/phoenix-assembly/ does not exist on my system.
On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:
I still see you have the following on classpath:
opt/hbase/phoenix-assembly/target/*
On Wed, Mar 30, 2016 at 5:42 PM, F21
enix-client.jar from classpath and see
if it complains about any missing library that you can add or
remove guava classes which are bundled in Phoenix-client.jar and then
start Tephra.
On Wed, Mar 30, 2016 at 5:07 PM, F21 <f21.gro...@gmail.com> wrote:
I removed the foll
icate HBase classes
in hbase/lib.
- Check for exception starting tephra in
/tmp/tephra-*/tephra-service-*.log (assuming this is the log location
configured in your tephra-env.sh)
- mujtaba
On Wed, Mar 30, 2016 at 2:54 AM, F21 <f21.gro...@gmail.com> wrote:
I have been
(assuming this is the log location
configured in your tephra-env.sh)
- mujtaba
On Wed, Mar 30, 2016 at 2:54 AM, F21 <f21.gro...@gmail.com> wrote:
I have been trying to get tephra working, but wasn't able to get
it to start successfully.
I have a HDFS and HBas
I have been trying to get tephra working, but wasn't able to get it
to start successfully.
I have an HDFS and HBase 1.1 cluster running in docker containers. I have
confirmed that Phoenix, HDFS and HBase are all working correctly.
Phoenix and the Phoenix query server are also installed correct
Send your unsubscribe request to user-unsubscr...@phoenix.apache.org to
unsubscribe. :)
On 29/03/2016 4:54 PM, Dor Ben Dov wrote:
This message and the information contained herein is proprietary and
confidential and subject to the Amdocs policy statement, you may
review at http://www.amdocs.
I am interested in building a Go client to query the phoenix query
server using protocol buffers.
The query server is running on http://localhost:8765, so I tried POSTing
to localhost:8765 with the marshalled protocol buffer as the body.
Unfortunately, the server responds with:
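A sketch of what such a POST looks like, using a stand-in test server in place of the real query server; the "application/x-google-protobuf" content type is my assumption for protobuf mode and should be checked against the Avatica docs:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// newAvaticaRequest builds a POST carrying a pre-serialized request body.
// The content type matters: a server configured for one serialization
// will not accept the other.
func newAvaticaRequest(url string, body []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/x-google-protobuf")
	return req, nil
}

func main() {
	// Stand-in for the query server at localhost:8765: it just echoes
	// back the content type it received.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, r.Header.Get("Content-Type"))
	}))
	defer srv.Close()

	req, _ := newAvaticaRequest(srv.URL, []byte("serialized-request-bytes"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```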
Hey Rafa,
So in terms of the hbase-site.xml, I just need the entries for the
zookeeper quorum and the zookeeper znode for the cluster,
right?
Cheers!
On 17/12/2015 9:48 PM, rafa wrote:
Hi F21,
You can install Query Server in any server that has network connection
with your
I have successfully deployed phoenix and the phoenix query server into a
toy HBase cluster.
I am currently running the HTTP query server on all regionservers.
However, I think it would be much better if I could run the HTTP query
servers on separate docker containers or machines. This way, I can