I just downloaded Tephra 0.7.0 from GitHub and extracted it into the
container.
Using the same setup as before, I ran:
export HBASE_CP=/opt/hbase/lib
export HBASE_HOME=/opt/hbase
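As a sketch, the classpath Mujtaba recommends later in the thread (hbase/lib/* plus the Phoenix server jar, with no client jar) could be set like this; the server jar name here is illustrative, not confirmed:

```shell
# Hypothetical setup: hbase/lib/* plus the Phoenix server jar should be
# enough; the Phoenix client jar is not needed (it bundles Guava v13).
export HBASE_HOME=/opt/hbase
# Substitute the actual server jar name from your Phoenix distribution.
export HBASE_CP="$HBASE_HOME/lib/*:$HBASE_HOME/lib/phoenix-server.jar"
echo "$HBASE_CP"
```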
Running standalone Tephra with ./tephra start worked correctly, and
it was able to become the leader.
Do
I think that might be from the Tephra startup script.
The folder /opt/hbase/phoenix-assembly/ does not exist on my system.
On 31/03/2016 11:53 AM, Mujtaba Chohan wrote:
I still see you have the following on classpath:
opt/hbase/phoenix-assembly/target/*
On Wed, Mar 30, 2016 at 5:42 PM, F21 wrote:
> Thanks for the hints.
>
> If I remove the client jar, it complains about a missing class:
> 2016-03-31 00:38:25,929 INFO [main]
You definitely need hbase.zookeeper.quorum set to be able to
connect. I think what's happening is that the Phoenix client jar on your
classpath (which is not needed, as hbase/lib/* + phoenix-server.jar on the
classpath should contain all the necessary libraries) has Guava v13
classes bundled in it.
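One way to check for the bundled Guava classes described above (a sketch, not from the thread; the library path is illustrative):

```shell
# Scan jars for bundled Guava classes (com/google/common/...); this is
# how a client jar's Guava v13 can shadow the version HBase expects.
HBASE_LIB="${HBASE_LIB:-/opt/hbase/lib}"
for jar in "$HBASE_LIB"/*.jar; do
  [ -e "$jar" ] || continue                 # directory empty or absent
  if unzip -l "$jar" 2>/dev/null | grep -q 'com/google/common/'; then
    echo "Guava classes bundled in: $jar"
  fi
done
```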
I removed the following from hbase-site.xml and tephra started correctly:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>f826338-zookeeper.f826338</value>
</property>
However, it now keeps trying to connect to zookeeper on localhost, which
wouldn't work, because my zookeeper is on another host:
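Since Mujtaba's reply elsewhere in the thread indicates the quorum property is in fact required (the earlier startup failure pointing instead at the client jar), one likely fix, an assumption rather than something confirmed in the thread, is to restore the property in the hbase-site.xml on Tephra's classpath:

```
<!-- hbase-site.xml visible on Tephra's classpath; the host name is the
     one quoted earlier in this message -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>f826338-zookeeper.f826338</value>
</property>
```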
2016-03-31 00:06:21,972
Jon,
I believe it's only metadata. The VARCHAR
implementation itself doesn't rely on the size.
Thanks,
Sergey
On Wed, Mar 30, 2016 at 4:51 PM, Cox, Jonathan A wrote:
> Sergey,
>
> Thanks for the tip. Is there any real performance reason (memory or speed) to
>
Hey Mujtaba,
Thanks for the hints. I noticed that I still needed the client jar for
the phoenix server in order to get it to run.
I also checked out the tephra log and this is what I have:
Wed Mar 30 23:50:38 UTC 2016 Starting tephra service on
f826338-hmaster1.f826338
time(seconds)
Sergey,
Thanks for the tip. Is there any real performance reason (memory or speed) to
use a pre-defined length for VARCHAR? Or is it really all the same under the
hood?
-Jonathan
-----Original Message-----
From: sergey.solda...@gmail.com [mailto:sergey.solda...@gmail.com] On Behalf Of
Sergey
Jon,
It seems that the documentation is a bit outdated. VARCHAR supports
exactly what you want:
create table x (id bigint primary key, x varchar);
upsert into x values (1, ". (a lot of text there) " );
0: jdbc:phoenix:localhost> select length(x) from x;
+------------+
| LENGTH(X)  |
+------------+
To add a little more detail on this issue, the real problem appears to be that
a CSV containing the "\" character is being interpreted as an escape sequence
by Phoenix (java.lang.String). I happened to have a row where a "\" appeared
directly before my delimiter. Therefore, my delimiter was
Actually, it seems that the line causing my problem really was missing a
column. I checked the behavior of StringToArrayConverter in
org.apache.phoenix.util.csv, and it does not exhibit such behavior.
So the fault is on my end.
Thanks
From: Cox, Jonathan A
Sent: Wednesday, March 30, 2016 3:36
Hi Jared,
Sounds like https://issues.apache.org/jira/browse/CALCITE-780
That version of Phoenix (probably) is using Calcite-1.2.0-incubating.
You could ask the vendor to update to a newer version, or use Phoenix
4.7.0 (directly from Apache) which is using Calcite-1.6.0.
Jared Katz wrote:
I am using the CsvBulkLoaderTool to ingest a tab-separated file that can
contain empty columns. The problem is that the loader incorrectly interprets an
empty last column as a non-existent column (instead of as a null entry).
For example, imagine I have a comma separated CSV with the following
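The trailing-field behavior can be reproduced outside Phoenix. Java's String.split, for instance, drops trailing empty strings by default, which matches the symptom described. A small sketch of the data shape:

```shell
# A tab-separated record ending in the delimiter has an empty last field.
# awk preserves it and counts 3 fields here; Java's String.split with the
# default limit drops trailing empty strings and would yield 2, matching
# the "non-existent column" symptom described above.
printf 'a\tb\t\n' | awk -F'\t' '{print NF}'   # prints: 3
```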
FYI for the group, I can confirm that this appears to work correctly and in an
automatic fashion.
Thanks, Sergey.
-Jon
-----Original Message-----
From: sergey.solda...@gmail.com [mailto:sergey.solda...@gmail.com] On Behalf Of
Sergey Soldatov
Sent: Wednesday, March 30, 2016 9:11 AM
To:
Is it possible to have the equivalent of the SQL data type "TEXT" with Phoenix?
The reason being, my data has columns with unspecified text length. If I go
with a varchar, loading the entire CSV file into the database may fail if one
entry is too long.
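For what it's worth, an unsized VARCHAR is the closest equivalent, as Sergey's example elsewhere in the thread shows; a minimal sketch (table and column names are hypothetical):

```sql
-- VARCHAR with no declared length behaves like TEXT: there is no
-- maximum to violate, so long CSV fields load without a length error.
CREATE TABLE IF NOT EXISTS docs (
    id   BIGINT NOT NULL PRIMARY KEY,
    body VARCHAR            -- unsized: effectively TEXT
);
```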
Maybe, however, there is really no
A few pointers:
- phoenix-core-*.jar is a subset of phoenix-*-server.jar so just
phoenix-*-server.jar in hbase/lib is enough for region servers and master.
- phoenix-server-*-runnable.jar and phoenix-*-server.jar should be enough
for query server. Client jar would only duplicate HBase classes in