Hi all,
As an FYI, I've got a pull request into Flyway (http://flywaydb.org/) for
Phoenix support:
https://github.com/flyway/flyway/pull/930
I don't know what everyone else is using for schema management, if anything
at all, but the preliminary support works well enough for Flyway's various
Hi James,
I had actually hoped to use the DatabaseMetaData API originally, but I was
getting some interesting behaviour from getTables() when 'schemaPattern' was
null. I'm not at my dev machine to check for sure, but I think I was getting
back a list of all tables, rather than just
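For what it's worth, per the JDBC javadoc a null schemaPattern means the schema
is not used to narrow the search at all, so getting every table back would be
consistent with the spec; an explicit pattern (or "" for the empty/default
schema) narrows the result. A minimal sketch of the two calls, with the
connection URL and schema name as placeholders:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListTables {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper quorum; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            DatabaseMetaData md = conn.getMetaData();
            // null schemaPattern: no schema filtering, so all tables come back.
            try (ResultSet rs = md.getTables(null, null, null, new String[] {"TABLE"})) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_SCHEM") + "." + rs.getString("TABLE_NAME"));
                }
            }
            // Explicit schema pattern: only tables in MYSCHEMA are returned.
            try (ResultSet rs = md.getTables(null, "MYSCHEMA", "%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}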
Hi all,
When running a count on a large table, we got the following exception:
org.apache.hadoop.hbase.ipc.RpcClient$CallTimeoutException: Call id=,
waitTime=69714 rpcTimetout=6
How can this be resolved? The table is 17.3 GB according to hdfs dfs -du,
with 90+ columns and only one
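One common workaround for a CallTimeoutException on long-running aggregate
queries is to raise the client-side timeouts; Phoenix generally accepts these
as connection properties. The property values and connection URL below are
only illustrative, and the server-side hbase.rpc.timeout may also need to be
raised in hbase-site.xml for very long scans:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class LongCount {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Illustrative values: 10 minutes each.
        props.setProperty("phoenix.query.timeoutMs", "600000");
        props.setProperty("hbase.rpc.timeout", "600000");
        props.setProperty("hbase.client.scanner.timeout.period", "600000");

        // Placeholder quorum and table name.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM MY_BIG_TABLE")) {
            if (rs.next()) {
                System.out.println("row count: " + rs.getLong(1));
            }
        }
    }
}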
Yes, single quotes around the default column family name work (see the sketch
after the CREATE TABLE below).
CREATE TABLE "TABLE" (
C1 INTEGER NOT NULL,
C2 INTEGER NOT NULL,
C3 BIGINT NOT NULL,
C4 BIGINT NOT NULL,
C5 CHAR(2) NOT NULL,
V BIGINT,
CONSTRAINT PK PRIMARY KEY (
C1,
C2,
C3,
C4,
C5))
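For illustration, a sketch of the single-quoted DEFAULT_COLUMN_FAMILY option;
the JDBC URL, table, column, and family names here are made up, and the option
goes after the closing parenthesis of the column list:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateWithDefaultFamily {
    public static void main(String[] args) throws Exception {
        // Placeholder quorum; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Single quotes around the family name, as discussed above.
            stmt.execute("CREATE TABLE IF NOT EXISTS EXAMPLE ("
                    + " ID BIGINT NOT NULL,"
                    + " V BIGINT,"
                    + " CONSTRAINT PK PRIMARY KEY (ID)"
                    + ") DEFAULT_COLUMN_FAMILY='F'");
        }
    }
}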
Wow, that's really awesome, Josh. Nice work. Can you let us know
if/when it makes it in?
One modification you may want to consider in a future revision to
protect yourself in case the SYSTEM.CATALOG schema changes down the
road: Use the DatabaseMetaData APIs[1] instead of querying the SYSTEM.CATALOG
table directly.
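For example, a table-existence check could go through getTables() rather than
a SELECT against SYSTEM.CATALOG; in the sketch below the connection URL,
schema, and table name are just placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class TableExists {
    // True if the given table is visible through the JDBC metadata APIs.
    static boolean tableExists(Connection conn, String schema, String table) throws Exception {
        try (ResultSet rs = conn.getMetaData()
                .getTables(null, schema, table, new String[] {"TABLE"})) {
            return rs.next();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder quorum, schema, and table name.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            System.out.println(tableExists(conn, "MYSCHEMA", "SCHEMA_VERSION"));
        }
    }
}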
Hello James,
Yes, as low as 1,500 rows/sec, using Phoenix JDBC with batch inserts of 1,000
records at a time, but there are at least 100 dynamic columns for each row.
I was expecting higher values, of course, but I will soon finish coding an MR
job to load the same data using Hadoop.
The code I
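In outline, a batched UPSERT loop of the kind described above looks roughly
like the sketch below; the JDBC URL, table, columns, batch size, and row count
are placeholders, and a real statement would also list the dynamic columns:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchLoad {
    public static void main(String[] args) throws Exception {
        // Placeholder quorum and table.
        String upsert = "UPSERT INTO EVENTS (ID, V) VALUES (?, ?)";
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(upsert)) {
            conn.setAutoCommit(false);
            int batchSize = 1000;
            for (long i = 0; i < 100_000; i++) {
                ps.setLong(1, i);
                ps.setLong(2, i * 2);
                ps.addBatch();
                if ((i + 1) % batchSize == 0) {
                    ps.executeBatch();
                    conn.commit();   // Phoenix buffers mutations until commit.
                }
            }
            ps.executeBatch();
            conn.commit();
        }
    }
}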
Hi all,
What is the best way to store a java.math.BigInteger value in Phoenix?
Phoenix's BIGINT type maps to java.lang.Long.
Is there any option other than VARCHAR(19)?
Thanks
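One option worth noting: assuming the values fit in 38 digits, Phoenix's
DECIMAL type maps to java.math.BigDecimal, so a BigInteger can be converted
at the boundaries. A minimal sketch, with the JDBC URL and table name made up:

import java.math.BigDecimal;
import java.math.BigInteger;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class BigIntegerDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder quorum and table name.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            try (Statement stmt = conn.createStatement()) {
                // DECIMAL(38,0): up to 38 digits, no fractional part.
                stmt.execute("CREATE TABLE IF NOT EXISTS BIG_VALUES ("
                        + " ID BIGINT NOT NULL PRIMARY KEY,"
                        + " VAL DECIMAL(38,0))");
            }
            BigInteger big = new BigInteger("123456789012345678901234567890");
            try (PreparedStatement ps =
                     conn.prepareStatement("UPSERT INTO BIG_VALUES (ID, VAL) VALUES (?, ?)")) {
                ps.setLong(1, 1L);
                ps.setBigDecimal(2, new BigDecimal(big));
                ps.executeUpdate();
            }
            conn.commit();
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT VAL FROM BIG_VALUES WHERE ID = 1")) {
                if (rs.next()) {
                    // Convert back to BigInteger on the way out.
                    System.out.println(rs.getBigDecimal(1).toBigIntegerExact());
                }
            }
        }
    }
}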