Encode/decode are for converting an integer (the qualifier counter) to bytes and
vice versa, so check whether the existing APIs below work for you:
create table TABLE_NAME (PK integer primary key, COL1 varchar)

import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.util.PhoenixRuntime;

// getUrl() stands for your Phoenix JDBC URL, e.g. jdbc:phoenix:<zookeeper-quorum>
Connection conn = DriverManager.getConnection(getUrl());
PTable pTable = PhoenixRuntime.getTable(conn, "TABLE_NAME");

// get the HBase column qualifier bytes from a Phoenix column name
byte[] hbaseColumnQualifierBytes =
    pTable.getColumnForColumnName("COL1").getColumnQualifierBytes();

// get the Phoenix column name back from the HBase column qualifier bytes
// ("0" is the default column family)
String phoenixColumnName =
    pTable.getColumnForColumnQualifier("0".getBytes(),
        hbaseColumnQualifierBytes).getName();
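
If you then want to read that data back with the plain HBase client, the
qualifier bytes can be passed straight to the HBase APIs. A minimal sketch,
assuming the default column family "0", the TABLE_NAME table from above, and
that hbaseConn is an org.apache.hadoop.hbase.client.Connection to the same
cluster:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// restrict the scan to the encoded qualifier that maps to the Phoenix column COL1
Scan scan = new Scan();
scan.addColumn("0".getBytes(), hbaseColumnQualifierBytes);

try (Table table = hbaseConn.getTable(TableName.valueOf("TABLE_NAME"));
     ResultScanner scanner = table.getScanner(scan)) {
    for (Result result : scanner) {
        byte[] value = result.getValue("0".getBytes(), hbaseColumnQualifierBytes);
        // 'value' is still encoded with the column's Phoenix type (VARCHAR here),
        // so decode it accordingly before using it in your jobs
    }
}

The same qualifier bytes also work with Get.addColumn() in existing MapReduce jobs.
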
Regards,
Ankit Singhal
On Tue, Jan 8, 2019 at 10:03 AM Josh Elser <[email protected]> wrote:
> (from the peanut-gallery)
>
> That sounds to me like a useful utility to share with others if you're
> going to write it anyways, Anil :)
>
> On 1/8/19 12:54 AM, Thomas D'Silva wrote:
> > There isn't an existing utility that does that. You would have to look
> > up the COLUMN_QUALIFIER for the columns you are interested in from
> > SYSTEM.CATALOG and then use it to create a Scan, for example as in the
> > sketch below.
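> >
> > A rough, hypothetical sketch of that lookup (conn is a Phoenix JDBC
> > connection; the table and column names are just placeholders from this
> > thread):
> >
> > import java.sql.PreparedStatement;
> > import java.sql.ResultSet;
> >
> > // read the encoded HBase qualifier for one Phoenix column from SYSTEM.CATALOG
> > PreparedStatement stmt = conn.prepareStatement(
> >     "SELECT COLUMN_QUALIFIER FROM SYSTEM.CATALOG "
> >         + "WHERE TABLE_NAME = ? AND COLUMN_NAME = ?");
> > stmt.setString(1, "TST_TEMP");
> > stmt.setString(2, "PRI");
> > ResultSet rs = stmt.executeQuery();
> > if (rs.next()) {
> >     // pass this to Scan.addColumn(Bytes.toBytes("0"), qualifier)
> >     byte[] qualifier = rs.getBytes(1);
> > }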
> >
> > On Mon, Jan 7, 2019 at 9:22 PM Anil <[email protected]> wrote:
> >
> > Hi Team,
> >
> > Is there any utility to read HBase data using the HBase APIs when the
> > table was created with Phoenix column name encoding?
> >
> > The idea is to keep all the performance and disk usage improvements
> > achieved with the Phoenix column name encoding feature while still
> > using our existing HBase jobs for our data analysis.
> >
> > Thanks,
> > Anil
> >
> > On Tue, 11 Dec 2018 at 14:02, Anil <[email protected]> wrote:
> >
> > Thanks.
> >
> > On Tue, 11 Dec 2018 at 11:51, Jaanai Zhang <[email protected]> wrote:
> >
> > The difference is because encoded column names are used, which has been
> > supported since version 4.10 (also see PHOENIX-1598
> > <https://issues.apache.org/jira/browse/PHOENIX-1598>).
> > You can set the COLUMN_ENCODED_BYTES property to keep the
> > original column names in the CREATE TABLE SQL, for example:
> >
> > create table test(
> >     id varchar primary key,
> >     col varchar
> > ) COLUMN_ENCODED_BYTES = 0;
> >
> >
> >
> > ----------------------------------------
> > Jaanai Zhang
> > Best regards!
> >
> >
> >
> > On Tue, 11 Dec 2018 at 13:24, Anil <[email protected]> wrote:
> >
> > HI,
> >
> > We have upgraded Phoenix from 4.7 to Phoenix-4.11.0-cdh5.11.2.
> >
> > Problem - when a table is created in Phoenix, the underlying
> > HBase column names and the Phoenix column names are
> > different. Tables created in the 4.7 version look fine.
> >
> > CREATE TABLE TST_TEMP (TID VARCHAR PRIMARY KEY, PRI
> > VARCHAR, SFLG VARCHAR, PFLG VARCHAR, SOLTO VARCHAR, BILTO
> > VARCHAR) COMPRESSION = 'SNAPPY';
> >
> > 0: jdbc:phoenix:dq-13.labs.> select TID,PRI,SFLG from TST_TEMP limit 2;
> > +-------------+-----------+-------+
> > | TID         | PRI       | SFLG  |
> > +-------------+-----------+-------+
> > | 0060189122  | 0.00      |       |
> > | 0060298478  | 13390.26  |       |
> > +-------------+-----------+-------+
> >
> >
> > hbase(main):011:0> scan 'TST_TEMP', {LIMIT => 2}
> > ROW          COLUMN+CELL
> > 0060189122   column=0:\x00\x00\x00\x00, timestamp=1544296959236, value=x
> > 0060189122   column=0:\x80\x0B, timestamp=1544296959236, value=0.00
> > 0060298478   column=0:\x00\x00\x00\x00, timestamp=1544296959236, value=x
> > 0060298478   column=0:\x80\x0B, timestamp=1544296959236, value=13390.26
> >
> > The HBase column names are completely different from the
> > Phoenix column names. This change is observed only after the
> > upgrade; all existing tables created in earlier versions look
> > fine, and alter statements on those existing tables also look
> > fine.
> >
> > Is there any workaround to avoid this difference? We are not
> > able to run HBase MapReduce jobs on the HBase tables created
> > by Phoenix. Thanks.
> >
> > Thanks
> >
> >
> >
> >
> >
> >
>