Re: UDF in SELECT

2015-05-02 Thread rajeshb...@apache.org
Hi Kathiresan,
Thanks for trying the UDFs.

bq. 1. Will any function (UDF or built-in) in a SELECT be run only on the
client side in Phoenix?
Yes. If a function is used in the SELECT clause, it will be evaluated on the
client side.

bq. 2. Is there an option to push a UDF in a SELECT query so that it runs on
the server side?
If you use the functions in the WHERE clause, they will be pushed down to the
server and run on the server side. Phoenix does that automatically for you.
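
For example (the table, column and function names below are only illustrative):

  SELECT MY_UDF(col1) FROM my_table;                 -- UDF evaluated on the client
  SELECT col1 FROM my_table WHERE MY_UDF(col1) = 10; -- UDF pushed down and run on the region servers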

Thanks,
Rajeshbabu.

On Sun, May 3, 2015 at 1:02 AM, Kathiresan S 
wrote:

> Hi,
>
> I've tried the new UDF feature of Phoenix.
>
> Below are the version details
>
> Phoenix : master (tag : v4.4.0-HBase-0.98-rc0)
> Hbase : hbase-0.98.12-hadoop2
>
> I created a UDF and tested it with a simple SELECT statement. It looks like
> it's executed on the client side.
>
> Also, in one of the JIRAs, I see the statement "A
> function will be evaluated on the client side if it's used in the SELECT
> clause"
>
> *Couple of questions:*
>
> 1. Will any function (UDF or built-in) in a SELECT be run only on the client
> side in Phoenix?
> 2. Is there an option to push a UDF in a SELECT query so that it runs on the
> server side?
>
> Thanks,
> Kathir
>


[ANNOUNCE] Apache Phoenix 4.4-0-HBase-0.98 released

2015-05-21 Thread rajeshb...@apache.org
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.4.0-HBase-0.98 release compatible with HBase 0.98.x.
Highlights include:

- Spark integration[1]
- Query Server[2]
- support for user-defined functions[3]
- Pherf - a load tester that measures throughput[4]
- support for UNION ALL and union queries in subqueries[5]
- 7.5x performance improvement for non-aggregate, unordered queries
- many math and date/time built-in functions
- MR job to populate indexes
- improvements in tracing and monitoring
- some more improvements in CBO
- over 70 bug fixes

The release is available through maven or may be downloaded here [6].

Thanks to all the contributors who made this release possible!

Regards,
The Apache Phoenix Team

[1] http://phoenix.apache.org/phoenix_spark.html
[2] http://phoenix.apache.org/server.html
[3] http://phoenix.apache.org/udf.html
[4] http://phoenix.apache.org/pherf.html
[5] http://phoenix.apache.org/language/index.html#select
[6] http://phoenix.apache.org/download.html


[ANNOUNCE] Apache Phoenix 4.4-0-HBase-1.0 released

2015-05-21 Thread rajeshb...@apache.org
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.4.0-HBase-1.0 release compatible with HBase
1.0.x(1.0.1+).

The 4.4.0-HBase-1.0 release has feature parity with our 4.4.0-HBase-0.98
release.

Extra features include:
- support for HBase HA query (timeline-consistent region replica reads)[1]
- ALTER SESSION query support

The release is available through maven or may be downloaded here[2].

Thanks to all the contributors who made this release possible!

Regards,
The Apache Phoenix Team

[1] https://issues.apache.org/jira/browse/PHOENIX-1683
[2] http://phoenix.apache.org/download.html


[ANNOUNCE] Apache Phoenix 4.4-0-HBase-1.1 released

2015-05-30 Thread rajeshb...@apache.org
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.4.0-HBase-1.1 release compatible with HBase 1.1.x.

The 4.4.0-HBase-1.1 release has feature parity with our 4.4.0-HBase-1.0
release.

The release is available through maven or may be downloaded here[1].

Thanks to all the contributors who made this release possible!

Regards,
The Apache Phoenix Team

[1] http://phoenix.apache.org/download.html


Re: ClassNotFoundException for UDF class

2015-07-27 Thread rajeshb...@apache.org
Hi Anchal Agrawal,

Have you placed the jar in HDFS? And is the path_to_jar in the CREATE FUNCTION
statement the URI of the jar in HDFS?
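
For example (the class name is taken from your earlier mail; the HDFS path is
only illustrative):

  CREATE FUNCTION GetValue(VARBINARY) RETURNS UNSIGNED_LONG AS
  'org.apache.phoenix.expression.function.GetValue'
  USING JAR 'hdfs://namenode:8020/hbase/lib/my-udf.jar';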

Thanks,
Rajeshbabu.


On Sat, Jul 25, 2015 at 5:58 AM, James Taylor 
wrote:

> Are you sure your hbase-site.xml is being picked up on the client-side?
> I've seen this happen numerous times. Maybe try setting something in there
> that would cause an obvious issue to confirm.
>
> I'm not aware of anything else you need to do, but I'm sure Rajeshbabu will
> chime in if there is.
>
> Thanks,
> James
>
> On Fri, Jul 24, 2015 at 5:25 PM, Anchal Agrawal 
> wrote:
>
>> Hi James,
>>
>> Thanks for your email! I have set the hbase-site.xml configs. I tried
>> removing the dependent jars from the UDF jar and instead included the
>> dependencies in the classpath, but that didn't help.
>>
>> Is there anything else that I could be missing, or could I try out some
>> other debug steps?
>>
>> Thank you,
>> Anchal
>>
>>
>>   On Friday, July 24, 2015 3:29 PM, James Taylor 
>> wrote:
>>
>>
>> I don't believe you'd want to bundle the dependent jars inside your jar
>> - I wasn't completely sure if that's what you've done. Also there's a
>> config you need to enable in your client-side hbase-site.xml to use this
>> feature.
>> Thanks,
>> James
>>
>> On Friday, July 24, 2015, Anchal Agrawal  wrote:
>>
>> Hi all,
>>
>> I'm having issues getting a UDF to work. I've followed the instructions
>>  and created a
>> jar, and I've created a function with the *CREATE FUNCTION *command.
>> However, when I use the function in a *SELECT* statement, I get a
>> *ClassNotFoundException* for the custom class I wrote. I'm using v4.4.0.
>>
>> Here's some debugging information:
>> 1. The UDF jar includes the dependency jars (phoenix-core, hbase,
>> hadoop-common, etc.), in addition to the UDF class itself. There are no
>> permission issues with the jar.
>> 2. I've tried putting the jar on the local FS, on my custom DFS, and also
>> in the HBase dynamic jar dir (as specified in hbase-site.xml).
>> 3. I've tried the *CREATE FUNCTION* command without giving the jar path
>> (the jar is present in the HBase dynamic jar dir).
>> 4. The Phoenix client doesn't report any syntax errors with my *CREATE
>> FUNCTION* command I'm using:
>> *create function GetValue(VARBINARY) returns UNSIGNED_LONG as
>> 'org.apache.phoenix.expression.function.GetValue' using jar 'path_to_jar';*
>>
>> 5. Here's part of the stack trace for the query *SELECT GetValue(pk)
>> FROM "table_name";* (full stack trace here 
>> )
>>
>> *Error: java.lang.reflect.InvocationTargetException (state=,code=0)
>> ...
>> Caused by: java.lang.ClassNotFoundException:
>> org.apache.phoenix.expression.function.GetValue
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
>>     at org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:147)
>>     at org.apache.phoenix.expression.function.UDFExpression.constructUDFFunction(UDFExpression.java:164)
>> ... 28 more*
>>
>> Am I missing something? I've studied the UDF documentation and searched
>> around for my issue but to no avail. The *GetValue* class is present in
>> the UDF jar, so I'm not sure what the root problem is. I would greatly
>> appreciate any help!
>>
>> Thanks,
>> Anchal
>>
>>
>>
>>
>


Re: Getting errors in Phoenix queries with JOIN clause[Phoenix version 3.2.2]

2015-07-27 Thread rajeshb...@apache.org
Hi Sumanta,

You are facing https://issues.apache.org/jira/browse/PHOENIX-2007, which got
fixed in the latest code.
You get the error when there is no data in the table, so you can load some
data and try again.
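
For example (table and column names are purely illustrative):

  UPSERT INTO TABLE_A (ID, NAME) VALUES (1, 'a');
  UPSERT INTO TABLE_B (ID, A_ID) VALUES (10, 1);
  SELECT * FROM TABLE_A a INNER JOIN TABLE_B b ON a.ID = b.A_ID;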


Thanks,
Rajeshbabu.

On Mon, Jul 27, 2015 at 5:50 PM, Sumanta Gh  wrote:

> Hello,
>
> I'm getting an error (Error: Encountered exception in sub plan [0]
> execution. (state=,code=0)) with Apache Phoenix version 3.2.2 while I'm
> trying to execute a query with INNER JOIN or LEFT OUTER JOIN or with a
> sub-query. I'm using the sqlline.py client from the hadoop1/bin directory
> found under the Phoenix 3.2.2 distribution. In my case only simple queries
> (without JOIN and sub-query) execute successfully.
>
> Please find my environment details as below:
>
> Operating System: Ubuntu 14.04
> HBase version: 0.94.18  (on Amazon EMR)
> Hadoop version: 1.0.4
> Apache Phoenix version: Phoenix 3.2.2
> Java: Oracle JDK 7
>
> I have also tried Phoenix 3.3.1 but I'm getting the same error. Could you
> please help me resolve this issue? If I'm missing anything related to
> version compatibility or something else, please let me know.
>
> Regards
> Sumanta
>
>
>


Re: ClassNotFoundException for UDF class

2015-07-28 Thread rajeshb...@apache.org
bq. Rajeshbabu, instead of HDFS, I have a custom NFS setup for which I have
specified the *fs.nfs.impl* property in hbase-site. I've put the UDF jar in
this NFS setup, and also in the directory specified in the
*hbase.dynamic.jars.dir* property

In that case it should work. You can check whether the jar is getting loaded
into the local directory specified by the property below.


<property>
  <name>hbase.local.dir</name>
  <value>${hbase.tmp.dir}/local/</value>
  <description>Directory on the local filesystem to be used
  as a local storage.</description>
</property>

If the jar is not loaded, you can try placing the jar in the above configured
folder. But this workaround will not help if you use the UDF in WHERE or GROUP
BY clauses, which get converted to filters, because on the server side we try
to load the same jar from the file system.

Thanks,
Rajeshbabu.

On Tue, Jul 28, 2015 at 1:13 AM, Anchal Agrawal 
wrote:

> Hi James and Rajeshbabu,
>
> Thank you for your replies. My hbase-site confs are being picked up, I
> have confirmed it by deliberately misconfiguring one of the properties.
>
> Rajeshbabu, instead of HDFS, I have a custom NFS setup for which I have
> specified the *fs.nfs.impl* property in hbase-site. I've put the UDF jar
> in this NFS setup, and also in the directory specified in the
> *hbase.dynamic.jars.dir* property. I've been using the same setup and
> confs with Pig and it works. I want to use Phoenix because it is
> significantly faster for my use cases.
>
> I'm still getting the same ClassNotFoundException for the UDF class. I've
> tried putting the UDF jar in multiple places, and as recommended by James,
> only the UDF class is in the jar. The UDF class dependencies are included
> in the classpath. Does Phoenix look for HDFS explicitly or does it just
> look at the *fs.some_fs.impl* property? The latter shouldn't be a problem
> for my setup. Does Phoenix write logs to HDFS? If it does, I can test
> whether it is able to find my NFS setup or not.
>
> Related question: Is putting the UDF jar in the local filesystem not
> supported by Phoenix?
>
> Sincerely,
> Anchal
>
>
>   On Monday, July 27, 2015 4:59 AM, "rajeshb...@apache.org" <
> chrajeshbab...@gmail.com> wrote:
>
>
> Hi Anchal Agrawal,
>
> Have you placed the jar in HDFS? And is the path_to_jar in the CREATE FUNCTION
> statement the URI of the jar in HDFS?
>
> Thanks,
> Rajeshbabu.
>
>
> On Sat, Jul 25, 2015 at 5:58 AM, James Taylor 
> wrote:
>
> Are you sure your hbase-site.xml is being picked up on the client-side?
> I've seen this happen numerous times. Maybe try setting something in there
> that would cause an obvious issue to confirm.
>
> I'm not aware of anything else you need to do, but I'm sure Rajeshbabu will
> chime in if there is.
>
> Thanks,
> James
>
> On Fri, Jul 24, 2015 at 5:25 PM, Anchal Agrawal 
> wrote:
>
> Hi James,
>
> Thanks for your email! I have set the hbase-site.xml configs. I tried
> removing the dependent jars from the UDF jar and instead included the
> dependencies in the classpath, but that didn't help.
>
> Is there anything else that I could be missing, or could I try out some
> other debug steps?
>
> Thank you,
> Anchal
>
>
>   On Friday, July 24, 2015 3:29 PM, James Taylor 
> wrote:
>
>
> I don't believe you'd want to bundle the dependent jars inside your jar -
> I wasn't completely sure if that's what you've done. Also there's a config
> you need to enable in your client-side hbase-site.xml to use this feature.
> Thanks,
> James
>
> On Friday, July 24, 2015, Anchal Agrawal  wrote:
>
> Hi all,
>
> I'm having issues getting a UDF to work. I've followed the instructions
> <http://phoenix.apache.org/udf.html#Creating_Custom_UDFs> and created a
> jar, and I've created a function with the *CREATE FUNCTION *command.
> However, when I use the function in a *SELECT* statement, I get a
> *ClassNotFoundException* for the custom class I wrote. I'm using v4.4.0.
>
> Here's some debugging information:
> 1. The UDF jar includes the dependency jars (phoenix-core, hbase,
> hadoop-common, etc.), in addition to the UDF class itself. There are no
> permission issues with the jar.
> 2. I've tried putting the jar on the local FS, on my custom DFS, and also
> in the HBase dynamic jar dir (as specified in hbase-site.xml).
> 3. I've tried the *CREATE FUNCTION* command without giving the jar path
> (the jar is present in the HBase dynamic jar dir).
> 4. The Phoenix client doesn't report any syntax errors with my *CREATE
> FUNCTION* command I'm using:
> *create function GetValue(VARBINARY) returns UNSIGNED_LONG as
> 'org.apache.phoenix.expression.function.GetValue' u

Re: Problem using more than one user defined functions in a query

2015-08-20 Thread rajeshb...@apache.org
Hi Anchal, Rajmund,

Sorry for the late reply.

I have found the root cause of the issue and uploaded a patch at
PHOENIX-2151.
I have verified that it's working fine. You can also try it and check whether
things are fine.

Thanks, Anchal, for following up.

Thanks,
Rajeshbabu.

On Wed, Aug 19, 2015 at 10:32 PM, Anchal Agrawal 
wrote:

> Hi Rajmund,
>
> This is being tracked at the JIRA ticket -
> https://issues.apache.org/jira/browse/PHOENIX-2151. I'm also facing the
> same issue. Please add your comments/test cases to the ticket as necessary.
> Rajeshbabu could not reproduce the error, so I have sent him an email with
> my UDFs and he is investigating.
>
> Thank you,
> Anchal
>
>
>
> On Wednesday, August 19, 2015 12:13 AM, Bocsi Rajmund 
> wrote:
>
>
> Hi,
>
> I'm using phoenix-4.4.0-HBase-1.0 with hbase-1.0.1.1 and I experienced a
> strange problem. I created a view over the existing HBase table, as
> suggested in the FAQ. In my database the row id (PK) contains the client
> id and the timestamp of the record, among other pieces of information.
> I created two user defined functions (as described here:
> https://phoenix.apache.org/udf.html): one that returns the client id and
> another that returns the timestamp. They work as expected:
>
>   select mysense_clientid(PK) from "baseapp:mysense-data" LIMIT 2;
>
> +--+
> |  MYSENSE_CLIENTID(PK)  |
> +--+
> | 0488fb527a654c4f85a9c43a082b3320|
> | 0488fb527a654c4f85a9c43a082b3320|
> +--+
>
> select mysense_timestamp(PK) from "baseapp:mysense-data" LIMIT 2;
>
> +-+
> |  MYSENSE_TIMESTAMP(PK)  |
> +-+
> | 2015-08-05 15:03:20.816 |
> | 2015-08-05 15:03:20.576 |
> +-+
>
> However when I try to use both functions in one query, the result is wrong:
>
> select mysense_clientid(PK), mysense_timestamp(PK) from
> "baseapp:mysense-data" LIMIT 2;
>
>
> +----------------------------------+----------------------------------+
> | MYSENSE_CLIENTID(PK)             | MYSENSE_CLIENTID(PK)             |
> +----------------------------------+----------------------------------+
> | 0488fb527a654c4f85a9c43a082b3320 | 0488fb527a654c4f85a9c43a082b3320 |
> | 0488fb527a654c4f85a9c43a082b3320 | 0488fb527a654c4f85a9c43a082b3320 |
> +----------------------------------+----------------------------------+
>
> It seems that Phoenix uses only the first one both times. I did not
> experience this kind of behaviour when I used the built-in functions.
>
> Is this a bug in the phoenix, or am I doing something wrong?
>
> Regards,
> Rajmund Bocsi
>
>
>


Re: index creation partly succeeds if it times out

2015-09-11 Thread rajeshb...@apache.org
Hi James,

You can drop the partially created index and try the following steps:

1) Add the following property to hbase-site.xml on the Phoenix client side:

<property>
  <name>phoenix.query.timeoutMs</name>
  <value><!-- double the default value --></value>
</property>

2) Export HBASE_CONF_PATH with the configuration directory where
hbase-site.xml is present.
3) Then start the sqlline.py command prompt.
4) Then run the CREATE INDEX query.


Thanks,
Rajeshbabu.

On Fri, Sep 11, 2015 at 3:26 PM, James Heather 
wrote:

> I just tried to create an index on a column for a table with 200M rows.
> Creating the index timed out:
>
> 0: jdbc:phoenix:172.31.31.143> CREATE INDEX idx_lastname ON loadtest.testing 
> (lastname);
>
> Error: Operation timed out (state=TIM01,code=6000)
>
> java.sql.SQLTimeoutException: Operation timed out
>
> at 
> org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:314)
>
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
>
>
> I bumped up the timeout and tried again, but it failed, and it tells me
> the index already exists:
>
> 0: jdbc:phoenix:172.31.31.143> CREATE INDEX idx_lastname ON loadtest.testing 
> (lastname);
>
> Error: ERROR 1013 (42M04): Table already exists. 
> tableName=LOADTEST.IDX_LASTNAME (state=42M04,code=1013)
>
> org.apache.phoenix.schema.TableAlreadyExistsException: ERROR 1013 (42M04): 
> Table already exists. tableName=LOADTEST.IDX_LASTNAME
>
> at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1692)
>
> at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1118)
>
> at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:280)
>
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:272)
>
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:271)
>
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1063)
>
> at sqlline.Commands.execute(Commands.java:822)
>
> at sqlline.Commands.sql(Commands.java:732)
>
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
>
> at sqlline.SqlLine.begin(SqlLine.java:681)
>
> at sqlline.SqlLine.start(SqlLine.java:398)
>
> at sqlline.SqlLine.main(SqlLine.java:292)
>
> 0: jdbc:phoenix:172.31.31.143> !indexes loadtest.testing
>
> +-----------+-------------+------------+------------+-----------------+--------------+------+---------------+
> | TABLE_CAT | TABLE_SCHEM | TABLE_NAME | NON_UNIQUE | INDEX_QUALIFIER | INDEX_NAME   | TYPE | ORDINAL_POSIT |
> +-----------+-------------+------------+------------+-----------------+--------------+------+---------------+
> |           | LOADTEST    | TESTING    | true       |                 | IDX_LASTNAME | 3    | 1             |
> |           | LOADTEST    | TESTING    | true       |                 | IDX_LASTNAME | 3    | 2             |
> +-----------+-------------+------------+------------+-----------------+--------------+------+---------------+
>
> 0: jdbc:phoenix:172.31.31.143>
>
>
> Is this a bug? I don't really see how the index can be in a usable state.
> If I 'explain' a query that ought to use the index, it tells me it's going
> to do a full scan anyway.
>
> James
>


Re: index creation partly succeeds if it times out

2015-09-11 Thread rajeshb...@apache.org
James,
It should be in the BUILDING state. Can you check what its state is?

Thanks,
Rajeshbabu.

On Fri, Sep 11, 2015 at 4:04 PM, James Heather 
wrote:

> Hi Rajeshbabu,
>
> Thanks--yes--I've done that. I'm now recreating the index with a long
> timeout.
>
> I reported it because it seemed to me to be a bug: Phoenix thinks that the
> index is there, but it's not. It ought to get cleaned up after a timeout.
>
> James
>
>
> On 11/09/15 11:32, rajeshb...@apache.org wrote:
>
> Hi James,
>
> You can drop the partially created index and try following steps
>
> 1) Add the following property to hbase-site.xml on the Phoenix client side:
>
> <property>
>   <name>phoenix.query.timeoutMs</name>
>   <value><!-- double the default value --></value>
> </property>
>
> 2) Export HBASE_CONF_PATH with the configuration directory where
> hbase-site.xml is present.
> 3) Then start the sqlline.py command prompt.
> 4) Then run the CREATE INDEX query.
>
>
> Thanks,
> Rajeshbabu.
>
> On Fri, Sep 11, 2015 at 3:26 PM, James Heather  > wrote:
>
>> I just tried to create an index on a column for a table with 200M rows.
>> Creating the index timed out:
>>
>> 0: jdbc:phoenix:172.31.31.143> CREATE INDEX idx_lastname ON loadtest.testing 
>> (lastname);
>>
>> Error: Operation timed out (state=TIM01,code=6000)
>>
>> java.sql.SQLTimeoutException: Operation timed out
>>
>> at 
>> org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:314)
>>
>> at 
>> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
>>
>>
>> I bumped up the timeout and tried again, but it failed, and it tells me
>> the index already exists:
>>
>> 0: jdbc:phoenix:172.31.31.143> CREATE INDEX idx_lastname ON loadtest.testing 
>> (lastname);
>>
>> Error: ERROR 1013 (42M04): Table already exists. 
>> tableName=LOADTEST.IDX_LASTNAME (state=42M04,code=1013)
>>
>> org.apache.phoenix.schema.TableAlreadyExistsException: ERROR 1013 (42M04): 
>> Table already exists. tableName=LOADTEST.IDX_LASTNAME
>>
>> at 
>> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1692)
>>
>> at 
>> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1118)
>>
>> at 
>> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>>
>> at 
>> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:280)
>>
>> at 
>> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:272)
>>
>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>>
>> at 
>> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:271)
>>
>> at 
>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1063)
>>
>> at sqlline.Commands.execute(Commands.java:822)
>>
>> at sqlline.Commands.sql(Commands.java:732)
>>
>> at sqlline.SqlLine.dispatch(SqlLine.java:808)
>>
>> at sqlline.SqlLine.begin(SqlLine.java:681)
>>
>> at sqlline.SqlLine.start(SqlLine.java:398)
>>
>> at sqlline.SqlLine.main(SqlLine.java:292)
>>
>> 0: jdbc:phoenix:172.31.31.143> !indexes loadtest.testing
>>
>> +--+--+--+--+-+--++---+
>>
>> |TABLE_CAT |   TABLE_SCHEM   
>>  |TABLE_NAME|
>> NON_UNIQUE| INDEX_QUALIFIER |INDEX_NAME  
>>   |TYPE| ORDINAL_POSIT |
>>
>> +--+--+--+--+-+--++---+
>>
>> |  | LOADTEST
>>  | TESTING  | true   
>>   | | IDX_LASTNAME   
>>   | 3  | 1 |
>>
>> |  | LOADTEST
>>  | TESTING 

Re: Issues while running psql.py localhost command

2015-09-21 Thread rajeshb...@apache.org
You can try adding the below property to hbase-site.xml and restarting HBase:

<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>

Thanks,
Rajeshbabu.

On Mon, Sep 21, 2015 at 12:51 PM, Ashutosh Sharma <
ashu.sharma.in...@gmail.com> wrote:

> I am getting into issues while running phoenix psql.py command against my
> local Hbase instance.
>
> Local HBase is running perfectly fine. Any help?
>
> root@ashu-HP-ENVY-15-Notebook-PC:/phoenix-4.5.2-HBase-1.1-bin/bin#
> ./psql.py localhost /phoenix-4.5.2-HBase-1.1-src/examples/STOCK_SYMBOL.sql
> 15/09/21 00:19:26 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> org.apache.phoenix.exception.PhoenixIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: Class
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want
> to bypass sanity checks
> at
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> at
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> at
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> at
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
>
> at
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:889)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1223)
> at
> org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:113)
> at
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1937)
> at
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:751)
> at
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:186)
> at
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:320)
> at
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:312)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:310)
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1422)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1927)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1896)
> at
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1896)
> at
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
> at
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:208)
> at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:192)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: Class
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set
> hbase.table.sanity.checks to false at conf or table descriptor if you want
> to bypass sanity checks
> at
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
> at
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
> at
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> at
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at
> sun.reflect.NativeConstructorAccessorImp

Re: Issues while running psql.py localhost command

2015-09-21 Thread rajeshb...@apache.org
@James

You can check this https://issues.apache.org/jira/browse/HBASE-10591 for
more information.
Some sanity checks of table attributes are failing.

@Ashutosh, you can raise an issue to validate the table attributes that are
not meeting the minimum criteria, or else you can specify them as table
properties while creating the table.


On Mon, Sep 21, 2015 at 2:15 PM, James Heather 
wrote:

> I don't know for certain what that parameter does but it sounds a bit
> scary to me...
>
>
> On 21/09/15 09:41, rajeshb...@apache.org wrote:
>
> You can try adding the below property to hbase-site.xml and restarting HBase:
>
> <property>
>   <name>hbase.table.sanity.checks</name>
>   <value>false</value>
> </property>
>
> Thanks,
> Rajeshbabu.
>
> On Mon, Sep 21, 2015 at 12:51 PM, Ashutosh Sharma <
> ashu.sharma.in...@gmail.com> wrote:
>
>> I am getting into issues while running phoenix psql.py command against my
>> local Hbase instance.
>>
>> Local HBase is running perfectly fine. Any help?
>>
>> root@ashu-HP-ENVY-15-Notebook-PC:/phoenix-4.5.2-HBase-1.1-bin/bin#
>> ./psql.py localhost /phoenix-4.5.2-HBase-1.1-src/examples/STOCK_SYMBOL.sql
>> 15/09/21 00:19:26 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes where
>> applicable
>> org.apache.phoenix.exception.PhoenixIOException:
>> org.apache.hadoop.hbase.DoNotRetryIOException: Class
>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set
>> hbase.table.sanity.checks to false at conf or table descriptor if you want
>> to bypass sanity checks
>> at
>> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1597)
>> at
>> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1529)
>> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1448)
>> at
>> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
>> at
>> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>> at
>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> at
>> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:889)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1223)
>> at
>> org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:113)
>> at
>> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1937)
>> at
>> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:751)
>> at
>> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:186)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:320)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:312)
>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:310)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1422)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1927)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1896)
>> at
>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1896)
>> at
>> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>> at
>> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>> at java.sql.DriverManager.getConnection(DriverManager.java:664)
>> at java.sql.DriverManager.getConnection(DriverManager.java:208)
>> at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:192)
>> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
>> org.apache

Re: [ANNOUNCE] New Apache Phoenix committer - Jan Fernando

2015-09-29 Thread rajeshb...@apache.org
Congratulations Jan!!!

Thanks,
Rajeshbabu.
On Sep 30, 2015 4:37 AM, "James Taylor"  wrote:

> Welcome, Jan. Great to have you onboard as a committer!
>
> James
>
> On Tuesday, September 29, 2015, Andrew Purtell 
> wrote:
>
>> Congratulations Jan, and welcome!
>>
>>
>> On Tue, Sep 29, 2015 at 11:23 AM, Eli Levine  wrote:
>>
>> > On behalf of the Apache Phoenix project I am happy to welcome Jan
>> Fernando
>> > as a committer. Jan has been an active user and contributor to Phoenix
>> in
>> > the last couple of years. Some of his major contributions are:
>> > 1) Worked deeply in the sequence code including implementing Bulk
>> Sequence
>> > Allocation: PHOENIX-1954 and debugging and fixing several tricky
>> Sequence
>> > Bugs:
>> > PHOENIX-2149, PHOENIX-1096.
>> > 2) Implemented DROP TABLE...CASCADE to support tenant-specific views
>> being
>> > dropped: PHOENIX-1098.
>> > 3) Worked closely with Cody and Mujtaba in the design of the interfaces
>> > for Pherf and contributed patches to increase support for
>> tenant-specific
>> > use cases: PHOENIX-1791, PHOENIX-2227 Pioneered creating Pherf
>> scenarios at
>> > Salesforce.
>> > 4) Worked closely with Samarth on requirements and API design and
>> > validation for Phoenix global- and query-level metrics:
>> > PHOENIX-1452, PHOENIX-1819 to get better visibility into Phoenix
>> internals.
>> >
>> > Look forward to continuing working with Jan on Apache Phoenix!
>> >
>> > Thanks,
>> >
>> > Eli Levine
>> > elilev...@apache.org
>> >
>>
>>
>>
>> --
>> Best regards,
>>
>>- Andy
>>
>> Problems worthy of attack prove their worth by hitting back. - Piet Hein
>> (via Tom White)
>>
>


Re: mapreduce.LoadIncrementalHFiles: Trying to load hfile... hang till we set permission on the tmp file

2015-10-28 Thread rajeshb...@apache.org
Hi,

You can set the following configuration in hbase-site.xml
and export HBASE_CONF_PATH or HBASE_CONF_DIR with the configuration
directory before running the job:



<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
</property>

Thanks,
Rajeshbabu.

On Wed, Oct 28, 2015 at 11:51 AM, Bulvik, Noam 
wrote:

> Thanks Matt ,
>
> Is this a known issue in the CSV Bulk Load Tool ? Do we need to open JIRA
> so it will be fixed ?
>
>
>
>
>
>
>
> *From:* Matt Kowalczyk [mailto:ma...@cloudability.com]
> *Sent:* Wednesday, October 28, 2015 1:01 AM
> *To:* user@phoenix.apache.org
> *Subject:* Re: mapreduce.LoadIncrementalHFiles: Trying to load hfile...
> hang till we set permission on the tmp file
>
>
>
> There might be a better way but my fix for this same problem was to modify
> the CsvBulkLoadTool.java to perform,
>
> FileSystem fs = FileSystem.get(conf);
> RemoteIterator<LocatedFileStatus> ri = fs.listFiles(outputPath, true);
> while (ri.hasNext()) {
>     LocatedFileStatus fileStatus = ri.next();
>     LOG.info("chmod a+rwx on {}", fileStatus.getPath().getParent().toString());
>     fs.setPermission(fileStatus.getPath().getParent(),
>         new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
>     LOG.info("chmod a+rwx on {}", fileStatus.getPath().toString());
>     fs.setPermission(fileStatus.getPath(),
>         new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
> }
>
> right before the call to loader.doBulkLoad(outputPath, htable).
>
> This unfortunately requires that you modify the source. I'd be interested
> in a solution that doesn't require patching phoenix.
>
> -Matt
>
>
>
> On Tue, Oct 27, 2015 at 1:06 PM, Bulvik, Noam 
> wrote:
>
> Hi,
>
> We are running the CSV bulk loader on Phoenix 4.5 with CDH 5.4 and it works
> fine but with one problem. The loading task hangs on
> "mapreduce.LoadIncrementalHFiles: Trying to load hfile..." until we give the
> directory holding the hfile (under /tmp on HDFS) write permissions.
>
>
>
> We set umask to be 000 but it does not work.
>
>
>
> Any idea how it should be fixed
>
>
>
> thanks
>
>
>
> *Noam*
>
>
>
>
>


Re: Custom Aggregator Functions

2015-12-03 Thread rajeshb...@apache.org
Hi Jaime,

Currently, user-defined aggregate functions are not supported.

Thanks,
Rajeshbabu.

On Wed, Dec 2, 2015 at 10:45 PM, Jaime Solano  wrote:

> Hi guys,
>
> We're planning to upgrade Phoenix 4.2.0 to Phoenix 4.4. As part of this
> process, we'll need to migrate our custom UDFs to the new feature
> provided by 4.4. However, the documentation only describes how to create a
> scalar function, not an aggregate one.
>
> Are User Defined Aggregator Functions supported?
>
> Thanks,
> -Jaime
>


Re: Two different UDFs called on same column return values from first UDF only

2015-12-24 Thread rajeshb...@apache.org
Hi Venu,

There is no workaround for the issue; the only option is to upgrade to
4.6.0, which has a couple of fixes for UDFs.

Thanks,
Rajeshbabu.

On Thu, Dec 24, 2015 at 12:49 AM, Venu Madhav  wrote:

> I have two user defined functions extending ScalarFunction that take the same
> column as a parameter and output different results.
> If I execute the functions separately in two different SELECT clauses, I get
> the expected output.
> If I use the two functions in the same SELECT clause, only the first function
> is called (twice), and I see the output of the first function twice.
>
> Thanks Rajeshbabu for finding the phoenix jira
> https://issues.apache.org/jira/browse/PHOENIX-2151
>
>
> I am using Phoenix version 4.4.0 and I see the issue got fixed in 4.6.0. Is
> there any workaround I can apply to get this fixed in Phoenix 4.4.0?
>
> Thanks
> venu
>
>
>


Re: Local index not used with non covered queries

2016-02-04 Thread rajeshb...@apache.org
Hi Jacobo,
The local index will be used if you have a WHERE condition on an indexed
column; otherwise we would need to scan both the index table and the data
table for each row. That's the reason why it's not using the local index.

There is no index merging currently in Phoenix. There is an improvement
task raised for it (PHOENIX-1801).

Thanks,
Rajeshbabu.

On Wed, Feb 3, 2016 at 5:31 PM, Jacobo Coll  wrote:

> Hi,
>
> I recently started testing HBase/Phoenix as a storage solution for our
> data. The problem is that I am not able to execute some "simple queries". I
> am using phoenix 4.5.2 and hbase 1.0.0-cdh5.4.0. After creating a table and
> making some selects (full scan), I started using local indexes to
> accelerate it. As I know, global indexes only work for covered queries, but:
>
> Unlike global indexes, local indexes *will* use an index even when all
>> columns referenced in the query are not contained in the index.
>>
>
> Knowing this, I prepared a simple test. A table with 3 columns: the
> primary key, a locally indexed column and a non indexed column.
> Querying by the indexed column works as expected, but if I try to use both
> columns, it does a full scan.
>
> I know that I can include both columns on the index, but this is not
> supposed to be required.
>
> In a previous mail I wrote a more detailed example of the test:
> http://mail-archives.apache.org/mod_mbox/phoenix-user/201512.mbox/%3CCAOJgazo-oCahZxsXrdOOsJxm-DVp7SzcNZhDDNJ2DWzkiNhfvA%40mail.gmail.com%3E
>
> Am I doing something wrong, or missing some step?
> Or it is impossible what I am trying to do?
> Is there some way to do this without include the other columns in the
> index?
>
> Thanks,
> Jacobo Coll
>


Re: Local index not used with non covered queries

2016-02-04 Thread rajeshb...@apache.org
bq. if the where condition refers to an indexed column and a non indexed
column, it should use the index?

In this case also the index will not be used, because we first need to know
the values of the non-indexed columns to apply the filter, and those have to
be fetched from the data table. So it's better to include both columns in the
local index.
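
For example, with the test table from your mail, a minimal sketch would be
(INCLUDE adds col1 as a covered column; drop the existing idx2 first or use a
new index name):

  CREATE LOCAL INDEX idx2 ON test_table (col2) INCLUDE (col1);
  SELECT * FROM test_table WHERE col2 = 'v1-2' AND col1 = 'v1-1';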


Thanks,
Rajeshbabu.

On Thu, Feb 4, 2016 at 5:44 PM, Jacobo Coll  wrote:

> Hi Rajeshbabu,
>
> Thanks for the quick answer!
> I will keep an eye on that issue. I was expecting this to be working at
> least for local indexes.
>
> So, if the where condition refers to an indexed column and a non indexed
> column, it should use the index?
> I have tried this, and its not working for me.
>
> --
> create table test_table (mykey varchar primary key, col1 varchar, col2
> varchar);
> create local index idx2 on test_table (col2);
> upsert into test_table (mykey, col1, col2) values('k1', 'v1-1', 'v1-2');
> upsert into test_table (mykey, col1, col2) values('k2', 'v2-1', 'v2-2');
>
> -- select using the indexed column
> select * from test_table where col2 = 'v1-2';
> explain select * from test_table where col2 = 'v1-2';
>
> -- select using the indexed column and the non indexed column
> select * from test_table where col2 = 'v1-2' and col1 = 'v1-1';
> explain select * from test_table where col2 = 'v1-2' and col1 = 'v1-1';
> ------
>
> The first select is using the index, but the second is not. I don't know if I 
> am missing something.
>
> Thanks,
> Jacobo Coll
>
>
> 2016-02-04 9:13 GMT+00:00 rajeshb...@apache.org 
> :
>
>> Hi Jacobo,
>> The local index will be used if you have any where condition on indexed
>> column otherwise we need to scan index table and data table for each row.
>> That's the reason why it's not using local indexes.
>>
>> There is no index merging currently in Phoenix. There is an improvement
>> task raised for it(PHOENIX-1801
>> <https://issues.apache.org/jira/browse/PHOENIX-1801>).
>>
>> Thanks,
>> Rajeshbabu.
>>
>> On Wed, Feb 3, 2016 at 5:31 PM, Jacobo Coll  wrote:
>>
>>> Hi,
>>>
>>> I recently started testing HBase/Phoenix as a storage solution for our
>>> data. The problem is that I am not able to execute some "simple queries". I
>>> am using phoenix 4.5.2 and hbase 1.0.0-cdh5.4.0. After creating a table and
>>> making some selects (full scan), I started using local indexes to
>>> accelerate it. As I know, global indexes only work for covered queries, but:
>>>
>>> Unlike global indexes, local indexes *will* use an index even when all
>>>> columns referenced in the query are not contained in the index.
>>>>
>>>
>>> Knowing this, I prepared a simple test. A table with 3 columns: the
>>> primary key, a locally indexed column and a non indexed column.
>>> Querying by the indexed column works as expected, but if I try to use
>>> both columns, it does a full scan.
>>>
>>> I know that I can include both columns on the index, but this is not
>>> supposed to be required.
>>>
>>> In a previous mail I wrote a more detailed example of the test:
>>> http://mail-archives.apache.org/mod_mbox/phoenix-user/201512.mbox/%3CCAOJgazo-oCahZxsXrdOOsJxm-DVp7SzcNZhDDNJ2DWzkiNhfvA%40mail.gmail.com%3E
>>>
>>> Am I doing something wrong, or missing some step?
>>> Or it is impossible what I am trying to do?
>>> Is there some way to do this without include the other columns in the
>>> index?
>>>
>>> Thanks,
>>> Jacobo Coll
>>>
>>
>>
>


Re: Delete records

2016-02-28 Thread rajeshb...@apache.org
Hi Nanda,

In the 4.4.0 version we create indexes as mutable indexes even when the table
has immutable rows, so the delete is not allowed.
PHOENIX-2616 fixes this by making indexes immutable when the data table has
immutable rows.
As a workaround you can create the indexes with the IMMUTABLE_ROWS=true
option as well; then deletes should work fine.
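
For example, a minimal sketch based on one of the indexes from your mail (the
same option would be added to each of them):

  CREATE INDEX LUSR_HOURLY_INDEX_DASHBOARD ON FOUR_G.S11_LUSR_HOURLY
      (report_time, start_time, end_time, sgw_ip, user_ip, e_node_b_id, apn,
       rat, device_type, device_model, device_vendor)
      IMMUTABLE_ROWS = true;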

Thanks,
Rajeshbabu.


On Fri, Feb 26, 2016 at 12:15 PM, Nanda  wrote:

> Hi James,
>
> Thanks for the reply.
>
> I was not able to find the exact corner case of the issue when I tried with
> a sample table.
>
> Can you please let me know if there is something i am doing wrong?
>
>
>
> But in our project we are facing this issue when we try to delete the data
> from the below table,
>
> Below are the tables and indexes we have in our project,
>
> CREATE TABLE four_g.s11_lusr_hourly (
> subscriber.msisdn VARCHAR,
> subscriber.plan VARCHAR,
> subscriber.imei VARCHAR,
> basestation.e_node_b_id VARCHAR,
> session_data.mme_ip VARCHAR,
> session_data.sgw_ip VARCHAR,
> session_data.user_ip VARCHAR,
> session_data.apn VARCHAR,
> session_data.rat VARCHAR,
> session_data.start_time TIMESTAMP,
> session_data.end_time TIMESTAMP,
> session_data.teid_uplink VARCHAR,
> session_data.teid_downlink VARCHAR,
> session_data.ipv4_uplink VARCHAR,
> session_data.ipv4_downlink VARCHAR,
> report_time TIMESTAMP NOT NULL,
> imsi VARCHAR NOT NULL,
> default_bearer_id BIGINT NOT NULL,
> dedicated_bearer_id BIGINT NOT NULL,
> rolled_up_count INTEGER,
>basestation.name VARCHAR,
> basestation.latitude FLOAT,
> basestation.longitude FLOAT,
> basestation.rnc_id VARCHAR,
> basestation.mobile_operator_name VARCHAR,
> basestation.mobile_operator_country VARCHAR,
> basestation.sector_id VARCHAR,
> subscriber.device_type VARCHAR,
> subscriber.device_model VARCHAR,
> subscriber.device_vendor VARCHAR,
> subscriber.mobile_operator_name VARCHAR,
> subscriber.mobile_operator_country VARCHAR,
> subscriber.roaming_status VARCHAR
> CONSTRAINT s11_lusr_hourly_pk PRIMARY KEY (
> report_time,
> imsi,
> default_bearer_id,
> dedicated_bearer_id
> )
> )
> SALT_BUCKETS = 4,
> IMMUTABLE_ROWS = true,
> COMPRESSION = 'GZ';
>
>
> Below are the secondary indexes i have,
>
> CREATE INDEX LUSR_HOURLY_INDEX_DASHBOARD ON FOUR_G.S11_LUSR_HOURLY
> (report_time, start_time, end_time, sgw_ip, user_ip, e_node_b_id, apn, rat,
> device_type, device_model, device_vendor);
> CREATE INDEX LUSR_HOURLY_INDEX_DASHBOARD_MAP ON FOUR_G.S11_LUSR_HOURLY
> (report_time, start_time, end_time, sgw_ip, user_ip, e_node_b_id, apn, rat,
> device_type, device_model, device_vendor) INCLUDE (name, latitude,
> longitude, sector_id, imsi);
> CREATE INDEX LUSR_HOURLY_INDEX_HISTORY ON FOUR_G.S11_LUSR_HOURLY
> (report_time, start_time, end_time, sgw_ip, user_ip, roaming_status)
> INCLUDE (apn, rat, e_node_b_id, device_type, device_vendor, device_model);
> CREATE INDEX LUSR_HOURLY_INDEX_TRENDS ON FOUR_G.S11_LUSR_HOURLY
> (report_time, start_time, end_time, sgw_ip, user_ip, roaming_status, name,
> device_type, apn, rat) include (msisdn);
> CREATE INDEX LUSR_HOURLY_INDEX_CATEGORY ON FOUR_G.S11_LUSR_HOURLY
> (report_time, start_time, end_time, sgw_ip, user_ip, msisdn, e_node_b_id,
> device_type) include (device_vendor, device_model);
> CREATE INDEX LUSR_HOURLY_INDEX_KPI ON FOUR_G.S11_LUSR_HOURLY (report_time,
> start_time, end_time, sgw_ip, user_ip,roaming_status, apn, rat, name,
> device_type) include (msisdn, plan, imsi);
>
>
> Below is the exception i am getting,
>
> Caused by: java.sql.SQLException: ERROR 1027 (42Y86): All columns referenced 
> in a WHERE clause must be available in every index for a table with immutable 
> rows. tableName=FOUR_G.S11_LUSR_HOURLY
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:389)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:546)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:534)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:225)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeBatch(PhoenixStatement.java:1130)
>
>
>
>
>
> Thanks,
>
> Nanda
>
>
>

Re: after server restart - getting exception - java.io.IOException: Timed out waiting for lock for row

2016-06-22 Thread rajeshb...@apache.org
+user@phoenix

Hi Vishnu,

Can you try restarting the region server where you are seeing the timeouts
on row locks? It would be helpful if you could share the RS logs.
Can you also provide details on what kind of operations were done before the
restart, and would you share the table schemas?

Thanks,
Rajeshbabu.

On Thu, Jun 23, 2016 at 9:31 AM, vishnu rao  wrote:

> i tried the following:
>
> 1) truncating system stats did not work.
> 2) phoenix.stats.useCurrentTime=false
>
> but no luck - the wait time increased even further
>
> On Thu, Jun 23, 2016 at 9:04 AM, vishnu rao  wrote:
>
> > Hi Biju
> >
> > Yes, local index.
> >
> > It all started when 1 box crashed.
> >
> > When I brought up a new one the error was localized to the new box.
> >
> > After cluster restart - it's spread to all servers.
> >
> > I shall attempt to clear system stats and increase meta cache size
> > Vishnu,
> > Are you using "local index" on any of the tables? We have seen
> similar
> > issues while using "local index".
> >
> > On Wed, Jun 22, 2016 at 12:25 PM, vishnu rao 
> wrote:
> >
> > > the server dies when trying to take the thread dump.
> > >
> > > i believe i am experiencing this bug
> > >
> > > https://issues.apache.org/jira/browse/PHOENIX-2508
> > >
> > > On Wed, Jun 22, 2016 at 5:03 PM, Heng Chen 
> > > wrote:
> > >
> > > > which thread hold the row lock? could you dump the jstack with
> 'jstack
> > -l
> > > > pid' ?
> > > >
> > > > 2016-06-22 16:14 GMT+08:00 vishnu rao :
> > > >
> > > > > hi Heng.
> > > > >
> > > > > 2016-06-22 08:13:42,256 WARN
> > > > > [B.defaultRpcServer.handler=32,queue=2,port=16020]
> > > regionserver.HRegion:
> > > > > Failed getting lock in batch put,
> > > > > row=\x01\xD6\xFD\xC9\xDC\xE4\x08\xC4\x0D\xBESM\xC2\x82\x14Z
> > > > >
> > > > > java.io.IOException: Timed out waiting for lock for row:
> > > > > \x01\xD6\xFD\xC9\xDC\xE4\x08\xC4\x0D\xBESM\xC2\x82\x14Z
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
> > > > >
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
> > > > >
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
> > > > >
> > > > > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> > > > >
> > > > > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> > > > >
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> > > > >
> > > > > at
> > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> > > > >
> > > > > at java.lang.Thread.run(Thread.java:745)
> > > > >
> > > > > On Wed, Jun 22, 2016 at 3:50 PM, Heng Chen <
> heng.chen.1...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Could you paste the whole jstack and relates rs log?   It seems
> row
> > > > write
> > > > > > lock was occupied by some thread.  Need more information to find
> > it.
> > > > > >
> > > > > > 2016-06-22 13:48 GMT+08:00 vishnu rao :
> > > > > >
> > > > > > > need some help. this has happened for 2 of my servers
> > > > > > > -
> > > > > > >
> > > > > > > *[B.defaultRpcServer.handler=2,queue=2,port=16020]
> > > > > regionserver.HRegion:
> > > > > > > Failed getting lock in batch put,
> > > > > > > row=a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF*
> > > > > > >
> > > > > > > *java.io.IOException: Timed out waiting for lock for row:
> > > > > > > a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF*
> > > > > > >
> > > > > > > at
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
> > > > > > >
> > > > > > > at
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
> > > > > > >
> > > > > > > at
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
> > > > > > >
> > > > > > > at
> > > > > > >
> > > > > >
> > > > >
> > > >
> 

Re: dropping Phoenix local index is not dropping the local index table in HBase

2016-06-29 Thread rajeshb...@apache.org
Since we store all local index data in a single shared table, we do not drop
that table when we drop a local index.
We could check whether any local indexes remain and drop the shared table only
then.

Now, as part of PHOENIX-1734, we have reimplemented local indexes and store
the local index data in the same data table.

Thanks,
Rajeshbabu.

On Tue, Jun 28, 2016 at 4:45 PM, Vamsi Krishna 
wrote:

> Team,
>
> I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0).
> *Question: *Dropping Phoenix local index is not dropping the local index
> table in HBase. Can someone explain why?
>
> Phoenix:
> CREATE TABLE IF NOT EXISTS VAMSI.TABLE_B (COL1 VARCHAR(36) , COL2
> VARCHAR(36) , COL3 VARCHAR(36) CONSTRAINT TABLE_B_PK PRIMARY KEY (COL1))
> COMPRESSION='SNAPPY', SALT_BUCKETS=5;
> CREATE LOCAL INDEX IF NOT EXISTS IDX_TABLE_A_COL2 ON VAMSI.TABLE_A (COL2);
> DROP INDEX IF EXISTS IDX_TABLE_A_COL2 ON VAMSI.TABLE_A;
>
> hbase(main):012:0> list '_LOCAL.*'
> TABLE
> _LOCAL_IDX_VAMSI.TABLE_A
>
> Thanks,
> Vamsi Attluri
> --
> Vamsi Attluri
>


Re: CsvBulkLoadTool not populating Actual Table & Local Index Table when '-it' option specified

2016-07-05 Thread rajeshb...@apache.org
Hi Vamsi,

There is a bug with local indexes in 4.4.0 which is fixed in 4.7.0:
https://issues.apache.org/jira/browse/PHOENIX-2334

Thanks,
Rajeshbabu.

On Tue, Jul 5, 2016 at 6:21 PM, Vamsi Krishna 
wrote:

> Team,
>
> I'm working on HDP 2.3.2 (Phoenix 4.4.0, HBase 1.1.2).
> When I use '-it' option of CsvBulkLoadTool neither Acutal Table nor Local
> Index Table is loaded.
> *Command:*
> *HADOOP_CLASSPATH=/usr/hdp/current/hbase-master/lib/hbase-protocol.jar:/etc/hbase/conf
> yarn jar /usr/hdp/current/phoenix-client/phoenix-client.jar
> org.apache.phoenix.mapreduce.CsvBulkLoadTool
> -Dmapreduce.job.queuename=$QUEUE_NAME -s VAMSI -t TABLE_A -c COL1,COL2,COL3
> -it IDX_TABLE_A_COL2 -i test/test_data.csv -d ',' -z $ZOOKEEPER_QUORUM*
>
> When I use the same command without specifying the '-it' option it
> populates the Actual Table but not Local Index Table (Which is as expected).
> *Command:*
> *HADOOP_CLASSPATH=/usr/hdp/current/hbase-master/lib/hbase-protocol.jar:/etc/hbase/conf
> yarn jar /usr/hdp/current/phoenix-client/phoenix-client.jar
> org.apache.phoenix.mapreduce.CsvBulkLoadTool
> -Dmapreduce.job.queuename=$QUEUE_NAME -s VAMSI -t TABLE_A -c COL1,COL2,COL3
> -i test/test_data.csv -d ',' -z $ZOOKEEPER_QUORUM*
>
> Could someone please help me if you see anything wrong with what I'm doing?
>
> Here is how I'm setting up my table:
> CREATE TABLE IF NOT EXISTS VAMSI.TABLE_A (COL1 VARCHAR(36) , COL2
> VARCHAR(36) , COL3 VARCHAR(36) CONSTRAINT TABLE_A_PK PRIMARY KEY (COL1))
> COMPRESSION='SNAPPY', SALT_BUCKETS=5;
> CREATE LOCAL INDEX IF NOT EXISTS IDX_TABLE_A_COL2 ON VAMSI.TABLE_A (COL2);
> upsert into vamsi.table_a values ('abc123','abc','123');
> upsert into vamsi.table_a values ('def456','def','456');
>
> test_data.csv contains 2 records:
> ghi789,ghi,789
> jkl012,jkl,012
>
> Thanks,
> Vamsi Attluri
> --
> Vamsi Attluri
>


Fwd: Issue in Bulkload when Table name contains '-' (hyphen)

2016-07-14 Thread rajeshb...@apache.org
This is a bug. Please raise a JIRA for it, Dharmesh.
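
For reference, the quoted identifier does work when the statement is issued
directly through sqlline (as you confirmed); it is only the UPSERT generated
by the bulk load tool that drops the quotes. For example:

UPSERT INTO "RB-TEST" (ID, NAME) VALUES (1, 'test');
SELECT * FROM "RB-TEST";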

Thanks,
Rajeshbabu.



-- Forwarded message --
From: Dharmesh Guna 
Date: Thu, Jul 14, 2016 at 2:57 PM
Subject: RE: Issue in Bulkload when Table name contains '-' (hyphen)
To: "rajeshb...@apache.org" 


Hi Rajesh,



Here is the create table query.

CREATE TABLE "RB-TEST" (ID INTEGER PRIMARY KEY, NAME VARCHAR);



-Dharmesh



*From:* rajeshb...@apache.org [mailto:chrajeshbab...@gmail.com]
*Sent:* Thursday, July 14, 2016 2:45 PM
*To:* Dharmesh Guna

*Subject:* Re: Issue in Bulkload when Table name contains '-' (hyphen)



I see. Can you just provide the create table query?



Thanks,

Rajeshbabu.



On Thu, Jul 14, 2016 at 2:42 PM, Dharmesh Guna  wrote:

Thanks Rajesh for quick response.
Yes. I am able to upsert values from Phoenix sqlline thin client if I
mention table name as "DMG-TEST".
I will raise Jira ticket shortly.

-Dharmesh


-Original Message-
From: rajeshb...@apache.org [mailto:chrajeshbab...@gmail.com]
Sent: Thursday, July 14, 2016 2:39 PM
To: d...@phoenix.apache.org
Cc: dev-ow...@phoenix.apache.org
Subject: Re: Issue in Bulkload when Table name contains '-' (hyphen)

Hi Dharmesh,

Are you able to write to the table through a normal upsert query? Seems like a
bug. You can raise a JIRA for this.

Thanks,
Rajeshbabu.


On Thu, Jul 14, 2016 at 11:22 AM, Dharmesh Guna  wrote:

> Dear All,
>
> I am facing issue in bulk loading from csv files into Phoenix when my
> Phoenix table contains '-' (hyphen) in table name.
> I am using Phoenix version 4.7.0. Phoenix supports '-' (hyphen) in
> table name as per naming conventions. [a-zA-Z_0-9-.] If I try to bulk
> load using below command it fails and gives below error.
>
> sudo -u hadoop
> HADOOP_CLASSPATH=/usr/lib/hbase/hbase-protocol.jar:/usr/lib/hbase/conf
> / hadoop jar /usr/lib/phoenix/phoenix-client.jar
> org.apache.phoenix.mapreduce.CsvBulkLoadTool
> -Dfs.permissions.umask-mode=000 -t 'DMG-TEST' --input
> "/user/test/1/DMG-TEST.csv"  -d $'\t'
>
> So ultimately it removes any single or double quotes around table name
> before upserting which causes error 601.
>
> 2016-07-14 05:39:07,380 WARN [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics
> system already initialized!
> 2016-07-14 05:39:08,944 INFO [main]
> org.apache.phoenix.util.UpsertExecutor: Upserting SQL data with UPSERT
> INTO DMG-TEST ("APPLICATIONID", "0"."PACKAGEID") VALUES (?, ?)
> 2016-07-14 05:39:08,945 INFO [main] org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2016-07-14 05:39:08,952 INFO [main]
> org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> [.snappy]
> 2016-07-14 05:39:08,959 WARN [main] org.apache.hadoop.mapred.YarnChild:
> Exception running child : java.lang.RuntimeException:
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00):
> Syntax error. Encountered "-" at line 1, column 17.
> at org.apache.phoenix.util.UpsertExecutor.createStatement(UpsertExecutor.java:83)
> at org.apache.phoenix.util.UpsertExecutor.<init>(UpsertExecutor.java:94)
> at org.apache.phoenix.util.csv.CsvUpsertExecutor.<init>(CsvUpsertExecutor.java:63)
> at org.apache.phoenix.mapreduce.CsvToKeyValueMapper.buildUpsertExecutor(CsvToKeyValueMapper.java:85)
> at org.apache.phoenix.mapreduce.FormatToBytesWritableMapper.setup(FormatToBytesWritableMapper.java:142)
> at org.apache.phoenix.mapreduce.CsvToKeyValueMapper.setup(CsvToKeyValueMapper.java:67)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.phoenix.exception.PhoenixParserException: ERROR 601
> (42P00): Syntax error. Encountered "-" at line 1, column 17.
> at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1185)
> at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(P

Re: Join of multi index tables.

2016-07-20 Thread rajeshb...@apache.org
Hi William,

Currently Phoenix does not support index merge optimization.

Here is the issue I have raised for this; it would be better to share details
there to show interest:
https://issues.apache.org/jira/browse/PHOENIX-1801

Can't you combine the columns in both indexes into a single index to avoid
index merge and make the query run faster?
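
For example, a single index that carries both columns (the index name here is
just illustrative) lets the whole OR predicate be evaluated against one index
table instead of requiring an index merge:

CREATE INDEX yy_ab ON yy (cf.a) INCLUDE (cf.b);
-- or, if cf.b is also commonly used as a leading filter:
-- CREATE INDEX yy_ab ON yy (cf.a, cf.b);

SELECT /*+ INDEX(yy yy_ab) */ pk FROM yy WHERE cf.a = 1 OR cf.b = 2;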

Thanks,
Rajeshbabu.


On Wed, Jul 20, 2016 at 2:33 PM, William  wrote:

> Hi all,
>  I have a question about global  secondary index in Phoenix 4.6.  See the
> following statements:
>
> create table yy (pk integer primary key, cf.a integer, cf.b integer);
> create index yya on yy(cf.a);
> create index yyb on yy(cf.b);
>
> then upsert some data into table yy; do the following query:
>
> select pk from yy where cf.a = 1 or cf.b = 2;
> select /*+INDEX(yy yya yyb)*/ pk from yy where cf.a = 1 or cf.b = 2;
>
> I expect that both index tables will be used in this query, and join the
> results from both index tables and return.
> But unfortunately, no index tables has been used but a full table scan
> instead. The statement with a hint have the same behaviour. The explain
> plan is :
>
> FULL SCAN OVER YY
> SERVER FILTER BY (CF.A = 1 OR CF.B = 2)
>
> Another example:
>
> create index yyi on yy (cf.a) include (cf.b);
> select pk from yy where cf.a = 1 or cf.b =2;
>
> then this query will hit the index table with a filter. The explain plan:
>
> FULL SCAN OVER YYI
>  SERVER FILTER BY (TO_INTEGER("A") = 1 OR CF."B" = 2)
>
> my question is : I only have index yya and yyb, and i want the above
> select statement to hit both index tables,  do we support this scenario?
> if so, how can i use both indexes?
> if not, why? It is too hard to implement ? Is there a plan to support it ?
>
> Thanks
> William
>
>
>
>


Re: Phoenix custom UDF

2016-10-03 Thread rajeshb...@apache.org
Hi Akhil,

There is no support for user-defined aggregate functions (UDAFs) in Phoenix at
present; only scalar UDFs are supported.

Thanks,
Rajeshbabu.

On Sun, Oct 2, 2016 at 6:57 PM, akhil jain  wrote:

> Thanks James. It worked.
>
> Can you please provide me pointers to write UDAFs in phoenix like we
> have GenericUDAFEvaluator for writing Hive UDAFs.
> I am looking for a tutorial like
> http://beekeeperdata.com/posts/hadoop/2015/08/17/hive-udaf-tutorial.html
> for Phoenix.
>
> Thanks,
> Akhil
>
> On Sun, Oct 2, 2016 at 7:03 AM, James Taylor 
> wrote:
>
>> Hi Akhil,
>> You want to create an Array, convert it to its byte[] representation, and
>> set the ptr argument to point to it. Take a look at ArrayIT for examples of
>> creating an Array:
>>
>> // Create Array of FLOAT
>> Float[] floatArr =  new Float[2];
>> floatArr[0] = 64.87f;
>> floatArr[1] = 89.96f;
>> Array array = conn.createArrayOf("FLOAT", floatArr);
>> // Convert to byte[]
>> byte[] arrayAsBytes = PFloatArray.INSTANCE.toBytes(array);
>> // Set ptr to byte[]
>> ptr.set(arrayAsBytes);
>>
>> Thanks,
>> James
>>
>>
>> On Sat, Oct 1, 2016 at 9:19 AM, akhil jain 
>> wrote:
>>
>>> I am using hbase 1.1 with phoenix 4.8. I have a table with columns whose
>>> datatype is 'VARBINARY'.
>>> The data in these columns is compressed float[] in form of ByteBuffer
>>> called DenseVector which is an ordered set of 16 bit IEEE floats of
>>> cardinality no more than 3996.
>>> I have loaded data into phoenix tables through spark-phoenix plugin.
>>> Just to give an idea the mapreduce jobs write data in hive in parquet gzip
>>> format. I read data into a dataframe using sqlContext.parquetFile() ,
>>> register it as temp table and run a sqlContext.sql("select query ...")
>>> query and finally calling res.save("org.apache.phoenix.spark",
>>> SaveMode.Overwrite, Map("table" -> "SITEFLOWTABLE" ,"zkUrl" ->
>>> "localhost:2181"))
>>> We have a hive/shark UDF(code pasted below) that can decode these
>>> ByteBuffer columns and display them in readable float[]. So this UDF works
>>> on spark-shell.
>>> Now I just want to write a similar UDF in phoenix and run queries as "
>>> select uplinkcostbuffer,DENSEVECTORUDF(uplinkcostbuffer) from
>>> siteflowtable" and further write UDAFs over it.
>>> How do I make phoenix UDF return float[] ?? I have tried a lot many
>>> things but none seem to work.
>>>
>>> Below is the code for hive/shark UDF-
>>> 
>>> --
>>> package com.ABCD.densevectorudf;
>>>
>>> import java.nio.ByteBuffer;
>>> import java.util.Vector;
>>>
>>> import org.apache.commons.logging.Log;
>>> import org.apache.commons.logging.LogFactory;
>>> import org.apache.hadoop.hive.ql.exec.Description;
>>> import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
>>> import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
>>> import org.apache.hadoop.hive.ql.metadata.HiveException;
>>> import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
>>> import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
>>> import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
>>> import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
>>> import org.apache.hadoop.hive.serde2.objectinspector.primitive.BinaryObjectInspector;
>>> import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
>>> import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;
>>> import org.apache.hadoop.io.FloatWritable;
>>>
>>> import com.ABCD.common.attval.IDenseVectorOperator;
>>> import com.ABCD.common.attval.Utility;
>>> import com.ABCD.common.attval.array.BufferOperations;
>>> import com.ABCD.common.attval.array.FloatArrayFactory;
>>>
>>> @Description(name = "DenseVectorUDF",
>>> value = "Dense Vector UDF in Hive / Shark\n"
>>> + "_FUNC_(binary|hex) - "
>>> + "Returns the dense vector array value\n",
>>> extended = "Dense Vector UDAF in Hive / Shark")
>>>
>>> public class DenseVectorUDF extends GenericUDF {
>>> private static final Log LOG = LogFactory.getLog(DenseVectorUDF.class.getName());
>>> private ObjectInspector inputOI;
>>> private ListObjectInspector outputOI;
>>>
>>> @Override
>>> public String getDisplayString(String[] children) {
>>> StringBuilder sb = new StringBuilder();
>>> sb.append("densevectorudf(");
>>> for (int i = 0; i < children.length; i++) {
>>> sb.append(children[i]);
>>> if (i + 1 != children.length) {
>>> sb.append(",");
>>> }
>>> }
>>> sb.append(")");
>>> return sb.toString();
>>> }
>>>
>>> @Override
>>> public ObjectInspector initialize(ObjectInspector[] arguments) throws
>>> UDFArgumentException {
>>> if (arguments.length == 1) {
>>> ObjectInspector first = arguments[0];
>>> if (!(first instanceof StringObjectInspector) && !(first instanceof
>>> BinaryObjectInspector)) {
>>> LOG.error("first argument must be a either binary or hex buffer");
>>> throw new UDFArgumentException("fir

Re: Paged Queries Not Working

2017-03-24 Thread rajeshb...@apache.org
OFFSET is supported from Phoenix 4.8.0 onwards; the 4.4.0 and 4.7.0 versions
you are testing do not have it.

https://issues.apache.org/jira/browse/PHOENIX-2722
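
For reference, on 4.8.0+ the paging syntax puts LIMIT before OFFSET; on 4.4/4.7
the usual approach is to page by filtering on the primary key instead. A small
sketch (PK_COL is a placeholder for your actual key column):

-- 4.8.0+ only:
SELECT * FROM A.SEGMENT ORDER BY field LIMIT 10 OFFSET 10;

-- pre-4.8 alternative: remember the last key returned and page forward from it
SELECT * FROM A.SEGMENT WHERE PK_COL > ? ORDER BY PK_COL LIMIT 10;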

Thanks,
Rajeshbabu.

On Fri, Mar 24, 2017 at 1:13 PM, Bernard Quizon <
bernard.qui...@stellarloyalty.com> wrote:

> Hi,
>
> I was using versions phoenix-4.4.0-hbase-1.1 and phoenix-4.7.0-hbase-1.1
> to test LIMIT and OFFSET
> But queries are resulting to errors:
>
> Samples:
>
> 0: jdbc:phoenix:localhost> SELECT * FROM A.SEGMENT ORDER BY field Limit 10
> offset 10;
> Error: ERROR 602 (42P00): Syntax error. Missing "EOF" at line 1, column
> 49. (state=42P00,code=602)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00):
> Syntax error. Missing "EOF" at line 1, column 49.
> at org.apache.phoenix.exception.PhoenixParserException.newException(
> PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.
> parseStatement(PhoenixStatement.java:1185)
> at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(
> PhoenixStatement.java:1268)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(
> PhoenixStatement.java:1339)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: MissingTokenException(inserted [@-1,0:0='<missing EOF>',<-1>,1:48] at offset)
> at org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(
> PhoenixSQLParser.java:350)
> at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
> at org.apache.phoenix.parse.PhoenixSQLParser.statement(
> PhoenixSQLParser.java:510)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
> ... 9 more
>
> 0: jdbc:phoenix:localhost> SELECT * FROM A.SEGMENT offset 10 limit 10;
> Error: ERROR 602 (42P00): Syntax error. Missing "EOF" at line 1, column
> 32. (state=42P00,code=602)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00):
> Syntax error. Missing "EOF" at line 1, column 32.
> at org.apache.phoenix.exception.PhoenixParserException.newException(
> PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.
> parseStatement(PhoenixStatement.java:1185)
> at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(
> PhoenixStatement.java:1268)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(
> PhoenixStatement.java:1339)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: MissingTokenException(inserted [@-1,0:0='<missing EOF>',<-1>,1:31] at 10)
> at org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(
> PhoenixSQLParser.java:350)
> at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
> at org.apache.phoenix.parse.PhoenixSQLParser.statement(
> PhoenixSQLParser.java:510)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
> ... 9 more
>
> Reference: https://phoenix.apache.org/paged.html
>
> Is it not supported yet on the versions I mentioned above?
>
> Thanks!
>


Re: UDF doesn't work with Sort Merge Join Hint

2017-04-07 Thread rajeshb...@apache.org
This is a bug and it has already been fixed:
https://issues.apache.org/jira/browse/PHOENIX-2841.
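
Until you are on a release that includes the fix, one thing you could try (an
untested sketch, not something confirmed here) is to keep the UDF out of the
joined select and apply it over a derived table instead:

SELECT my_udf(t.test_column) AS udf
FROM (
    SELECT /*+ USE_SORT_MERGE_JOIN */ a.test_column
    FROM table1 a
    INNER JOIN table2 b ON a.id = b.id
) t;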

On Fri, Apr 7, 2017 at 1:18 PM, Bernard Quizon <
bernard.qui...@stellarloyalty.com> wrote:

> HI,
>
> I'm using Phoenix-4.4-hbase-1.1 and whenever I run a statement that has a
> UDF in the SELECT clause with the sort merge join hint, it results to the
> error function undefined.
>
> Sample:
>
> SELECT /*+ USE_SORT_MERGE_JOIN */
> my_udf(test_column) as udf
> FROM table1 a
> INNER JOIN
> table2 b
> on a.id = b.id
>
> Is this a known bug? If it is, is there a workaround?
>
>
> Thanks!
>
>
>
>


Re: How do local indexes work?

2017-06-29 Thread rajeshb...@apache.org
Slides 9 and 10 give details on how the read path works:

https://www.slideshare.net/rajeshbabuchintaguntla/local-secondary-indexes-in-apache-phoenix

Let us know if you need more information.

Thanks,
Rajeshbabu.

On Fri, Jun 30, 2017 at 4:20 AM, Neelesh  wrote:

> Hi,
>The documentation says  - "From 4.8.0 onwards we are storing all local
> index data in the separate shadow column families in the same data table".
>
> It is not quite clear to me how the read path works with local indexes. Is
> there any document that has some details on how it works ? PHOENIX-1734 has
> some (shadow CFs), but not enough.
>
> Any pointers are appreciated!
>
> Thanks
>
>


Re: How do local indexes work?

2017-06-29 Thread rajeshb...@apache.org
Yes Neelesh, at present reads need to touch all the regions, and there is a
JIRA for the optimization [1].

[1] https://issues.apache.org/jira/browse/PHOENIX-3941

Thanks,
Rajeshbabu.

On Fri, Jun 30, 2017 at 11:16 AM, Neelesh  wrote:

> Thanks for the slides, Rajesh Babu.
>
> Does this mean any read path will have to scan all regions of a table? Is
> there an optimization available if the primary key and the index share a
> common prefix, thus reducing the number of regions to look at?
>
> Thanks again!
>
>
>
> On Jun 29, 2017 7:24 PM, "rajeshb...@apache.org" 
> wrote:
>
> Slides 9 and 10 give details on how the read path works:
>
> https://www.slideshare.net/rajeshbabuchintaguntla/local-secondary-indexes-in-apache-phoenix
>
> Let us know if you need more information.
>
> Thanks,
> Rajeshbabu.
>
> On Fri, Jun 30, 2017 at 4:20 AM, Neelesh  wrote:
>
>> Hi,
>>The documentation says  - "From 4.8.0 onwards we are storing all local
>> index data in the separate shadow column families in the same data table".
>>
>> It is not quite clear to me how the read path works with local indexes.
>> Is there any document that has some details on how it works ? PHOENIX-1734
>> has some (shadow CFs), but not enough.
>>
>> Any pointers are appreciated!
>>
>> Thanks
>>
>>
>
>


Re: Forward: Re: local index turn disable when region split

2017-11-06 Thread rajeshb...@apache.org
Hi Vergil,

Thanks for sharing the information. Can you please share the complete logs
from which you collected this information?

Thanks,
Rajeshbabu.

On Mon, Nov 6, 2017 at 10:08 AM, vergil  wrote:

> Thanks for your reply.
>
> Here is My test environment:
> hbase:1.2.5
> phoenix:4.12
>
> A Phoenix table with a local index.
>
> The region will wait for a long time when the region splits.
> *The region server webapp page shows the following:*
> Start Time: Mon Nov 06 17:49:27 CST 2017
> Description: Closing region TEST_TABLE_LOCAL,\x01,1509961563238.9070a4f2ace4516f985c6c9e31427eee.
> State: RUNNING (since 1mins, 46sec ago)
> Status: Finished memstore flush of ~39.94 MB/41879888, currentsize=14.31 MB/15002912 for region TEST_TABLE_LOCAL,\x01,1509961563238.9070a4f2ace4516f985c6c9e31427eee. in 856ms, sequenceid=41861, compaction requested=true (since 1mins, 45sec ago)
> *The region server waits for a long time and then the log shows the following:*
> 2017-11-06 18:02:06,450 INFO  
> [B.defaultRpcServer.handler=27,queue=0,port=16020]
> index.PhoenixIndexFailurePolicy: Successfully disabled index
> TEST_INDEX_LOCAL due to an exception while writing updates.
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:
> Failed to write to multiple index tables
> at org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCom
> mitter.write(TrackingParallelWriterIndexCommitter.java:229)
> at org.apache.phoenix.hbase.index.write.IndexWriter.write(
> IndexWriter.java:193)
> at org.apache.phoenix.hbase.index.write.IndexWriter.
> writeAndKillYourselfOnFailure(IndexWriter.java:154)
> at org.apache.phoenix.hbase.index.write.IndexWriter.
> writeAndKillYourselfOnFailure(IndexWriter.java:143)
> at org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.
> java:653)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:610)
> at org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(
> Indexer.java:593)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(
> RegionCoprocessorHost.java:1034)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$
> RegionOperation.call(RegionCoprocessorHost.java:1673)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.
> execOperation(RegionCoprocessorHost.java:1749)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.
> execOperation(RegionCoprocessorHost.java:1705)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.
> postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
> at org.apache.hadoop.hbase.regionserver.HRegion.
> doMiniBatchMutation(HRegion.java:3322)
> at org.apache.hadoop.hbase.regionserver.HRegion.
> batchMutate(HRegion.java:2881)
> at org.apache.hadoop.hbase.regionserver.HRegion.
> batchMutate(HRegion.java:2823)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.
> doBatchOp(RSRpcServices.java:758)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.
> doNonAtomicRegionMutation(RSRpcServices.java:720)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.
> multi(RSRpcServices.java:2168)
> at org.apache.hadoop.hbase.protobuf.generated.
> ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2188)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(
> RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:748)
>
> 2017-11-06 18:02:06,493 WARN  [hconnection-0x52dee0d4-shared--pool423-t2]
> client.AsyncProcess: #394, table=TEST_TABLE_LOCAL, attempt=11/11
> failed=40ops, last exception: org.apache.hadoop.hbase.RegionTooBusyException:
> org.apache.hadoop.hbase.RegionTooBusyException: failed to get a lock in
> 6 ms. regionName=TEST_TABLE_LOCAL,\x01,1509961563238.
> 9070a4f2ace4516f985c6c9e31427eee., server=ht06,16020,1509939923675
> at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:8176)
> at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:8162)
> at org.apache.hadoop.hbase.regionserver.HRegion.
> startRegionOperation(HRegion.java:8071)
> at org.apache.hadoop.hbase.regionserver.HRegion.
> batchMutate(HRegion.java:2866)
> at org.apache.hadoop.hbase.regionserver.HRegion.
> batchMutate(HRegion.java:2823)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.
> doBatchOp(RSRpcServices.java:758)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.
> doNonAtomicRegionMutation(RSRpcServices.java:720)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.
> multi(RSRpcServices.java:2168)
> at org.apache.hadoop.hbase.client.MultiServerCallable.
> call(MultiServerCallable.java:128)
>
>
> *The region goes back to normal after the above exception, and then the
> table's local index is marked as disabled.*
>
> -- Original --
> *From: * "Ted Yu";;
> *Send time:* Saturday, Nov 4, 2017 0:09 AM
> *To:* "us

[ANNOUNCE] Apache Phoenix 5.0.0 released

2018-07-13 Thread rajeshb...@apache.org
The Apache Phoenix team is pleased to announce the release of its next major
version, 5.0.0, compatible with HBase 2.0+. Apache Phoenix enables SQL-based
OLTP and operational analytics for Apache Hadoop, using Apache HBase as its
backing store and providing integration with other projects in the Apache
ecosystem such as Spark, Hive, Pig, Flume, and MapReduce.

The 5.0.0 release has feature parity with the recently released 4.14.0.
Highlights of the release include:

* Cleaned up deprecated APIs and leveraged new, more performant APIs
* Refactored coprocessor implementations to use the new Coprocessor and
Observer APIs in HBase 2.0
* Hive and Spark integrations work with the latest versions of Hive (3.0.0)
and Spark (2.3.0), respectively

For more details, visit our blog here [1] and download source and binaries
here [2].

Thanks,
Rajeshbabu (on behalf of the Apache Phoenix team)

[1]
https://blogs.apache.org/phoenix/entry/apache-phoenix-releases-next-major
[2] http://phoenix.apache.org/download.html


Re: New committer: Istvan Toth

2019-12-03 Thread rajeshb...@apache.org
Congratulations Istvan!!

Thanks,
Rajeshbabu.

On Tue, Dec 3, 2019, 10:06 PM Josh Elser  wrote:

> Everyone,
>
> On behalf of the PMC, I'm pleased to announce Apache Phoenix's newest
> committer: Istvan Toth.
>
> Please join me in extending a warm welcome to Istvan -- congratulations
> on this recognition. Thank you for your contributions and we all look
> forward to more involvement in the future!
>
> - Josh
>


Re: [ANNOUNCE] New Phoenix committer Viraj Jasani

2021-02-10 Thread rajeshb...@apache.org
Congratulations Viraj!!

On Wed, Feb 10, 2021, 2:01 PM Viraj Jasani  wrote:

> Thanks for warm wishes! Looking forward to continued involvement.
>
>
> On Wed, 10 Feb 2021 at 6:57 AM, Kadir Ozdemir
>  wrote:
>
>> Congratulations, Viraj!
>>
>> On Tue, Feb 9, 2021 at 3:25 PM Ankit Singhal  wrote:
>>
>> > Congratulations, Viraj!
>> >
>> > On Tue, Feb 9, 2021 at 2:18 PM Xinyi Yan  wrote:
>> >
>> >> Congratulations and welcome, Viraj!
>> >>
>> >> On Tue, Feb 9, 2021 at 2:07 PM Geoffrey Jacoby 
>> >> wrote:
>> >>
>> >>> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
>> Viraj
>> >>> Jasani has accepted the PMC's invitation to become a committer on
>> Apache
>> >>> Phoenix.
>> >>>
>> >>> We appreciate all of the great contributions Viraj has made to the
>> >>> community thus far and we look forward to his continued involvement.
>> >>>
>> >>> Congratulations and welcome, Viraj!
>> >>>
>> >>
>>
>


Re: [ANNOUNCE] New VP Apache Phoenix

2022-07-21 Thread rajeshb...@apache.org
Thank you very much Ankit and the whole community for giving me this
opportunity.

Thanks,
Rajeshbabu.

On Wed, Jul 20, 2022 at 12:48 PM Viraj Jasani  wrote:

> Thank you Ankit, it was great to have you as our VP!!
> Congratulations Rajeshbabu, really glad to have you as our new VP!!
>
>
> On Mon, Jul 18, 2022 at 3:50 PM Ankit Singhal 
> wrote:
>
>> I am pleased to announce that we have a new PMC chair
>> and VP for Phoenix. ASF board approved the transition from
>> me to Rajeshbabu post an agreement by the Phoenix PMC.
>>
>> This is a part of our tradition we adopted last year to change
>> a VP every year, and we thank Rajeshbabu for volunteering
>> to serve in this role.
>>
>> Please join me in congratulating Rajeshbabu!
>>
>> Thank you all for the opportunity to serve you as the VP for
>> these last years.
>>
>> Regards,
>> Ankit Singhal
>>
>


[ANNOUNCE] Richard Antal joins Phoenix PMC

2022-09-29 Thread rajeshb...@apache.org
On behalf of the Apache Phoenix PMC, I'm pleased to announce that Richard
Antal
has accepted our invitation to join the PMC.

We appreciate all of the great contributions Richard has made to the
community thus far and we look forward to his continued involvement.

Please join me in congratulating Richard Antal!

Thanks,
Rajeshbabu.


Re: [ANNOUNCE] New Phoenix PMC member Tanuj Khurana

2023-01-03 Thread rajeshb...@apache.org
Congratulations Tanuj Khurana.

Thanks,
Rajeshbabu.

On Tue, Jan 3, 2023, 9:47 AM Viraj Jasani  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Tanuj
> Khurana has accepted the invitation to join the PMC.
>
> We appreciate all of the great contributions Tanuj has made to the
> community thus far and we look forward to his continued involvement.
>
> Please join me in congratulating Tanuj Khurana!
>


Re: Having Trouble with Phoenix + Hbase

2023-08-19 Thread rajeshb...@apache.org
This could be because the phoenix-server jar is missing from the HBase
classpath.

Could you please copy the phoenix-server jar from the Phoenix distribution into
$HBASE_HOME/lib on all the nodes, restart HBase, and then start sqlline again.

https://phoenix.apache.org/installation.html


Thanks,
Rajeshbabu.

On Sun, Aug 20, 2023, 4:34 AM Kal Stevens  wrote:

> I am new to setting up a cluster, and I feel like I am doing something
> dumb.
>
> I get the following error message when I run sqlline to create a table,
> and I am not sure what I am doing wrong.
>
> I know that it is on the classpath for SQLLine, but I think it is making an
> RPC call to HBase. But I am not sure, because I do not see anything
> incorrect about this in the HBase logs.
>
> These are the arguments that I am using for SqlLine (I am doing this in
> the IDE to debug it not the command line)
>
> -d org.apache.phoenix.jdbc.PhoenixEmbeddedDriver -u
> jdbc:phoenix:zookeeper:2181:/hbase:/keytab -n none -p none --color=true
> --fastConnect=false --verbose=true --incremental=false
> --isolation=TRANSACTION_READ_COMMITTED
>
>
> I am not sure what /hbase:/keytab are supposed to be
>
> It is trying to connect to this host/port
>
> hbase/192.168.1.162:16000
>
>
> It seems to be trying to create the "SYSTEM.CATALOG"
>
>
>
> Caused by:
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
> org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured
> region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for
> table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or
> table descriptor if you want to bypass sanity checks
> at
> org.apache.hadoop.hbase.util.TableDescriptorChecker.warnOrThrowExceptionForFailure(TableDescriptorChecker.java:339)
> at
> org.apache.hadoop.hbase.util.TableDescriptorChecker.checkClassLoading(TableDescriptorChecker.java:331)
> at
> org.apache.hadoop.hbase.util.TableDescriptorChecker.sanityCheck(TableDescriptorChecker.java:110)
> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2316)
> at
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:691)
> at
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
> at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
> Caused by: java.io.IOException: Unable to load configured region split
> policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
> 'SYSTEM.CATALOG'
> at
> org.apache.hadoop.hbase.regionserver.RegionSplitPolicy.getSplitPolicyClass(RegionSplitPolicy.java:122)
> at
> org.apache.hadoop.hbase.util.TableDescriptorChecker.checkClassLoading(TableDescriptorChecker.java:328)
> ... 8 more
> Caused by: java.lang.ClassNotFoundException:
> org.apache.phoenix.schema.MetaDataSplitPolicy
> at
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
> at
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:527)
> at java.base/java.lang.Class.forName0(Native Method)
> at java.base/java.lang.Class.forName(Class.java:315)
> at
> org.apache.hadoop.hbase.regionserver.RegionSplitPolicy.getSplitPolicyClass(RegionSplitPolicy.java:118)
> ... 9 more
>


Re: [ANNOUNCE] Rushabh Shah as Phoenix Committer

2023-08-23 Thread rajeshb...@apache.org
Congratulations Rushabh!

Thanks,
Rajeshbabu.

On Thu, Aug 24, 2023, 4:35 AM Xinyi Yan  wrote:

> Congratulations Rushabh!
>
> Sent from my iPhone
>
> > On Aug 23, 2023, at 3:37 PM, Andrew Purtell 
> wrote:
> >
> > Congratulations and welcome, Rushabh!
> >
> >> On Aug 15, 2023, at 1:46 PM, Viraj Jasani  wrote:
> >>
> >> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> Rushabh
> >> Shah has accepted the PMC's invitation to become a committer on Apache
> >> Phoenix.
> >>
> >> We appreciate all of the great contributions Rushabh has made to the
> >> community thus far and we look forward to his continued involvement.
> >>
> >> Congratulations and Welcome, Rushabh!
>


Re: [ANNOUNCE] Rushabh Shah as Phoenix Committer

2023-08-23 Thread rajeshb...@apache.org
Congratulations Rushabh!

Thanks,
Rajeshbabu.

On Thu, Aug 24, 2023, 5:37 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:

> Congratulations Rushabh!
>
> Thanks,
> Rajeshbabu.
>
> On Thu, Aug 24, 2023, 4:35 AM Xinyi Yan  wrote:
>
>> Congratulations Rushabh!
>>
>> Sent from my iPhone
>>
>> > On Aug 23, 2023, at 3:37 PM, Andrew Purtell 
>> wrote:
>> >
>> > Congratulations and welcome, Rushabh!
>> >
>> >> On Aug 15, 2023, at 1:46 PM, Viraj Jasani  wrote:
>> >>
>> >> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
>> Rushabh
>> >> Shah has accepted the PMC's invitation to become a committer on Apache
>> >> Phoenix.
>> >>
>> >> We appreciate all of the great contributions Rushabh has made to the
>> >> community thus far and we look forward to his continued involvement.
>> >>
>> >> Congratulations and Welcome, Rushabh!
>>
>


Re: [ANNOUNCE] New Phoenix PMC member Jacob Isaac

2023-11-06 Thread rajeshb...@apache.org
Congratulations Jacob Isaac!!

Thanks,
Rajeshbabu.

On Tue, Nov 7, 2023, 7:37 AM Viraj Jasani  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Jacob
> Isaac has accepted the invitation to join the PMC.
>
> We appreciate all of the great contributions Jacob has made to the
> community thus far and we look forward to his continued involvement.
>
> Please join me in congratulating Jacob Isaac!
>


[ANNOUNCE] Phoenix Thirdparty 2.1.0 released

2023-12-21 Thread rajeshb...@apache.org
Hello Everyone,

The Apache Phoenix team is pleased to announce the immediate availability
of the 2.1.0 release of the Phoenix Thirdparty project.

This project is used by the Apache Phoenix project to encapsulate a number of
core dependencies that Phoenix relies upon, ensuring that they are properly
isolated from Phoenix's downstream and upstream users.

The recent release has upgraded Guava to version 32.1.3-jre from the
previous 31.0.1-android version.

Initially, the 4.x branch maintained compatibility with Java 7, necessitating
the use of the Android variant of Guava. However, with the end-of-life (EOL)
status of the 4.x branch, the move to the standard JRE version of Guava
signifies a shift in compatibility standards.

The release may be downloaded here [1].


Thanks,
Rajeshbabu
(on behalf of the Apache Phoenix team)

[1] https://phoenix.apache.org/download.html


Re: [ANNOUNCE] Lokesh Khurana as Phoenix Committer

2024-01-17 Thread rajeshb...@apache.org
Congratulations Lokesh!!

On Wed, Jan 17, 2024, 10:55 PM Viraj Jasani  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Lokesh
> Khurana has accepted the PMC's invitation to become a committer on Apache
> Phoenix.
>
> We appreciate all of the great contributions Lokesh has made to the
> community thus far and we look forward to his continued involvement.
>
> Congratulations and Welcome, Lokesh!
>


[ANNOUNCE] Phoenix Omid 1.1.1 released

2024-02-08 Thread rajeshb...@apache.org
Hello Everyone,

The Apache Phoenix team is pleased to announce the immediate availability
of the 1.1.1 release of the Phoenix Omid.

Highlights of this release are:

- JDK 17 support
- TLS support
- Changed default commit store module to HBase from InMemory.
- Various dependency version updates.
- Improved default network interface logic

Download source from [1].

Thanks,
Rajeshbabu
(on behalf of the Apache Phoenix Team)

[1] https://phoenix.apache.org/download.html


[ANNOUNCE] Phoenix Omid 1.1.2 released

2024-03-26 Thread rajeshb...@apache.org
Hello Everyone,

The Apache Phoenix team is pleased to announce the immediate availability
of the 1.1.2 release of Phoenix Omid. Phoenix Omid (Optimistically
transaction Management In Datastores) is a flexible, reliable, highly
performant and scalable transactional framework that allows Big Data
applications to execute ACID transactions on top of Phoenix tables.

This is a patch release with the following changes on top of the recently
released Phoenix Omid 1.1.1:

- Make use of shaded Guava from phoenix-thirdparty and ban unrelocated
Guava usage.
- Bumped HBase to the latest 2.5 release and Hadoop to the latest 3.2 release.
- Changed the default waitStrategy from HIGH_THROUGHPUT to LOW_CPU.

Download source from [1].

Thanks,
Rajeshbabu
(on behalf of the Apache Phoenix Team)

[1] https://phoenix.apache.org/download.html


Re: [ANNOUNCE] Palash Chauhan as Phoenix Committer

2024-06-12 Thread rajeshb...@apache.org
Congratulations Palash!!

On Wed, Jun 12, 2024, 4:30 PM Kadir Ozdemir 
wrote:

> Congratulations Palash!
>
> On Tue, Jun 11, 2024 at 9:53 PM Viraj Jasani  wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Palash
> > Chauhan has accepted the PMC's invitation to become a committer on Apache
> > Phoenix.
> >
> > We appreciate all of the great contributions Palash has made to the
> > community thus far and we look forward to their continued involvement.
> >
> > Congratulations and Welcome, Palash!
> >
>