Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-27 Thread Kumar Palaniappan
You mean CDH 5.9 and 5.10? And also HBASE-17587?

On Mon, Nov 27, 2017 at 12:37 AM, Pedro Boado  wrote:

> My branch is based on 4.x-HBase. But I wouldn't recommend using cdh4.9 &
> 4.10 as they include HBASE-16604 but not its fix HBASE-17187 (issue found
> in PHOENIX-3736)
>
> Cheers.
>
> On 27 Nov 2017 08:28, "Kumar Palaniappan" 
> wrote:
>
>> Thanks @James.
>>
>> Since this patch (PHOENIX-4372) is tightly coupled with CDH 5.11.2, we would
>> like to get the 4.13-HBase-1.2 branch released so we can make it compatible
>> with CDH 5.9.2, the version our clusters are on, and test it.
>>
>> @Pedro, please let me know how to get access to 4.13.1-HBase-1.2.
>>
>> On Fri, Nov 24, 2017 at 10:08 AM, James Taylor 
>> wrote:
>>
>>> @Kumar - yes, we’ll do a 4.13.1 release shortly for HBase 1.2 out of the
>>> head of the 4.x-HBase-1.2 branch. Pedro is going to be the RM going forward
>>> for this branch and will do CDH releases from it. You can track that on
>>> PHOENIX-4372.
>>>
>>> @Flavio - Pedro is targeting a CDH 5.11.2 release.
>>>
>>> On Fri, Nov 24, 2017 at 8:53 AM Flavio Pompermaier 
>>> wrote:
>>>
>>>> Hi to all,
>>>> is there any parcel for Phoenix 4.13 and Cloudera CDH
>>>> 5.9-5.10 (HBase 1.2) available somewhere?
>>>>
>>>> Best,
>>>> Flavio
>>>>
>>>> On Thu, Nov 23, 2017 at 7:33 AM, Kumar Palaniappan <
>>>> kpalaniap...@marinsoftware.com> wrote:
>>>>
>>>>> @James, are you still planning to release 4.13-HBase-1.2?
>>>>>
>>>>> On Sun, Nov 19, 2017 at 1:21 PM, James Taylor 
>>>>> wrote:
>>>>>
>>>>>> Hi Kumar,
>>>>>> I started a discussion [1][2] on the dev list to find an RM for the
>>>>>> HBase 1.2 (and HBase 1.1) branch, but no one initially stepped up, so
>>>>>> there were no plans for a release. Subsequently we've heard from a few
>>>>>> folks that they needed it, and Pedro Boado volunteered to do a
>>>>>> CDH-compatible release (see PHOENIX-4372), which requires an up-to-date
>>>>>> HBase 1.2 based release.
>>>>>>
>>>>>> So I've volunteered to do one more Phoenix 4.13.1 release for HBase
>>>>>> 1.2 and 1.1. I'm hoping you, Pedro and others that need 1.2 based 
>>>>>> releases
>>>>>> can volunteer to be the RM and do further releases.
>>>>>>
>>>>>> One thing is clear, though - folks need to be on the dev and user
>>>>>> lists so they can take part in DISCUSS threads.
>>>>>>
>>>>>> Thanks,
>>>>>> James
>>>>>>
>>>>>> [1] https://lists.apache.org/thread.html/5b8b44acb1d36087703
>>>>>> 09767c3cddecbc6484c29452fe6750d8e1516@%3Cdev.phoenix.apache.org%3E
>>>>>> [2] https://lists.apache.org/thread.html/70cffa798d5f21ef87b02e0
>>>>>> 7aeca8c7982b0b30251411b7be17fadf9@%3Cdev.phoenix.apache.org%3E
>>>>>>
>>>>>> On Sun, Nov 19, 2017 at 12:23 PM, Kumar Palaniappan <
>>>>>> kpalaniap...@marinsoftware.com> wrote:
>>>>>>
>>>>>>> Are there any plans to release Phoenix 4.13 compatible with HBase
>>>>>>> 1.2?
>>>>>>>
>>>>>>> On Sat, Nov 11, 2017 at 5:57 PM, James Taylor <
>>>>>>> jamestay...@apache.org> wrote:
>>>>>>>
>>>>>>>> The Apache Phoenix team is pleased to announce the immediate
>>>>>>>> availability of the 4.13.0 release. Apache Phoenix enables SQL-based 
>>>>>>>> OLTP
>>>>>>>> and operational analytics for Apache Hadoop using Apache HBase as its
>>>>>>>> backing store and providing integration with other projects in the 
>>>>>>>> Apache
>>>>>>>> ecosystem such as Spark, Hive, Pig, Flume, and MapReduce. The 4.x 
>>>>>>>> releases
>>>>>>>> are compatible with HBase 0.98 and 1.3.
>>>>>>>>
>>>>>>>> Highlights of the release include:
>>>>>>>>
>>>>>>>> * Critical bug fix to prevent snapshot creation of SYSTEM.CATALOG
>>>>>>>> when connecting [1]
>>>>>>>> * Numerous bug fixes around handling of row deletion [2]
>>>>>>>> * Improvements to statistics collection [3]
>>>>>>>> * New COLLATION_KEY built-in function for linguistic sort [4]
>>>>>>>>
>>>>>>>> Source and binary downloads are available here [5].
>>>>>>>>
>>>>>>>> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
>>>>>>>> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>>>>>>> 20rowDeletion
>>>>>>>> [3] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>>>>>>> 20statsCollection
>>>>>>>> [4] https://phoenix.apache.org/language/functions.html#colla
>>>>>>>> tion_key
>>>>>>>> [5] http://phoenix.apache.org/download.html
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Flavio Pompermaier
>>>> Development Department
>>>>
>>>> OKKAM S.r.l.
>>>> Tel. +(39) 0461 041809 <+39%200461%20041809>
>>>>
>>>
>>
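For readers curious about the COLLATION_KEY function highlighted in the
announcement above, a minimal usage sketch (the NAMES table and NAME column
here are hypothetical; see link [4] in the announcement for the full
signature and options):

-- Sort linguistically for a locale instead of by raw byte order
SELECT NAME FROM NAMES ORDER BY COLLATION_KEY(NAME, 'zh_TW');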


Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-27 Thread Kumar Palaniappan
Thanks @James.

Since this patch (PHOENIX-4372) is tightly coupled with CDH 5.11.2, we would
like to get the 4.13-HBase-1.2 branch released so we can make it compatible
with CDH 5.9.2, the version our clusters are on, and test it.

@Pedro, please let me know how to get access to 4.13.1-HBase-1.2.

On Fri, Nov 24, 2017 at 10:08 AM, James Taylor 
wrote:

> @Kumar - yes, we’ll do a 4.13.1 release shortly for HBase 1.2 out of the
> head of the 4.x-HBase-1.2 branch. Pedro is going to be the RM going forward
> for this branch and will do CDH releases from it. You can track that on
> PHOENIX-4372.
>
> @Flavio - Pedro is targeting a CDH 5.11.2 release.
>
> On Fri, Nov 24, 2017 at 8:53 AM Flavio Pompermaier 
> wrote:
>
>> Hi to all,
>> is there any parcel for Phoenix 4.13 and Cloudera CDH 5.9-5.10
>> (HBase 1.2) available somewhere?
>>
>> Best,
>> Flavio
>>
>> On Thu, Nov 23, 2017 at 7:33 AM, Kumar Palaniappan <
>> kpalaniap...@marinsoftware.com> wrote:
>>
>>> @James, are you still planning to release 4.13-HBase-1.2?
>>>
>>> On Sun, Nov 19, 2017 at 1:21 PM, James Taylor 
>>> wrote:
>>>
>>>> Hi Kumar,
>>>> I started a discussion [1][2] on the dev list to find an RM for the
>>>> HBase 1.2 (and HBase 1.1) branch, but no one initially stepped up, so there
>>>> were no plans for a release. Subsequently we've heard from a few folks that
>>>> they needed it, and Pedro Boado volunteered to do a CDH-compatible release
>>>> (see PHOENIX-4372), which requires an up-to-date HBase 1.2 based release.
>>>>
>>>> So I've volunteered to do one more Phoenix 4.13.1 release for HBase 1.2
>>>> and 1.1. I'm hoping you, Pedro and others that need 1.2 based releases can
>>>> volunteer to be the RM and do further releases.
>>>>
>>>> One thing is clear, though - folks need to be on the dev and user lists
>>>> so they can take part in DISCUSS threads.
>>>>
>>>> Thanks,
>>>> James
>>>>
>>>> [1] https://lists.apache.org/thread.html/5b8b44acb1d3608770309767c3cdde
>>>> cbc6484c29452fe6750d8e1516@%3Cdev.phoenix.apache.org%3E
>>>> [2] https://lists.apache.org/thread.html/70cffa798d5f21ef87b02e07aeca8c
>>>> 7982b0b30251411b7be17fadf9@%3Cdev.phoenix.apache.org%3E
>>>>
>>>> On Sun, Nov 19, 2017 at 12:23 PM, Kumar Palaniappan <
>>>> kpalaniap...@marinsoftware.com> wrote:
>>>>
>>>>> Are there any plans to release Phoenix 4.13 compatible with HBase 1.2?
>>>>>
>>>>> On Sat, Nov 11, 2017 at 5:57 PM, James Taylor 
>>>>> wrote:
>>>>>
>>>>>> The Apache Phoenix team is pleased to announce the immediate
>>>>>> availability of the 4.13.0 release. Apache Phoenix enables SQL-based OLTP
>>>>>> and operational analytics for Apache Hadoop using Apache HBase as its
>>>>>> backing store and providing integration with other projects in the Apache
>>>>>> ecosystem such as Spark, Hive, Pig, Flume, and MapReduce. The 4.x 
>>>>>> releases
>>>>>> are compatible with HBase 0.98 and 1.3.
>>>>>>
>>>>>> Highlights of the release include:
>>>>>>
>>>>>> * Critical bug fix to prevent snapshot creation of SYSTEM.CATALOG
>>>>>> when connecting [1]
>>>>>> * Numerous bug fixes around handling of row deletion [2]
>>>>>> * Improvements to statistics collection [3]
>>>>>> * New COLLATION_KEY built-in function for linguistic sort [4]
>>>>>>
>>>>>> Source and binary downloads are available here [5].
>>>>>>
>>>>>> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
>>>>>> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>>>>> 20rowDeletion
>>>>>> [3] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>>>>> 20statsCollection
>>>>>> [4] https://phoenix.apache.org/language/functions.html#collation_key
>>>>>> [5] http://phoenix.apache.org/download.html
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> Flavio Pompermaier
>> Development Department
>>
>> OKKAM S.r.l.
>> Tel. +(39) 0461 041809 <+39%200461%20041809>
>>
>


Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-22 Thread Kumar Palaniappan
@James, are you still planning to release 4.13-HBase-1.2?

On Sun, Nov 19, 2017 at 1:21 PM, James Taylor 
wrote:

> Hi Kumar,
> I started a discussion [1][2] on the dev list to find an RM for the HBase
> 1.2 (and HBase 1.1) branch, but no one initially stepped up, so there were
> no plans for a release. Subsequently we've heard from a few folks that they
> needed it, and Pedro Boado volunteered to do a CDH-compatible release
> (see PHOENIX-4372), which requires an up-to-date HBase 1.2 based release.
>
> So I've volunteered to do one more Phoenix 4.13.1 release for HBase 1.2
> and 1.1. I'm hoping you, Pedro and others that need 1.2 based releases can
> volunteer to be the RM and do further releases.
>
> One thing is clear, though - folks need to be on the dev and user lists so
> they can take part in DISCUSS threads.
>
> Thanks,
> James
>
> [1] https://lists.apache.org/thread.html/5b8b44acb1d3608770309767c3cdde
> cbc6484c29452fe6750d8e1516@%3Cdev.phoenix.apache.org%3E
> [2] https://lists.apache.org/thread.html/70cffa798d5f21ef87b02e07aeca8c
> 7982b0b30251411b7be17fadf9@%3Cdev.phoenix.apache.org%3E
>
> On Sun, Nov 19, 2017 at 12:23 PM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
>
>> Are there any plans to release Phoenix 4.13 compatible with HBase 1.2?
>>
>> On Sat, Nov 11, 2017 at 5:57 PM, James Taylor 
>> wrote:
>>
>>> The Apache Phoenix team is pleased to announce the immediate
>>> availability of the 4.13.0 release. Apache Phoenix enables SQL-based OLTP
>>> and operational analytics for Apache Hadoop using Apache HBase as its
>>> backing store and providing integration with other projects in the Apache
>>> ecosystem such as Spark, Hive, Pig, Flume, and MapReduce. The 4.x releases
>>> are compatible with HBase 0.98 and 1.3.
>>>
>>> Highlights of the release include:
>>>
>>> * Critical bug fix to prevent snapshot creation of SYSTEM.CATALOG when
>>> connecting [1]
>>> * Numerous bug fixes around handling of row deletion [2]
>>> * Improvements to statistics collection [3]
>>> * New COLLATION_KEY built-in function for linguistic sort [4]
>>>
>>> Source and binary downloads are available here [5].
>>>
>>> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
>>> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>> 20rowDeletion
>>> [3] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>> 20statsCollection
>>> [4] https://phoenix.apache.org/language/functions.html#collation_key
>>> [5] http://phoenix.apache.org/download.html
>>>
>>
>>
>


Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-20 Thread Kumar Palaniappan
It was in our local repo and is in our current production: CDH 5.9.2 with
Phoenix 4.10 / HBase 1.2.
I can make it available, if needed, once I'm cleared by management.

My point was, as I stated here
<https://issues.apache.org/jira/browse/PHOENIX-4247>, that we started noticing
it after the upgrade. Not quite sure where the issue is.


On Sun, Nov 19, 2017 at 5:29 PM, Pedro Boado  wrote:

> I would love to have a look at your port to CDH 5.9.2. Is the source code
> available anywhere?
>
> The changes I made over the 4.x-HBase-1.2 branch are minimal and not vital,
> and all ITs are passing. I don't have any reason to believe that CDH would
> behave differently from Apache HBase. If there were a ZK connection leak in
> CDH, I would also expect it in the Apache version.
>
>
>
> On 20 Nov 2017 00:32, "Kumar Palaniappan" 
> wrote:
>
>> Thanks Pedro.
>>
>> We made CDH 5.9.2 compatible with Phoenix 4.10 and 4.12 but ran into a
>> series of issues, one of them being a ZK leak.
>>
>> Kumar Palaniappan <http://about.me/kumar.palaniappan>
>> <http://www.linkedin.com/in/kumarpalaniappan>
>>
>> On Nov 19, 2017, at 3:43 PM, Pedro Boado  wrote:
>>
>> As I have volunteered to maintain a CDH-compatible release of Phoenix, and
>> as CDH 5.x is for now based on HBase 1.2, it is in my interest to keep
>> releasing Phoenix for HBase 1.2. So I can keep doing further releases for
>> HBase 1.2 as well, James.
>>
>> On 19 November 2017 at 23:36, Kumar Palaniappan <
>> kpalaniap...@marinsoftware.com> wrote:
>>
>>> Thanks James. Sure, I will look into it from my end and update.
>>>
>>> Currently we are focused on the ZK connection leak
>>> https://issues.apache.org/jira/browse/PHOENIX-4247
>>> https://issues.apache.org/jira/browse/PHOENIX-4319
>>>
>>> Kumar Palaniappan <http://about.me/kumar.palaniappan>
>>> <http://www.linkedin.com/in/kumarpalaniappan>
>>>
>>> On Nov 19, 2017, at 1:21 PM, James Taylor 
>>> wrote:
>>>
>>> Hi Kumar,
>>> I started a discussion [1][2] on the dev list to find an RM for the
>>> HBase 1.2 (and HBase 1.1) branch, but no one initially stepped up, so there
>>> were no plans for a release. Subsequently we've heard from a few folks that
>>> they needed it, and Pedro Boado volunteered to do a CDH-compatible release
>>> (see PHOENIX-4372), which requires an up-to-date HBase 1.2 based release.
>>>
>>> So I've volunteered to do one more Phoenix 4.13.1 release for HBase 1.2
>>> and 1.1. I'm hoping you, Pedro and others that need 1.2 based releases can
>>> volunteer to be the RM and do further releases.
>>>
>>> One thing is clear, though - folks need to be on the dev and user lists
>>> so they can take part in DISCUSS threads.
>>>
>>> Thanks,
>>> James
>>>
>>> [1] https://lists.apache.org/thread.html/5b8b44acb1d36087703
>>> 09767c3cddecbc6484c29452fe6750d8e1516@%3Cdev.phoenix.apache.org%3E
>>> [2] https://lists.apache.org/thread.html/70cffa798d5f21ef87b02e0
>>> 7aeca8c7982b0b30251411b7be17fadf9@%3Cdev.phoenix.apache.org%3E
>>>
>>> On Sun, Nov 19, 2017 at 12:23 PM, Kumar Palaniappan <
>>> kpalaniap...@marinsoftware.com> wrote:
>>>
>>>> Are there any plans to release Phoenix 4.13 compatible with HBase 1.2?
>>>>
>>>> On Sat, Nov 11, 2017 at 5:57 PM, James Taylor 
>>>> wrote:
>>>>
>>>>> The Apache Phoenix team is pleased to announce the immediate
>>>>> availability of the 4.13.0 release. Apache Phoenix enables SQL-based OLTP
>>>>> and operational analytics for Apache Hadoop using Apache HBase as its
>>>>> backing store and providing integration with other projects in the Apache
>>>>> ecosystem such as Spark, Hive, Pig, Flume, and MapReduce. The 4.x releases
>>>>> are compatible with HBase 0.98 and 1.3.
>>>>>
>>>>> Highlights of the release include:
>>>>>
>>>>> * Critical bug fix to prevent snapshot creation of SYSTEM.CATALOG when
>>>>> connecting [1]
>>>>> * Numerous bug fixes around handling of row deletion [2]
>>>>> * Improvements to statistics collection [3]
>>>>> * New COLLATION_KEY built-in function for linguistic sort [4]
>>>>>
>>>>> Source and binary downloads are available here [5].
>>>>>
>>>>> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
>>>>> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>>>> 20rowDeletion
>>>>> [3] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
>>>>> 20statsCollection
>>>>> [4] https://phoenix.apache.org/language/functions.html#collation_key
>>>>> [5] http://phoenix.apache.org/download.html
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Un saludo.
>> Pedro Boado.
>>
>>


Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-19 Thread Kumar Palaniappan
Thanks Pedro.

We made CDH 5.9.2 compatible with Phoenix 4.10 and 4.12 but ran into a series
of issues, one of them being a ZK leak.

Kumar Palaniappan   

> On Nov 19, 2017, at 3:43 PM, Pedro Boado  wrote:
> 
> As I have volunteered to maintain a CDH-compatible release of Phoenix, and as
> CDH 5.x is for now based on HBase 1.2, it is in my interest to keep releasing
> Phoenix for HBase 1.2. So I can keep doing further releases for HBase 1.2 as
> well, James.
> 
>> On 19 November 2017 at 23:36, Kumar Palaniappan 
>>  wrote:
>> Thanks James. Sure, I will look into it from my end and update.
>>  
>> Currently we are focused on the ZK connection leak 
>> https://issues.apache.org/jira/browse/PHOENIX-4247
>> https://issues.apache.org/jira/browse/PHOENIX-4319
>> 
>> Kumar Palaniappan   
>> 
>>> On Nov 19, 2017, at 1:21 PM, James Taylor  wrote:
>>> 
>>> Hi Kumar,
>>> I started a discussion [1][2] on the dev list to find an RM for the HBase 
>>> 1.2 (and HBase 1.1) branch, but no one initially stepped up, so there were 
>>> no plans for a release. Subsequently we've heard from a few folks that they 
>>> needed it, and Pedro Boado volunteered to do a CDH-compatible release (see
>>> PHOENIX-4372), which requires an up-to-date HBase 1.2 based release.
>>> 
>>> So I've volunteered to do one more Phoenix 4.13.1 release for HBase 1.2 and 
>>> 1.1. I'm hoping you, Pedro and others that need 1.2 based releases can 
>>> volunteer to be the RM and do further releases.
>>> 
>>> One thing is clear, though - folks need to be on the dev and user lists so 
>>> they can take part in DISCUSS threads.
>>> 
>>> Thanks,
>>> James
>>> 
>>> [1] 
>>> https://lists.apache.org/thread.html/5b8b44acb1d3608770309767c3cddecbc6484c29452fe6750d8e1516@%3Cdev.phoenix.apache.org%3E
>>> [2] 
>>> https://lists.apache.org/thread.html/70cffa798d5f21ef87b02e07aeca8c7982b0b30251411b7be17fadf9@%3Cdev.phoenix.apache.org%3E
>>> 
>>>> On Sun, Nov 19, 2017 at 12:23 PM, Kumar Palaniappan 
>>>>  wrote:
>>>> Are there any plans to release Phoenix 4.13 compatible with HBase 1.2?
>>>> 
>>>>> On Sat, Nov 11, 2017 at 5:57 PM, James Taylor  
>>>>> wrote:
>>>>> The Apache Phoenix team is pleased to announce the immediate availability 
>>>>> of the 4.13.0 release. Apache Phoenix enables SQL-based OLTP and 
>>>>> operational analytics for Apache Hadoop using Apache HBase as its backing 
>>>>> store and providing integration with other projects in the Apache 
>>>>> ecosystem such as Spark, Hive, Pig, Flume, and MapReduce. The 4.x 
>>>>> releases are compatible with HBase 0.98 and 1.3.
>>>>> 
>>>>> Highlights of the release include:
>>>>> 
>>>>> * Critical bug fix to prevent snapshot creation of SYSTEM.CATALOG when 
>>>>> connecting [1]
>>>>> * Numerous bug fixes around handling of row deletion [2]
>>>>> * Improvements to statistics collection [3] 
>>>>> * New COLLATION_KEY built-in function for linguistic sort [4]
>>>>> 
>>>>> Source and binary downloads are available here [5].
>>>>> 
>>>>> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
>>>>> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%20rowDeletion
>>>>> [3] 
>>>>> https://issues.apache.org/jira/issues/?jql=labels%20%3D%20statsCollection
>>>>> [4] https://phoenix.apache.org/language/functions.html#collation_key
>>>>> [5] http://phoenix.apache.org/download.html
>>>> 
>>> 
> 
> 
> 
> -- 
> Un saludo.
> Pedro Boado.


Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-19 Thread Kumar Palaniappan
Thanks James. Sure, I will look into it from my end and update.
 
Currently we are focused on the ZK connection leak 
https://issues.apache.org/jira/browse/PHOENIX-4247
https://issues.apache.org/jira/browse/PHOENIX-4319

Kumar Palaniappan   

> On Nov 19, 2017, at 1:21 PM, James Taylor  wrote:
> 
> Hi Kumar,
> I started a discussion [1][2] on the dev list to find an RM for the HBase 1.2 
> (and HBase 1.1) branch, but no one initially stepped up, so there were no 
> plans for a release. Subsequently we've heard from a few folks that they 
> needed it, and Pedro Boado volunteered to do a CDH-compatible release (see
> PHOENIX-4372), which requires an up-to-date HBase 1.2 based release.
> 
> So I've volunteered to do one more Phoenix 4.13.1 release for HBase 1.2 and 
> 1.1. I'm hoping you, Pedro and others that need 1.2 based releases can 
> volunteer to be the RM and do further releases.
> 
> One thing is clear, though - folks need to be on the dev and user lists so 
> they can take part in DISCUSS threads.
> 
> Thanks,
> James
> 
> [1] 
> https://lists.apache.org/thread.html/5b8b44acb1d3608770309767c3cddecbc6484c29452fe6750d8e1516@%3Cdev.phoenix.apache.org%3E
> [2] 
> https://lists.apache.org/thread.html/70cffa798d5f21ef87b02e07aeca8c7982b0b30251411b7be17fadf9@%3Cdev.phoenix.apache.org%3E
> 
>> On Sun, Nov 19, 2017 at 12:23 PM, Kumar Palaniappan 
>>  wrote:
>> Are there any plans to release Phoenix 4.13 compatible with HBase 1.2?
>> 
>>> On Sat, Nov 11, 2017 at 5:57 PM, James Taylor  
>>> wrote:
>>> The Apache Phoenix team is pleased to announce the immediate availability 
>>> of the 4.13.0 release. Apache Phoenix enables SQL-based OLTP and 
>>> operational analytics for Apache Hadoop using Apache HBase as its backing 
>>> store and providing integration with other projects in the Apache ecosystem 
>>> such as Spark, Hive, Pig, Flume, and MapReduce. The 4.x releases are 
>>> compatible with HBase 0.98 and 1.3.
>>> 
>>> Highlights of the release include:
>>> 
>>> * Critical bug fix to prevent snapshot creation of SYSTEM.CATALOG when 
>>> connecting [1]
>>> * Numerous bug fixes around handling of row deletion [2]
>>> * Improvements to statistics collection [3] 
>>> * New COLLATION_KEY built-in function for linguistic sort [4]
>>> 
>>> Source and binary downloads are available here [5].
>>> 
>>> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
>>> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%20rowDeletion
>>> [3] 
>>> https://issues.apache.org/jira/issues/?jql=labels%20%3D%20statsCollection
>>> [4] https://phoenix.apache.org/language/functions.html#collation_key
>>> [5] http://phoenix.apache.org/download.html
>> 
> 


Re: [ANNOUNCE] Apache Phoenix 4.13 released

2017-11-19 Thread Kumar Palaniappan
Are there any plans to release Phoenix 4.13 compatible with HBase 1.2?

On Sat, Nov 11, 2017 at 5:57 PM, James Taylor 
wrote:

> The Apache Phoenix team is pleased to announce the immediate availability
> of the 4.13.0 release. Apache Phoenix enables SQL-based OLTP and
> operational analytics for Apache Hadoop using Apache HBase as its backing
> store and providing integration with other projects in the Apache ecosystem
> such as Spark, Hive, Pig, Flume, and MapReduce. The 4.x releases are
> compatible with HBase 0.98 and 1.3.
>
> Highlights of the release include:
>
> * Critical bug fix to prevent snapshot creation of SYSTEM.CATALOG when
> connecting [1]
> * Numerous bug fixes around handling of row deletion [2]
> * Improvements to statistics collection [3]
> * New COLLATION_KEY built-in function for linguistic sort [4]
>
> Source and binary downloads are available here [5].
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-4335
> [2] https://issues.apache.org/jira/issues/?jql=labels%20%3D%20rowDeletion
> [3] https://issues.apache.org/jira/issues/?jql=labels%20%3D%
> 20statsCollection
> [4] https://phoenix.apache.org/language/functions.html#collation_key
> [5] http://phoenix.apache.org/download.html
>


Connection issue with Spark/Phoenix/ZK

2017-09-28 Thread Kumar Palaniappan
After upgrading to CDH 5.9.1/Phoenix 4.10/Spark 1.6 from CDH 5.5.2/Phoenix
4.6/Spark 1.5, streaming jobs that read data from Phoenix no longer release
their ZooKeeper connections. The number of connections from the driver grows
with each batch until ZooKeeper's per-IP connection limit is reached, at which
point the Spark streaming job can no longer read data from Phoenix.

Appreciate the pointers.


Phoenix Connection

2017-09-25 Thread Kumar Palaniappan
After upgrading to 4.10 on CDH 5.10, we see too many ZK connections (we
increased the max connections from 60 to 300 to 1000), but the problem still
exists.

Are there any known issues? Phoenix connection leak?

Appreciate your time.


Re: Duplicate rows with the PK

2017-08-28 Thread Kumar Palaniappan
Deleting the table's entries from SYSTEM.STATS helped to get rid of the
duplicates.

The problem will need to be monitored during periods of high growth.
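For anyone hitting the same thing, the statement was along these lines (the
table name below is a placeholder, and the SYSTEM.STATS schema should be
verified against your Phoenix version first):

-- Drop stale guideposts for one table; they get rebuilt by the next
-- UPDATE STATISTICS run or major compaction
DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'MY_SCHEMA.MY_TABLE';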

Kumar Palaniappan   

> On Aug 28, 2017, at 11:48 AM, James Taylor  wrote:
> 
> Is the table salted? If the SALT_BUCKETS property was ever changed during the 
> life of the table, this could lead to duplicates. You could get the same row 
> key with a different salt byte. If not salted, are there any global, mutable
> secondary indexes? A number of issues that led to out-of-sync data and index
> tables under high load (in particular when the same set of rows is frequently
> changing) have been fixed and will be in the 4.12 release.
> 
> If you could file a JIRA with more information (Phoenix version, HBase 
> version, DDL, indexes, etc), that'd be much appreciated. We can discuss more 
> there.
> 
> Thanks,
> James
> 
> 
> 
>> On Mon, Aug 28, 2017 at 9:34 AM, Kumar Palaniappan 
>>  wrote:
>> For some weird reason, if we do a scan on a table filtered on a non-key
>> column, there are duplicate rows in the result.
>> 
>> If we do a lookup with the complete row key of the table, it doesn't show
>> the duplicates.
>> 
>> What could be wrong? Appreciate your time.
>> 
>> Similar to this one: https://issues.apache.org/jira/browse/PHOENIX-3755
>> 
>> 
>> 
> 


Duplicate rows with the PK

2017-08-28 Thread Kumar Palaniappan
For some weird reason, if we do a scan on a table filtered on a non-key
column, there are duplicate rows in the result.

If we do a lookup with the complete row key of the table, it doesn't show the
duplicates.

What could be wrong? Appreciate your time.

Similar to this one: https://issues.apache.org/jira/browse/PHOENIX-3755


Re: Combining an RVC query and a filter on a datatype smaller than 8 bytes causes an Illegal Data Exception

2016-09-19 Thread Kumar Palaniappan
The problem is that with just one tuple in the RVC IN list, it works.

But this one, with two or more tuples,

SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN ((1,2),(1,2)) AND
COLTHREE=3;

blows up.
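For anyone trying to reproduce: a sketch of a table shape consistent with the
queries in this thread (the real DDL was never posted here, so the column
types are assumptions):

CREATE TABLE TEST.RVC_TEST (
    COLONE INTEGER NOT NULL,
    COLTWO INTEGER NOT NULL,
    COLTHREE INTEGER NOT NULL,
    COLFOUR INTEGER
    CONSTRAINT PK PRIMARY KEY (COLONE, COLTWO, COLTHREE));

-- Works: a single tuple in the RVC IN list
SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN ((1,2)) AND COLTHREE=3;

-- Reported to throw an Illegal Data Exception with two or more tuples
-- combined with a filter on a type smaller than 8 bytes; see PHOENIX-3297
SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN ((1,2),(1,2)) AND COLTHREE=3;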


On Mon, Sep 19, 2016 at 3:58 PM, Kumar Palaniappan <
kpalaniap...@marinsoftware.com> wrote:

> No, I didn't.
>
> But wrapping it in parentheses worked.
>
> SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN ((1,2)) AND
> COLTHREE=3;
>
> SELECT * FROM TEST.RVC_TEST WHERE ((COLONE, COLTWO) IN ((1,2)) AND
> (COLFOUR=4));
>
> On Mon, Sep 19, 2016 at 2:56 PM, Samarth Jain  wrote:
>
>> Kumar,
>>
>> Can you try with the 4.8 release?
>>
>>
>>
>> On Mon, Sep 19, 2016 at 2:54 PM, Kumar Palaniappan <
>> kpalaniap...@marinsoftware.com> wrote:
>>
>>>
>>> Has anyone faced this issue?
>>>
>>> https://issues.apache.org/jira/browse/PHOENIX-3297
>>>
>>> And this one gives no rows
>>>
>>> SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN (1,2) AND COLTHREE
>>> =3 AND COLFOUR=4;
>>>
>>>
>>>
>>>
>>
>


Re: Combining an RVC query and a filter on a datatype smaller than 8 bytes causes an Illegal Data Exception

2016-09-19 Thread Kumar Palaniappan
No, I didn't.

But wrapping it in parentheses worked.

SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN ((1,2)) AND
COLTHREE=3;

SELECT * FROM TEST.RVC_TEST WHERE ((COLONE, COLTWO) IN ((1,2)) AND
(COLFOUR=4));

On Mon, Sep 19, 2016 at 2:56 PM, Samarth Jain  wrote:

> Kumar,
>
> Can you try with the 4.8 release?
>
>
>
> On Mon, Sep 19, 2016 at 2:54 PM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
>
>>
>> Has anyone faced this issue?
>>
>> https://issues.apache.org/jira/browse/PHOENIX-3297
>>
>> And this one gives no rows
>>
>> SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN (1,2) AND COLTHREE
>> =3 AND COLFOUR=4;
>>
>>
>>
>>
>


Combining an RVC query and a filter on a datatype smaller than 8 bytes causes an Illegal Data Exception

2016-09-19 Thread Kumar Palaniappan
Has anyone faced this issue?

https://issues.apache.org/jira/browse/PHOENIX-3297

And this one gives no rows

SELECT * FROM TEST.RVC_TEST WHERE (COLONE, COLTWO) IN (1,2) AND COLTHREE =3
AND COLFOUR=4;


Re: Cloning a table in Phoenix

2016-09-09 Thread Kumar Palaniappan
Will do James.

On Fri, Sep 9, 2016 at 10:27 AM, James Taylor 
wrote:

> Good idea - this would make a great contribution. Please file a JIRA.
>
> On Fri, Sep 9, 2016 at 6:29 AM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
>
>> Yes James.
>>
>> Kumar Palaniappan <http://about.me/kumar.palaniappan>
>> <http://www.linkedin.com/in/kumarpalaniappan>
>>
>> On Sep 9, 2016, at 12:53 AM, Heather, James (ELS-LON) <
>> james.heat...@elsevier.com> wrote:
>>
>> This does rather suggest that it would be fairly easy to implement a SHOW
>> CREATE TABLE statement. Is that right?
>>
>> It would be useful if so.
>>
>> James
>>
>> On 9 September 2016 2:43:51 a.m. "dalin.qin"  wrote:
>>
>>> Hi Kumar,
>>>
>>> I believe right now there is no way to directly generate the DDL
>>> statement for an existing table; better to write down your SQL immediately
>>> after execution (in Oracle, dbms_metadata is so perfect; in Hive, SHOW
>>> CREATE TABLE also works).
>>> However, you can query SYSTEM.CATALOG for all the information you need.
>>>
>>> (TABLE_SCHEM is SYSTEM and TABLE_NAME is CATALOG for every row below;
>>> the remaining metadata columns, all null in this listing, are omitted)
>>>
>>> +-----------------+------------+------------+-----------+
>>> | COLUMN_NAME     | DATA_TYPE  | TYPE_NAME  | NULLABLE  |
>>> +-----------------+------------+------------+-----------+
>>> | TENANT_ID       | 12         | VARCHAR    | 1         |
>>> | TABLE_SCHEM     | 12         | VARCHAR    | 1         |
>>> | TABLE_NAME      | 12         | VARCHAR    | 0         |
>>> | COLUMN_NAME     | 12         | VARCHAR    | 1         |
>>> | COLUMN_FAMILY   | 12         | VARCHAR    | 1         |
>>> | TABLE_SEQ_NUM   | -5         | BIGINT     | 1         |
>>> | TABLE_TYPE      | 1          | CHAR       | 1         |
>>> | PK_NAME         | 12         | VARCHAR    | 1         |
>>> | COLUMN_COUNT    | 4          | INTEGER    | 1         |
>>> | SALT_BUCKETS    | 4          | INTEGER    | 1         |
>>> | ...             |            |            |           |
>>> +-----------------+------------+------------+-----------+

Re: Cloning a table in Phoenix

2016-09-09 Thread Kumar Palaniappan
Yes James.

Kumar Palaniappan   

> On Sep 9, 2016, at 12:53 AM, Heather, James (ELS-LON) 
>  wrote:
> 
> This does rather suggest that it would be fairly easy to implement a SHOW 
> CREATE TABLE statement. Is that right?
> 
> It would be useful if so.
> 
> James
> 
>> On 9 September 2016 2:43:51 a.m. "dalin.qin"  wrote:
>> 
>> Hi Kumar,
>> 
>> I believe right now there is no way to directly generate the DDL statement
>> for an existing table; better to write down your SQL immediately after
>> execution (in Oracle, dbms_metadata is so perfect; in Hive, SHOW CREATE
>> TABLE also works).
>> However, you can query SYSTEM.CATALOG for all the information you need.
>> 
>> (TABLE_SCHEM is SYSTEM and TABLE_NAME is CATALOG for every row below;
>> the remaining metadata columns, all null in this listing, are omitted)
>>
>> +------------------+------------+------------+-----------+
>> | COLUMN_NAME      | DATA_TYPE  | TYPE_NAME  | NULLABLE  |
>> +------------------+------------+------------+-----------+
>> | TENANT_ID        | 12         | VARCHAR    | 1         |
>> | TABLE_SCHEM      | 12         | VARCHAR    | 1         |
>> | TABLE_NAME       | 12         | VARCHAR    | 0         |
>> | COLUMN_NAME      | 12         | VARCHAR    | 1         |
>> | COLUMN_FAMILY    | 12         | VARCHAR    | 1         |
>> | TABLE_SEQ_NUM    | -5         | BIGINT     | 1         |
>> | TABLE_TYPE       | 1          | CHAR       | 1         |
>> | PK_NAME          | 12         | VARCHAR    | 1         |
>> | COLUMN_COUNT     | 4          | INTEGER    | 1         |
>> | SALT_BUCKETS     | 4          | INTEGER    | 1         |
>> | DATA_TABLE_NAME  | 12         | VARCHAR    | 1         |
>> | INDEX_STATE      | 1          | CHAR       | 1         |
>> | IMMUTABLE_ROWS   | 16         | BOOLEAN    | 1         |
>> | ...              |            |            |           |
>> +------------------+------------+------------+-----------+

Re: Cloning a table in Phoenix

2016-09-08 Thread Kumar Palaniappan
Yes, we found a way to do it off of SYSTEM.CATALOG.

In the meantime, we are trying to explore whether there are any off-the-shelf
options.

Thanks dalin.
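Roughly, the idea is a query along these lines (untested as written here;
'MYSCHEMA' and 'T1' are placeholders, the column names follow the
SYSTEM.CATALOG listing quoted below, and ORDINAL_POSITION is assumed to
exist on your version):

-- List one table's columns in declaration order
SELECT COLUMN_NAME, DATA_TYPE, COLUMN_FAMILY, NULLABLE
FROM SYSTEM.CATALOG
WHERE TABLE_SCHEM = 'MYSCHEMA'
  AND TABLE_NAME = 'T1'
  AND COLUMN_NAME IS NOT NULL
ORDER BY ORDINAL_POSITION;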

Kumar Palaniappan   

> On Sep 8, 2016, at 6:43 PM, dalin.qin  wrote:
> 
> Hi Kumar,
> 
> I believe right now there is no way to directly generate the DDL statement
> for an existing table; better to write down your SQL immediately after
> execution (in Oracle, dbms_metadata is so perfect; in Hive, SHOW CREATE
> TABLE also works).
> However, you can query SYSTEM.CATALOG for all the information you need.
> 
> (TABLE_SCHEM is SYSTEM and TABLE_NAME is CATALOG for every row below;
> the remaining metadata columns, all null in this listing, are omitted)
>
> +------------------+------------+------------+-----------+
> | COLUMN_NAME      | DATA_TYPE  | TYPE_NAME  | NULLABLE  |
> +------------------+------------+------------+-----------+
> | TENANT_ID        | 12         | VARCHAR    | 1         |
> | TABLE_SCHEM      | 12         | VARCHAR    | 1         |
> | TABLE_NAME       | 12         | VARCHAR    | 0         |
> | COLUMN_NAME      | 12         | VARCHAR    | 1         |
> | COLUMN_FAMILY    | 12         | VARCHAR    | 1         |
> | TABLE_SEQ_NUM    | -5         | BIGINT     | 1         |
> | TABLE_TYPE       | 1          | CHAR       | 1         |
> | PK_NAME          | 12         | VARCHAR    | 1         |
> | COLUMN_COUNT     | 4          | INTEGER    | 1         |
> | SALT_BUCKETS     | 4          | INTEGER    | 1         |
> | DATA_TABLE_NAME  | 12         | VARCHAR    | 1         |
> | INDEX_STATE      | 1          | CHAR       | 1         |
> | IMMUTABLE_ROWS   | 16         | BOOLEAN    | 1         |
> | VIEW_STATEMENT   | 12         | VARCHAR    | 1         |
> | ...              |            |            |           |
> +------------------+------------+------------+-----------+

Re: Cloning a table in Phoenix

2016-09-08 Thread Kumar Palaniappan
It's not about the data. I would like to clone just the table structure(s)
under the schema, either partially or for entire tables.


Kumar Palaniappan   

> On Sep 8, 2016, at 5:48 PM, dalin.qin  wrote:
> 
> try this:
> 
> 0: jdbc:phoenix:namenode:2181:/hbase-unsecure> CREATE TABLE TABLE1 (ID BIGINT 
> NOT NULL PRIMARY KEY, COL1 VARCHAR);
> No rows affected (1.287 seconds)
> 0: jdbc:phoenix:namenode:2181:/hbase-unsecure> UPSERT INTO TABLE1 (ID, COL1) 
> VALUES (1, 'test_row_1');
> 1 row affected (0.105 seconds)
> 0: jdbc:phoenix:namenode:2181:/hbase-unsecure> UPSERT INTO TABLE1 (ID, COL1) 
> VALUES (2, 'test_row_2');
> 1 row affected (0.011 seconds)
> 0: jdbc:phoenix:namenode:2181:/hbase-unsecure>  CREATE TABLE TABLE2 (ID 
> BIGINT NOT NULL PRIMARY KEY, COL1 VARCHAR);
> No rows affected (1.251 seconds)
> 0: jdbc:phoenix:namenode:2181:/hbase-unsecure> upsert into table2 select * 
> from table1;
> 2 rows affected (0.049 seconds)
> 0: jdbc:phoenix:namenode:2181:/hbase-unsecure> select * from table2;
> +-+-+
> | ID  |COL1 |
> +-+-+
> | 1   | test_row_1  |
> | 2   | test_row_2  |
> +-+-+
> 2 rows selected (0.06 seconds)
> 
> 
>> On Thu, Sep 8, 2016 at 4:17 PM, Kumar Palaniappan 
>>  wrote:
>> What is an easy solution, or is there any solution, to clone a table/schema
>> in Phoenix?
>> 
>> Thanks in advance.
> 


Cloning a table in Phoenix

2016-09-08 Thread Kumar Palaniappan
What is an easy solution, or is there any solution, to clone a table/schema
in Phoenix?

Thanks in advance.


Disable Index

2016-02-04 Thread Kumar Palaniappan
During data migration, we simply drop the indexes on the tables and recreate
them. We would like to avoid this.

Is there a 'disable all indexes' syntax in the Phoenix grammar? How do we
disable an index? If we disable an index in Phoenix and rebuild it, does that
translate to Phoenix intercepting the WAL? We know rebuilding an index takes
pretty much the same time as dropping and recreating it. How does it work,
anyway? Will it try to repair itself after being re-enabled?
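The closest I can find in the grammar is per-index (the index and table names
below are placeholders):

ALTER INDEX my_idx ON my_table DISABLE;
-- and later, to bring it back:
ALTER INDEX my_idx ON my_table REBUILD;

but there doesn't seem to be a single statement that disables all indexes on a
table at once.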

Best.
Kumar


Re: Announcing phoenix-for-cloudera 4.6.0

2016-01-28 Thread Kumar Palaniappan
Andrew, is it HBase 1.1?


https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5

On Thu, Jan 28, 2016 at 6:51 PM, Andrew Purtell  wrote:

> I pushed a new branch for CDH 5.5 (5.5.1) as
> https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5
>  and renamed the branch for CDH 5.4 to
> https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.4
>
> The changes in 4.6-HBase-1.0-cdh5.5 pass unit and integration tests for me
> (except a silly date test that hardcodes the expected year to 2015).
>
>
> On Thu, Jan 28, 2016 at 11:23 AM, Andrew Purtell 
> wrote:
>
>> Looking today
>>
>>
>> On Tue, Jan 26, 2016 at 11:00 PM, Kumar Palaniappan <
>> kpalaniap...@marinsoftware.com> wrote:
>>
>>> Andrew, any updates? It seems HBASE-11544 impacted Phoenix, and CDH 5.5.1
>>> isn't working.
>>>
>>> On Sun, Jan 17, 2016 at 11:25 AM, Andrew Purtell <
>>> andrew.purt...@gmail.com> wrote:
>>>
>>>> This looks like something easy to fix up. Maybe I can get to it next
>>>> week.
>>>>
>>>> > On Jan 15, 2016, at 9:07 PM, Krishna  wrote:
>>>> >
>>>> > On the branch 4.5-HBase-1.0-cdh5, I set the CDH version to 5.5.1 in the
>>>> > pom, and building the package produces the following errors.
>>>> > Repo: https://github.com/chiastic-security/phoenix-for-cloudera
>>>> >
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/util/Tracing.java:[176,82]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   method getParentId()
>>>> > [ERROR] location: variable span of type org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[129,31]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   variable ROOT_SPAN_ID
>>>> > [ERROR] location: interface org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[159,38]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   variable ROOT_SPAN_ID
>>>> > [ERROR] location: interface org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[162,31]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   variable ROOT_SPAN_ID
>>>> > [ERROR] location: interface org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[337,38]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   variable ROOT_SPAN_ID
>>>> > [ERROR] location: interface org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[339,42]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   variable ROOT_SPAN_ID
>>>> > [ERROR] location: interface org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[359,58]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   variable ROOT_SPAN_ID
>>>> > [ERROR] location: interface org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceMetricSource.java:[99,74]
>>>> > cannot find symbol
>>>> > [ERROR] symbol:   method getParentId()
>>>> > [ERROR] location: variable span of type org.apache.htrace.Span
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceMetricSource.java:[110,60]
>>>> > incompatible types
>>>> > [ERROR] required: java.util.Map
>>>> > [ERROR] found: java.util.Map
>>>> > [ERROR]
>>>> >
>>>> ~/phoenix_related/phoenix-for-cloudera/phoenix-cor

Re: Announcing phoenix-for-cloudera 4.6.0

2016-01-26 Thread Kumar Palaniappan
Andrew, any updates? It seems HBASE-11544 impacted Phoenix, and CDH 5.5.1
isn't working.

On Sun, Jan 17, 2016 at 11:25 AM, Andrew Purtell 
wrote:

> This looks like something easy to fix up. Maybe I can get to it next week.
>
> > On Jan 15, 2016, at 9:07 PM, Krishna  wrote:
> >
> > On the branch 4.5-HBase-1.0-cdh5, I set the CDH version to 5.5.1 in the pom
> > and building the package produces the following errors.
> > Repo: https://github.com/chiastic-security/phoenix-for-cloudera
> >
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/util/Tracing.java:[176,82]
> > cannot find symbol
> > [ERROR] symbol:   method getParentId()
> > [ERROR] location: variable span of type org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[129,31]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[159,38]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[162,31]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[337,38]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[339,42]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceReader.java:[359,58]
> > cannot find symbol
> > [ERROR] symbol:   variable ROOT_SPAN_ID
> > [ERROR] location: interface org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceMetricSource.java:[99,74]
> > cannot find symbol
> > [ERROR] symbol:   method getParentId()
> > [ERROR] location: variable span of type org.apache.htrace.Span
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/TraceMetricSource.java:[110,60]
> > incompatible types
> > [ERROR] required: java.util.Map
> > [ERROR] found: java.util.Map
> > [ERROR]
> >
> ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java:[550,57]
>  > <anonymous org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$1> is not
> > abstract and does not override abstract method
> >
> nextRaw(java.util.List,org.apache.hadoop.hbase.regionserver.ScannerContext)
> > in org.apache.hadoop.hbase.regionserver.RegionScanner
> >
> >
> >> On Fri, Jan 15, 2016 at 6:20 PM, Krishna  wrote:
> >>
> >> Thanks Andrew. Are binaries available for CDH5.5.x?
> >>
> >> On Tue, Nov 3, 2015 at 9:10 AM, Andrew Purtell 
> >> wrote:
> >>
> >>> Today I pushed a new branch '4.6-HBase-1.0-cdh5' and the tag
> >>> 'v4.6.0-cdh5.4.5' (58fcfa6) to
> >>> https://github.com/chiastic-security/phoenix-for-cloudera. This is the
> >>> Phoenix 4.6.0 release, modified to build against CDH 5.4.5 and possibly
> >>> (but not tested) subsequent CDH releases.
> >>>
> >>> If you want release tarballs I built from this, get them here:
> >>>
> >>> Binaries
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz.asc
> >>> (signature)
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz.md5
> >>> (MD5 sum)
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-bin.tar.gz.sha
> >>> (SHA-1 sum)
> >>>
> >>>
> >>> Source
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz
> >>>
> >>>
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz.asc
> >>> (signature)
> >>>
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz.md5
> >>> (MD5 sum)
> >>>
> >>>
> >>>
> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.6.0-cdh5.4.5-src.tar.gz.sha
> >>> (SHA1-sum)
> >>>
> >>>
> >>> Signed with my code signing key D5365CCD.
> >>>
> >>> ​The source and these binaries incorporate changes from the Cloudera
> Labs
> >>> fork of Phoenix (https://github.com/cloudera-labs/phoenix), licensed
> >>> under the ASL v2, Neither the source or binary

Re: Having Global covered Index

2016-01-18 Thread Kumar Palaniappan
Thanks James.

Do we need an index on a PK column, since it's the last element in the row
key, to speed up the query? (Because of this, writes won't be impacted on that
table.) We do leverage the call queue.

Kumar Palaniappan   

> On Jan 18, 2016, at 10:07 AM, James Taylor  wrote:
> 
> See https://phoenix.apache.org/secondary_indexing.html
> 
> Hints are not required unless you want Phoenix to join between the index and 
> data table because the index isn't fully covered and some of these
> non-covered columns are referenced in the query.
> 
> bq. Isn't a single global covered index sufficient for both use cases?
> 
> That depends entirely on the requirements for your use case. If query 
> performance is "good enough" without the index(es), then you don't need them.
> 
>> On Mon, Jan 18, 2016 at 8:14 AM, Kumar Palaniappan 
>>  wrote:
>> 
>> 
>> We have a table
>> 
>> create table t1 (a bigint not null, a1 bigint not null, b bigint not null,
>> c varchar, d varchar constraint t1_pk primary key (a, a1, b))
>> 
>> 
>> create a global indices as -
>> 
>> create index id_t1 on t1 (b) include (a,a1,c,d) - this one is to speed up 
>> filtering on b since it is the last element in the row key. (I highly doubt 
>> we need this)
>> 
>> create index id_t1_c on t1 (c) include (a,d)  - to increase the perf on 
>> filtering the c
>> 
>> 
>> What are the impacts of these indexes? Does it really make any difference in
>> Phoenix? Do we need hints to get the right index picked at runtime if we
>> have multiple indexes on a single table?
>> 
>> Isn't a single global covered index sufficient for both use cases?
> 


Having Global covered Index

2016-01-18 Thread Kumar Palaniappan
We have a table

*create table t1 (a bigint not null, a1 bigint not null, b bigint not
null, c varchar, d varchar constraint t1_pk primary key (a, a1, b))*


create a global indices as -

*create index id_t1 on t1 (b) include (a,a1,c,d)* - this one is to speed up
filtering on b since it is the last element in the row key. (I highly doubt
we need this)

*create index id_t1_c on t1 (c) include (a,d)*  - to increase the perf on
filtering the c


What are the impacts of these indexes? Does it really make any difference in
Phoenix? Do we need hints to get the right index picked at runtime if we have
multiple indexes on a single table?

Isn't a single global covered index sufficient for both use cases?
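For concreteness, the hinted form I'm asking about would be something like
this (index and table names from the DDL above; the filter value is a
placeholder):

SELECT /*+ INDEX(t1 id_t1_c) */ a, d FROM t1 WHERE c = 'some_value';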


Re: array of BIGINT index

2016-01-06 Thread Kumar Palaniappan
Thanks James. We will look into it. We need to find a way to overcome the
full scan.

On Wed, Jan 6, 2016 at 9:26 AM, James Taylor  wrote:

> In that case, you'd need PHOENIX-1544 to be implemented.
>
> On Wed, Jan 6, 2016 at 8:52 AM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
>
>> Unfortunately changing the table is not an option for us at this time.
>>
>> On Tue, Jan 5, 2016 at 6:27 PM, James Taylor 
>> wrote:
>>
>>> If the "finding customers that have a particular account" is a common
>>> query, you might consider modifying your schema by pulling the account into
>>> an optional/nullable row key column, like this:
>>>
>>> CREATE TABLE T (CID VARCHAR NOT NULL, AID BIGINT, V1 VARCHAR, V2 VARCHAR
>>> CONSTRAINT pk PRIMARY KEY (CID,AID));
>>>
>>> Your non PK columns (V1 and V2 in this example) would only be set on the
>>> row where AID is null, but you'd have new rows for all accounts for a given
>>> customer, and these rows wouldn't have any other column values.
>>>
>>> Then you could create a secondary index on AID:
>>> CREATE INDEX IDX ON T(AID);
>>>
>>> and you'd be able to find all customers for a given account quickly.
>>>
>>> You could still efficiently iterate over the account of a given customer
>>> too:
>>> SELECT * FROM T WHERE CID=?
>>> but your application would need to know that the first row would be the
>>> customer row and the next rows would contain only the account IDs for that
>>> customer.
>>>
>>> On Tue, Jan 5, 2016 at 3:46 PM, Kumar Palaniappan <
>>> kpalaniap...@marinsoftware.com> wrote:
>>>
>>>> Thanks James for the response. Our use case is that the array holds all
>>>> the accounts for a particular customer, so the table and query are:
>>>>
>>>> CREATE TABLE T ( ID VARCHAR PRIMARY KEY, A BIGINT ARRAY);
>>>>
>>>> finding by account is a use case:
>>>>
>>>> select ID from T where ? = ANY (A);
>>>>
>>>> On Tue, Jan 5, 2016 at 3:34 PM, James Taylor 
>>>> wrote:
>>>>
>>>>> There is some limited indexing you can do on an array by creating a
>>>>> functional index for a particular array element. For example:
>>>>> CREATE TABLE T (K VARCHAR PRIMARY KEY, A BIGINT ARRAY);
>>>>> CREATE INDEX IDX ON T (A[3]);
>>>>>
>>>>> In this case, the following query would use the index:
>>>>> SELECT K FROM T WHERE A[3] = 5;
>>>>>
>>>>> Does this help for your usage?
>>>>>
>>>>> Thanks,
>>>>> James
>>>>>
>>>>> On Tue, Jan 5, 2016 at 2:51 PM, Kumar Palaniappan <
>>>>> kpalaniap...@marinsoftware.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> We have a table with a BIGINT[] column. Since Phoenix doesn't support
>>>>>> indexing this data type, our queries do a full table scan when we have
>>>>>> to filter on this field.
>>>>>>
>>>>>> What are the alternative approaches? We tried looking into views, but no luck.
>>>>>>
>>>>>> Appreciate your time.
>>>>>>
>>>>>> Kumar
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: array of BIGINT index

2016-01-06 Thread Kumar Palaniappan
Unfortunately changing the table is not an option for us at this time.

On Tue, Jan 5, 2016 at 6:27 PM, James Taylor  wrote:

> If the "finding customers that have a particular account" is a common
> query, you might consider modifying your schema by pulling the account into
> an optional/nullable row key column, like this:
>
> CREATE TABLE T (CID VARCHAR NOT NULL, AID BIGINT, V1 VARCHAR, V2 VARCHAR
> CONSTRAINT pk PRIMARY KEY (CID,AID));
>
> Your non PK columns (V1 and V2 in this example) would only be set on the
> row where AID is null, but you'd have new rows for all accounts for a given
> customer, and these rows wouldn't have any other column values.
>
> Then you could create a secondary index on AID:
> CREATE INDEX IDX ON T(AID);
>
> and you'd be able to find all customers for a given account quickly.
>
> You could still efficiently iterate over the account of a given customer
> too:
> SELECT * FROM T WHERE CID=?
> but your application would need to know that the first row would be the
> customer row and the next rows would contain only the account IDs for that
> customer.
>
> On Tue, Jan 5, 2016 at 3:46 PM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
>
>> Thanks James for the response. Our use case is that the array holds all the
>> accounts for a particular customer, so the table and query are:
>>
>> CREATE TABLE T ( ID VARCHAR PRIMARY KEY, A BIGINT ARRAY);
>>
>> finding by account is a use case:
>>
>> select ID from T where ? = ANY (A);
>>
>> On Tue, Jan 5, 2016 at 3:34 PM, James Taylor 
>> wrote:
>>
>>> There is some limited indexing you can do on an array by creating a
>>> functional index for a particular array element. For example:
>>> CREATE TABLE T (K VARCHAR PRIMARY KEY, A BIGINT ARRAY);
>>> CREATE INDEX IDX ON T (A[3]);
>>>
>>> In this case, the following query would use the index:
>>> SELECT K FROM T WHERE A[3] = 5;
>>>
>>> Does this help for your usage?
>>>
>>> Thanks,
>>> James
>>>
>>> On Tue, Jan 5, 2016 at 2:51 PM, Kumar Palaniappan <
>>> kpalaniap...@marinsoftware.com> wrote:
>>>
>>>>
>>>>
>>>> We have a table with a BIGINT[] column. Since Phoenix doesn't support
>>>> indexing this data type, our queries do a full table scan when we have to
>>>> filter on this field.
>>>>
>>>> What are the alternative approaches? We tried looking into views, but no luck.
>>>>
>>>> Appreciate your time.
>>>>
>>>> Kumar
>>>>
>>>
>>>
>>
>


Re: array of BIGINT index

2016-01-05 Thread Kumar Palaniappan
Thanks James for the response. Our use case is that the array holds all the
accounts for a particular customer, so the table and query are:

CREATE TABLE T ( ID VARCHAR PRIMARY KEY, A BIGINT ARRAY);

finding by account is a use case:

select ID from T where ? = ANY (A);
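With a concrete account id (the 12345 below is just a placeholder), that
would be:

SELECT ID FROM T WHERE 12345 = ANY(A);

and, as far as I can tell, this form still cannot use an index on the array,
hence the full scan.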

On Tue, Jan 5, 2016 at 3:34 PM, James Taylor  wrote:

> There is some limited indexing you can do on an array by creating a
> functional index for a particular array element. For example:
> CREATE TABLE T (K VARCHAR PRIMARY KEY, A BIGINT ARRAY);
> CREATE INDEX IDX ON T (A[3]);
>
> In this case, the following query would use the index:
> SELECT K FROM T WHERE A[3] = 5;
>
> Does this help for your usage?
>
> Thanks,
> James
>
> On Tue, Jan 5, 2016 at 2:51 PM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
>
>>
>>
>> We have a table with a BIGINT[] column. Since Phoenix doesn't support
>> indexing this data type, our queries do a full table scan when we have to
>> filter on this field.
>>
>> What are the alternative approaches? We tried looking into views, but no luck.
>>
>> Appreciate your time.
>>
>> Kumar
>>
>
>


array of BIGINT index

2016-01-05 Thread Kumar Palaniappan
We have a table with a BIGINT[] column. Since Phoenix doesn't support
indexing this data type, our queries do a full table scan when we have to
filter on this field.

What are the alternative approaches? We tried looking into views, but no luck.

Appreciate your time.

Kumar
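
For anyone hitting the same full scan, EXPLAIN is a quick way to confirm it.
A sketch using the T / ID / A names from the replies above; the account ID
literal is made up:

EXPLAIN SELECT ID FROM T WHERE 1001 = ANY(A);
-- the plan should report a full scan over T, since the array column itself
-- cannot be indexed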


Re: Help tuning for bursts of high traffic?

2015-12-04 Thread Kumar Palaniappan
Just realized that it wasn't sent to the user group. I sent you the dump.

We face the exact same issues. We're working on getting tracing set up (the
CDH distro doesn't include tracing, so we need to build our own).

Secondly, we had already applied what James suggested (guideposts, batch
commits, join-query tuning, and major compaction on the affected tables
during bursts; see the sketch below), but we still see the same behavior.

When we bounce the cluster, things become normal.
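
For reference, a sketch of the guidepost refresh mentioned above; the table
name is made up, and this assumes a Phoenix version with statistics
collection:

UPDATE STATISTICS MY_TABLE;  -- recollects stats, rebuilding the table's guideposts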

On Fri, Dec 4, 2015 at 1:43 PM, Andrew Purtell 
wrote:

> Awesome, thanks.
>
> Zack - We can look at Kumar's first, thanks though. Sounds like the same
> problem at first blush.
>
>
> On Dec 4, 2015, at 1:27 PM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
>
> Working on getting it. The problem is where to store the file... working
> with our IT. :(
>
> It's on its way.
>
> On Fri, Dec 4, 2015 at 1:23 PM, Andrew Purtell 
> wrote:
>
>> Any chance of stack dumps from the debug servlet? Impossible to get
>> anywhere with 'pegged the CPU' otherwise. Thanks.
>>
>> On Dec 4, 2015, at 12:20 PM, Riesland, Zack 
>> wrote:
>>
>> James,
>>
>>
>>
>> Two quick follow-ups, for whatever they're worth:
>>
>>
>>
>> 1 – There is nothing Phoenix-related in /tmp
>>
>>
>>
>> 2 – I added a ton of logging, and played with the properties a bit, and I
>> think I see a pattern:
>>
>>
>>
>> Watching the logging and the system profiler side-by-side, I see that,
>> periodically – maybe every 60 or 90 seconds – all of my CPUs (there are 8
>> on this machine) go from mildly busy to almost totally pegged.
>>
>>
>>
>> They USUALLY stay pegged for 5-10 seconds, and then calm down.
>>
>>
>>
>> However, occasionally, they stay pegged for around a minute. When this
>> happens, I get the very slow queries. I added logic so that when I get a
>> very slow response (> 1 second), I pause for 30 seconds.
>>
>>
>>
>> This ‘fixes’ everything, in the sense that I’m usually able to get a
>> couple thousand good queries before the whole pattern repeats.
>>
>>
>>
>> For reference, there's nothing external that should be causing those CPU
>> spikes, so I'm guessing that it's maybe Java GC (?) or perhaps something
>> that the Phoenix client is doing?
>>
>>
>>
>> Can you guess at what Phoenix might do periodically that would peg the
>> CPUs – and in such a way that a query has to wait as much as 2 minutes to
>> execute? (I'm guessing from the pattern that it's not actually the query
>> that is slow, but a very long delay between when it gets queued and when
>> it actually gets executed.)
>>
>>
>>
>> Oh, and the methods you mentioned aren't in my version of PhoenixRuntime,
>> evidently. I'm on 4.2.2.something.
>>
>>
>>
>> Thanks for any further feedback you can provide on this. Hopefully the
>> conversation is helpful to the whole Phoenix community.
>>
>>
>>
>> *From:* Riesland, Zack
>> *Sent:* Friday, December 04, 2015 1:36 PM
>> *To:* user@phoenix.apache.org
>> *Cc:* geoff.hai...@sensus.com
>> *Subject:* RE: Help tuning for bursts of high traffic?
>>
>>
>>
>> Thanks, James
>>
>>
>>
>> I'll work on gathering more information.
>>
>>
>>
>> In the meantime, answers to a few of your questions inline below, just to
>> narrow the scope a bit:
>>
>>
>> --
>>
>> *From:* James Taylor [jamestay...@apache.org]
>> *Sent:* Friday, December 04, 2015 12:21 PM
>> *To:* user
>> *Subject:* Re: Help tuning for bursts of high traffic?
>>
>> Zack,
>>
>> Thanks for reporting this and for the detailed description. Here's a
>> bunch of questions and some things you can try in addition to what Andrew
>> suggested:
>>
>> 1) Is this reproducible in a test environment (perhaps through Pherf:
>> https://phoenix.apache.org/pherf.html) so you can experiment more?
>>
>> -Will check
>>
>>
>>
>> 2) Do you get a sense of whether the bottleneck is on the client or the
>> server? CPU, IO, or network? How many clients are you running and have you
>> tried increasing this? Do you think your network is saturated by the data
>> being returned?
>>
>> -I'm no expert on this. When I look at the HBase dashboard in Ambari,
>> everything looks good. When I look at the stats on the machine running the
>> Java code, it also looks good. Certainly no bottleneck related to memory
>> or CPU.

Re: Help tuning for bursts of high traffic?

2015-12-04 Thread Kumar Palaniappan
I'm in the exact same position as Zack described. Appreciate your feedback.

So far we tried the call queue and the handlers; no luck. Planning to try
the off-heap cache.

Kumar Palaniappan   

> On Dec 4, 2015, at 6:45 AM, Riesland, Zack  wrote:
> 
> Thanks Satish,
>  
> To clarify: I’m not looking up single rows. I’m looking up the history of 
> each widget, which returns hundreds-to-thousands of results per widget (per 
> query).
>  
> Each query is a range scan; it's just that I'm performing thousands of them.
>  
> From: Satish Iyengar [mailto:sat...@gmail.com] 
> Sent: Friday, December 04, 2015 9:43 AM
> To: user@phoenix.apache.org
> Subject: Re: Help tuning for bursts of high traffic?
>  
> Hi Zack,
>  
> Did you consider avoiding hitting HBase for every single row by doing that 
> step in offline mode? I was thinking you could have some kind of daily 
> export of the HBase table and then use Pig to perform a join (co-group, 
> perhaps) to do the same. Obviously this would work only when your HBase 
> table is not maintained by a stream-based system. HBase is really good at 
> range scans and may not be ideal for large numbers of single-row lookups.
>  
> Thanks,
> Satish
> 
> On Fri, Dec 4, 2015 at 9:09 AM, Riesland, Zack  
> wrote:
> SHORT EXPLANATION: a much higher percentage of queries to Phoenix return 
> exceptionally slowly after querying very heavily for several minutes.
>  
> LONGER EXPLANATION:
>  
> I’ve been using Phoenix for about a year as a data store for web-based 
> reporting tools, and it works well.
>  
> Now, I’m trying to use the data in a different (much more request-intensive) 
> way and encountering some issues.
>  
> The scenario is basically this:
>  
> Daily, ingest very large CSV files with data for widgets.
>  
> Each input file has hundreds of rows of data for each widget, and tens of 
> thousands of unique widgets.
>  
> As a first step, I want to de-duplicate this data against my Phoenix-based DB 
> (I can’t rely on just upserting the data for de-dup because it will go 
> through several ETL steps before being stored into Phoenix/HBase).
>  
> So, per widget, I perform a query against Phoenix (the table is keyed on 
> the unique widget ID + sample point; see the sketch after this message). I 
> get all the data for a given widget ID within a certain period of time, and 
> then I only ingest rows for that widget that are new to me.
>  
> I’m doing this in Java in a single step: I loop through my input file and 
> perform one query per widget, using the same Connection object to Phoenix.
>  
> THE ISSUE:
>  
> What I’m finding is that for the first several thousand queries, I almost 
> always get a very fast (less than 10 ms) response (good).
>  
> But after 15-20 thousand queries, the responses start to get MUCH slower. 
> Some queries respond as expected, but many take as long as 2-3 minutes, 
> pushing the total time to prime the data structure into the 12-15 hour 
> range, when it would take only 2-3 hours if all the queries were fast.
>  
> The same exact queries, when run manually and not part of this bulk process, 
> return in the (expected) < 10 ms.
>  
> So it SEEMS like the burst of queries puts Phoenix into some sort of busy 
> state that causes it to respond far too slowly.
>  
> The connection properties I’m setting are:
>  
> phoenix.query.timeoutMs: 9
> phoenix.query.keepAliveMs: 9
> phoenix.query.threadPoolSize: 256
>  
> Our cluster is 9 (beefy) region servers, and the table I’m referencing has 
> 511 regions. We went through a lot of pain to get the data split extremely 
> well, and I don’t think schema design is the issue here.
>  
> Can anyone help me understand how to make this better? Is there a better 
> approach I could take? A better set of configuration parameters? Is our 
> cluster just too small for this?
>  
>  
> Thanks!
> 
> --
> Satish Iyengar
> 
> "Anyone who has never made a mistake has never tried anything new."
> Albert Einstein


Re: Connection error with Phoenix 4.4

2015-06-21 Thread Kumar Palaniappan
Re-sending with the subject.

On Sun, Jun 21, 2015 at 3:51 PM, Kumar Palaniappan <
kpalaniap...@marinsoftware.com> wrote:

> I'm getting this when I use sqlline.
>
> I compiled Phoenix 4.4 with CDH 5.4, along with code changes and pom changes.
>
> Any clue? Appreciate your time.
>
>
>
> Error: ERROR 103 (08004): Unable to establish connection.
> (state=08004,code=103)
>
> java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
>
> at
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
>
> at
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:289)
>
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:171)
>
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1881)
>
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1860)
>
> at
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
>
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1860)
>
> at
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
>
> at
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:131)
>
> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
>
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>
> at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>
> at sqlline.Commands.connect(Commands.java:1064)
>
> at sqlline.Commands.connect(Commands.java:996)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:497)
>
> at
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>
> at sqlline.SqlLine.dispatch(SqlLine.java:804)
>
> at sqlline.SqlLine.initArgs(SqlLine.java:588)
>
> at sqlline.SqlLine.begin(SqlLine.java:656)
>
> at sqlline.SqlLine.start(SqlLine.java:398)
>
> at sqlline.SqlLine.main(SqlLine.java:292)
>
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>
> at
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>
> at
> org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:414)
>
> at
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:323)
>
> at
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>
> at
> org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
>
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:286)
>
> ... 22 more
>
> Caused by: java.lang.reflect.InvocationTargetException
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>
> at
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>
> ... 27 more
>
> Caused by: java.lang.NoClassDefFoundError:
> org/jboss/netty/channel/ChannelFactory
>
> at java.lang.Class.forName0(Native Method)
>
> at java.lang.Class.forName(Class.java:348)
>
> at
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2051)
>
> at
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2016)
>
> at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2110)
>
> at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2136)
>
> at
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:643)
>
> ... 32 more
>
> Caused by: java.lang.ClassNotFoundException:
> org.jboss.netty.channel.ChannelFactory
>
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>
> ... 39 more
>
> sqlline version 1.1.8
>


[no subject]

2015-06-21 Thread Kumar Palaniappan
I'm getting this when I use sqlline.

I compiled Phoenix 4.4 with CDH 5.4, along with code changes and pom changes.

Any clue? Appreciate your time.



Error: ERROR 103 (08004): Unable to establish connection.
(state=08004,code=103)

java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.

at
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)

at
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)

at
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:289)

at
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:171)

at
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1881)

at
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1860)

at
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)

at
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1860)

at
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)

at
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:131)

at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)

at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)

at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)

at sqlline.Commands.connect(Commands.java:1064)

at sqlline.Commands.connect(Commands.java:996)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:497)

at
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)

at sqlline.SqlLine.dispatch(SqlLine.java:804)

at sqlline.SqlLine.initArgs(SqlLine.java:588)

at sqlline.SqlLine.begin(SqlLine.java:656)

at sqlline.SqlLine.start(SqlLine.java:398)

at sqlline.SqlLine.main(SqlLine.java:292)

Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException

at
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)

at
org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:414)

at
org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:323)

at
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)

at
org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)

at
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:286)

... 22 more

Caused by: java.lang.reflect.InvocationTargetException

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:422)

at
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)

... 27 more

Caused by: java.lang.NoClassDefFoundError:
org/jboss/netty/channel/ChannelFactory

at java.lang.Class.forName0(Native Method)

at java.lang.Class.forName(Class.java:348)

at
org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2051)

at
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2016)

at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2110)

at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2136)

at
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:643)

... 32 more

Caused by: java.lang.ClassNotFoundException:
org.jboss.netty.channel.ChannelFactory

at java.net.URLClassLoader.findClass(URLClassLoader.java:381)

at java.lang.ClassLoader.loadClass(ClassLoader.java:424)

at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)

at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

... 39 more

sqlline version 1.1.8


Re: Re: Error using sqlline.py

2015-06-18 Thread Kumar Palaniappan
Baahu, did you change just the pom, or the source code mentioned above too?

On Fri, Jun 12, 2015 at 1:11 AM, Bahubali Jain  wrote:

> Thanks, Sun... now it's working.
> For others, below are the changes I made in the pom.xml:
>   1.0.0-cdh5.4.2
>   2.6.0-cdh5.4.2
>   2.6.0-cdh5.4.2
>
> Thanks,
> Baahu
>
> On Thu, Jun 11, 2015 at 11:43 AM, Fulin Sun 
> wrote:
>
>> Hi, there
>>
>> If you are using CDH 5.4.x and trying to integrate with Phoenix 4.4, you
>> may want to recompile Phoenix 4.4 as described in the thread below.
>>
>>
>> http://mail-archives.apache.org/mod_mbox/phoenix-user/201506.mbox/%3c2015060410170208509...@certusnet.com.cn%3E
>>
>>
>> Best,
>> Sun.
>>
>> --
>> --
>>
>> CertusNet
>>
>>
>> *From:* Bahubali Jain 
>> *Date:* 2015-06-11 12:11
>> *To:* user 
>> *Subject:* Re: Error using sqlline.py
>> I have CDH 5.4, and the problem seems to be due to some incompatibility,
>> as per the thread below.
>>
>>
>> http://mail-archives.apache.org/mod_mbox/phoenix-user/201506.mbox/%3CCAAF1JdhQUXwDwOJm6e38jjiBo-Kkm1=igagdrzmw-fxyrwk...@mail.gmail.com%3E
>>
>> On Wed, Jun 10, 2015 at 10:33 PM, Bahubali Jain 
>> wrote:
>>
>>> I downloaded from the below link
>>> http://supergsego.com/apache/phoenix/phoenix-4.4.0-HBase-1.0/
>>>
>>> I have a single-node install of Hadoop and had restarted the HBase
>>> daemons.
>>> On Jun 10, 2015 10:22 PM, "Nick Dimiduk"  wrote:
>>>
 Just double-check: you're using the HBase-1.0 Phoenix build with HBase
 1.0? Did you replace the Phoenix jar with the new 4.4.0 bits on all HBase
 hosts and restart HBase daemons?

 -n

 On Wed, Jun 10, 2015 at 6:51 AM, Bahubali Jain 
 wrote:

> Hi,
> I am running into the below issue while trying to connect using
> sqlline.py; can you please shed some light on this?
>
> HBase version is 1.0 and Phoenix version is 4.4
>
>
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
> further details.
> 15/06/10 06:43:36 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes 
> where
> applicable
> 15/06/10 06:43:36 WARN impl.MetricsConfig: Cannot locate
> configuration: tried
> hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
> 15/06/10 06:43:38 WARN ipc.CoprocessorRpcChannel: Call failed on
> IOException
> org.apache.hadoop.hbase.DoNotRetryIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG:
> org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
> at
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1148)
> at
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:10515)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7054)
> at
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1741)
> at
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1723)
> at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.NoSuchMethodError:
> org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:925)
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1001)
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1097)
> ... 10 more
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at
> org.apache.hadoop.ipc.Remo

JPA-Phoenix/HBase

2015-03-18 Thread Kumar Palaniappan
We are exploring using a JPA implementation to persist objects into HBase
using Phoenix SQL. Curious to know if anyone is exploring the same.
Would appreciate a sync-up.