[GitHub] phoenix pull request: PHOENIX-1722 Speedup CONVERT_TZ function

2015-03-25 Thread gabrielreid
Github user gabrielreid commented on the pull request:

https://github.com/apache/phoenix/pull/42#issuecomment-85894438
  
I don't think it's the same issue in TIMEZONE_OFFSET because the timezone 
cache isn't static, so (as far as I know) it will only be accessed by a single 
thread. That being said, I think it makes sense to make the same change there 
as well, if you're up for doing that.
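
For context, a minimal sketch of the kind of thread-safe cache being discussed.
The class and method names are hypothetical, assuming Joda-Time's DateTimeZone;
this is not the actual patch:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    import org.joda.time.DateTimeZone;

    // Hypothetical sketch: a timezone cache that stays correct even when the
    // field holding it is static and hit by many threads at once.
    public class TimeZoneCache {
        private final ConcurrentMap<String, DateTimeZone> zones =
                new ConcurrentHashMap<String, DateTimeZone>();

        public DateTimeZone get(String timezoneId) {
            DateTimeZone zone = zones.get(timezoneId);
            if (zone == null) {
                // DateTimeZone.forID throws IllegalArgumentException for unknown IDs
                zone = DateTimeZone.forID(timezoneId);
                DateTimeZone existing = zones.putIfAbsent(timezoneId, zone);
                if (existing != null) {
                    zone = existing; // another thread won the race; reuse its value
                }
            }
            return zone;
        }
    }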


[GitHub] phoenix pull request: PHOENIX-1722 Speedup CONVERT_TZ function

2015-03-25 Thread tzolkincz
Github user tzolkincz commented on the pull request:

https://github.com/apache/phoenix/pull/42#issuecomment-85911379
  
Yes, it should be thread safe, but we could consider using the same lib for 
timezones and the same caching mechanism. So if you are in favor of duplicating the 
[caching](https://github.com/apache/phoenix/pull/42/files#diff-83862191fb126a769d5fd74be3534d90R98) 
code, I'll do that. It is a small fraction of code, but I'd rather make it more 
abstract. However, that's not the major question here.
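
A rough sketch of the more abstract direction (hypothetical names, reusing the
TimeZoneCache sketch above): both built-in functions delegate to one shared
cache instead of each duplicating the caching code:

    // Hypothetical sketch, not the actual patch: each function keeps its own
    // (non-static) cache instance but shares the caching implementation.
    class ConvertTimezoneFunction {
        private final TimeZoneCache cachedZones = new TimeZoneCache();

        long toZone(long utcMillis, String timezoneId) {
            // shift the instant by the zone's offset at that instant
            return utcMillis + cachedZones.get(timezoneId).getOffset(utcMillis);
        }
    }

    class TimezoneOffsetFunction {
        private final TimeZoneCache cachedZones = new TimeZoneCache();

        int offsetMinutes(long atMillis, String timezoneId) {
            return cachedZones.get(timezoneId).getOffset(atMillis) / 60000;
        }
    }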


[jira] [Commented] (PHOENIX-1287) Use the joni byte[] regex engine in place of j.u.regex

2015-03-25 Thread Shuxiong Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379505#comment-14379505
 ] 

Shuxiong Ye commented on PHOENIX-1287:
--

OK.

1. I ran into some trouble with the performance test. It needs some time to set 
up the environment.

2. I went through RegexpSplitFunction, but found it difficult to implement as 
well. I will work on it after the performance test.
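
For reference, a minimal sketch of matching directly against byte[] with joni,
the engine HBASE-11907 adopted (the exact calls below are my assumption of the
joni API, not code from this issue):

    import org.jcodings.specific.UTF8Encoding;
    import org.joni.Matcher;
    import org.joni.Option;
    import org.joni.Regex;

    public class JoniByteMatch {
        public static void main(String[] args) {
            byte[] pattern = "ab+c".getBytes();
            byte[] input = "xxabbbcyy".getBytes();
            // compile once, then match on raw bytes - no String round trip
            Regex regex = new Regex(pattern, 0, pattern.length, Option.NONE,
                    UTF8Encoding.INSTANCE);
            Matcher matcher = regex.matcher(input);
            int begin = matcher.search(0, input.length, Option.DEFAULT);
            if (begin >= 0) {
                System.out.println("match at byte " + begin + ", end "
                        + matcher.getEnd());
            }
        }
    }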

> Use the joni byte[] regex engine in place of j.u.regex
> --
>
> Key: PHOENIX-1287
> URL: https://issues.apache.org/jira/browse/PHOENIX-1287
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Shuxiong Ye
>  Labels: gsoc2015
>
> See HBASE-11907. We'd get a 2x perf benefit, plus it's driven off of byte[] 
> instead of strings. Thanks for the pointer, [~apurtell].



[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-25 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379514#comment-14379514
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi All,

What would be your preferred way for me to integrate the visualization 
techniques into the Phoenix system? I.e., if it had a web UI, for example, I'd 
be able to write a plugin/add-on and extend it to visualize the required data 
set.

Thanks,

Regards,
Nishani.
http://ayolajayamaha.blogspot.com

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



[jira] [Issue Comment Deleted] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-25 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-1118:
--
Comment: was deleted

(was: Hi Rajeshbabu,

Thanks for the reply. It helps clarify a matter I was unaware of, since I'm 
new to Apache Phoenix.

Thanks.

BR,
Nishani
http://ayolajayamaha.blogspot.com)

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



[GitHub] phoenix pull request: PHOENIX-1722 Speedup CONVERT_TZ function

2015-03-25 Thread gabrielreid
Github user gabrielreid commented on the pull request:

https://github.com/apache/phoenix/pull/42#issuecomment-85937432
  
Making it more abstract and avoiding code duplication definitely sounds like 
the way to go; if you're up for doing that, it would be great.


how to add UDF

2015-03-25 Thread 孟庆义(孟庆义)
Hi, Dears:

  Is there any guide doc showing how to add a UDF?

  And will that require re-deploying the server-side jar?

Dainel meng



Re: how to add UDF

2015-03-25 Thread Ravi Kiran
Hi Dainel,

   You can follow the progress of it at
https://issues.apache.org/jira/browse/PHOENIX-538.

Regards
Ravi

On Wed, Mar 25, 2015 at 2:33 AM, 孟庆义(孟庆义) wrote:

> Hi, Dears:
>
>   Is there any guide doc showing how to add a UDF?
>
>   And will that require re-deploying the server-side jar?
>
> Dainel meng


[jira] [Commented] (PHOENIX-1705) implement ARRAY_APPEND built in function

2015-03-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380168#comment-14380168
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1705:
-

Regarding the member variables, I was the one who gave him that comment. Sorry 
about that. I think James' point is valid here.

> implement ARRAY_APPEND built in function
> 
>
> Key: PHOENIX-1705
> URL: https://issues.apache.org/jira/browse/PHOENIX-1705
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function1.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function2.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function3.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function4.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function5.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function6.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function7.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function8.patch
>
>




[jira] [Commented] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380177#comment-14380177
 ] 

Gabriel Reid commented on PHOENIX-1770:
---

Patch looks pretty good -- one thing that I think would be good is if *all* 
the calls to {{subprocess.call}} in performance.py get checked, instead 
of only the last one.

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



[jira] [Comment Edited] (PHOENIX-1705) implement ARRAY_APPEND built in function

2015-03-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380168#comment-14380168
 ] 

ramkrishna.s.vasudevan edited comment on PHOENIX-1705 at 3/25/15 4:31 PM:
--

Regarding the member variables, I was the one who gave him that comment. Sorry 
about that. I think James' point is valid here.


was (Author: ram_krish):
Reg the member variables, I was the one who gave him the comment. Sorry about 
that. i think James point is valid here.

> implement ARRAY_APPEND built in function
> 
>
> Key: PHOENIX-1705
> URL: https://issues.apache.org/jira/browse/PHOENIX-1705
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function1.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function2.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function3.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function4.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function5.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function6.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function7.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function8.patch
>
>




[jira] [Created] (PHOENIX-1775) [build] implement style checks for java and python

2015-03-25 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created PHOENIX-1775:
-

 Summary: [build] implement style checks for java and python
 Key: PHOENIX-1775
 URL: https://issues.apache.org/jira/browse/PHOENIX-1775
 Project: Phoenix
  Issue Type: Task
Reporter: Nick Dimiduk


It would be good to clean up our house a little. Having a uniform codebase 
helps with onboarding new developers, so it would be nice to get something in 
before the new GSoC folks start really digging into patches.

[Checkstyle|https://maven.apache.org/plugins/maven-checkstyle-plugin/] is an 
obvious choice for Java code. There's a Maven plugin and plenty of examples of 
its use. We depend on Python for the user scripts, so we should add a pep8 
verification step too. On a quick search I don't see any Maven plugins, so an 
exec step combined with [pytest-pep8|https://pypi.python.org/pypi/pytest-pep8] 
should do the trick.
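
For illustration, a sketch of the pom.xml wiring this might involve; the 
plugin version, config location, and bound phase below are placeholders, not 
settled choices:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>2.15</version>
      <configuration>
        <!-- placeholder path to a shared ruleset -->
        <configLocation>src/main/config/checkstyle/checker.xml</configLocation>
        <failOnViolation>true</failOnViolation>
      </configuration>
      <executions>
        <execution>
          <phase>verify</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>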



[jira] [Commented] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Mark Tse (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380211#comment-14380211
 ] 

Mark Tse commented on PHOENIX-1770:
---

Makes sense. I'm thinking we return on the first error?

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380305#comment-14380305
 ] 

Thomas D'Silva commented on PHOENIX-1457:
-

[~jamestaylor] I think this patch is complete. 
[~jesse_yates] Do you mind reviewing the pull request when you get a chance, 
just to be sure I haven't missed anything?

> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



Re: [IMPORTANT] Some changes to branches and releases for 4.4+

2015-03-25 Thread Enis Söztutar
Ok great. I'll continue with the plan then. I'll send another update to notify
devs about the end state, as not everyone might be following closely.

Enis

On Tue, Mar 24, 2015 at 11:13 PM, James Taylor wrote:

> We can actually just set the pom version and the version in MetaDataProtocol
> to 4.4.0 now if we want.
>
>
> On Tuesday, March 24, 2015, James Taylor wrote:
>
>> True, good point. We can revert those right after we branch in prep for a
>> 4.4 release on 1.0.
>>
>> On Tuesday, March 24, 2015, Enis Söztutar wrote:
>>
>>>
>>> On Tue, Mar 24, 2015 at 11:02 PM, James Taylor wrote:
>>>
 The master branch already includes PHOENIX-1642, so we just keep it
 there. No need to revert anything or cherry-pick anything. Every
 commit being done to 4.x-HBase-1.x is being done for master (that's
 why it's just wasted overhead until it's needed).

>>>
>>> Like the pom.xml version, and
>>> https://issues.apache.org/jira/browse/PHOENIX-1766 have to be reverted
>>> if we fork the 4.x-HBase-1.0 branch from master, no?
>>>
>>>

 Your plan sounds fine, except this step isn't necessary (and no revert
 of anything currently in master is necessary):
  - After we fork 4.x-HBase-1.0, we cherry-pick PHOENIX-1642.

 Thanks,
 James

 On Tue, Mar 24, 2015 at 10:53 PM, Enis Söztutar wrote:
 > On Tue, Mar 24, 2015 at 5:09 PM, James Taylor wrote:
 >
 >> I'm fine with a 4.4 release for HBase 1.0, but it depends on demand -
 >> do our users need this?
 >
 >
 > I think so. HBase-1.0 usage is picking up, and we already saw users
 > asking for it. Though as usual, everything depends on whether there is
 > enough bandwidth to do the actual work (in terms of release, testing,
 > porting, etc).
 >
 >
 >> I think doing that out of master will work and
 >> we can create a branch for the release like we do with our other
 >> releases.
 >>
 >> When sub tasks of PHOENIX-1501 are ready, I think we'd want to put
 >> those in master and prior to that we'll need to create a
 >> 4.x-HBase-1.0. So we'll save the overhead of maintaining duplicate
 >> branches until that point.
 >>
 >> Make sense?
 >>
 >
 > Forking a 4.4 release for HBase-1.0 from master seems strange. We have
 > to revert the version, and make sure that it is really identical to
 > the 4.x-HBase-0.98 branch except for PHOENIX-1642. However, if you
 > think that an extra branch is really a lot of overhead, maybe we can
 > do this:
 >  - Delete 4.x-HBase-1.x now.
 >  - Keep the 4.x-HBase-0.98 and master branches.
 >  - Fork the 4.x-HBase-1.0 branch when whichever of these happens earlier:
 > -- 4.4 is about to be released
 > -- PHOENIX-1501, or PHOENIX-1681 or PHOENIX-1763 needs to be
 > committed.
 >  - After we fork 4.x-HBase-1.0, we cherry-pick PHOENIX-1642.
 >  - When PHOENIX-1501/PHOENIX-1681 and PHOENIX-1763 are ready and
 > HBase-1.1.0 is released, we can fork the 1.1 branch.
 >
 > Will that work? I am up for doing the work as long as we have a plan.
 >
 > Enis
 >
 >
 >>
 >> On Tue, Mar 24, 2015 at 4:50 PM, Enis Söztutar wrote:
 >> >>
 >> >> We've been putting stuff on feature branches that need more time.
 >> >> When PHOENIX-1681 or other sub tasks of PHOENIX-1501 are ready
 >> >> (after HBASE-12972 is in), we'll need a branch specific to HBase
 >> >> 1.1. Until then, I think it's just unneeded overhead.
 >> >
 >> >
 >> > That is why the branch for 1.1 is not created yet. The current
 >> > branch 4.x-HBase-1.x supports ONLY the HBase-1.0 release, not the
 >> > 1.1 release. I had named the branch 1.x hoping that it would support
 >> > both, but it seems that we cannot do this. Should we rename the
 >> > branch to 4.x-HBase-1.0 so that it is explicit? I am assuming that
 >> > we are still interested in making a Phoenix release (4.4) which will
 >> > support the HBase-1.0.x series. If that is not the case and we only
 >> > want to support 1.1, we can make that decision at the expense of
 >> > users who want to run Phoenix with HBase 1.0 releases.
 >> >
 >> > HBASE-12972 will not be committed to HBase-1.0.x releases, only to
 >> > 1.1.x releases, which (with the other changes like HBASE-11544)
 >> > requires yet another branch or release vehicle for HBase-1.1
 >> > support, different than 1.0 support.
 >> >
 >> >>
 >> >>
 >> >> >>
 >> >> >> >
 >> >> >> > On Tuesday, March 24, 2015, Enis Söztutar <enis@gmail.com> wrote:
 >> >> >> >
 >> >> >> >> You mean get rid of the 4.x-HBase-1.x branch? It is already
 >> >> >> >> created and has PHOENIX-1642. It builds with 

[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27151502
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/PhoenixRpcControllerFactory.java
 ---
@@ -26,52 +26,57 @@
 import org.apache.hadoop.hbase.ipc.DelegatingPayloadCarryingRpcController;
 import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController;
 import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
-import org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
 import org.apache.phoenix.util.SchemaUtil;
 
 /**
  * {@link RpcControllerFactory} that overrides the standard {@link 
PayloadCarryingRpcController} to
  * allow the configured index tables (via {@link #INDEX_TABLE_NAMES_KEY}) 
to use the Index priority.
  */
-public class IndexQosRpcControllerFactory extends RpcControllerFactory {
+public class PhoenixRpcControllerFactory extends RpcControllerFactory {
 
 public static final String INDEX_TABLE_NAMES_KEY = 
"phoenix.index.rpc.controller.index-tables";
 
-public IndexQosRpcControllerFactory(Configuration conf) {
+public PhoenixRpcControllerFactory(Configuration conf) {
 super(conf);
 }
 
 @Override
 public PayloadCarryingRpcController newController() {
 PayloadCarryingRpcController delegate = super.newController();
-return new IndexQosRpcController(delegate, conf);
+return new PhoenixRpcController(delegate, conf);
 }
 
 @Override
 public PayloadCarryingRpcController newController(CellScanner 
cellScanner) {
 PayloadCarryingRpcController delegate = 
super.newController(cellScanner);
-return new IndexQosRpcController(delegate, conf);
+return new PhoenixRpcController(delegate, conf);
 }
 
 @Override
 public PayloadCarryingRpcController newController(List 
cellIterables) {
 PayloadCarryingRpcController delegate = 
super.newController(cellIterables);
-return new IndexQosRpcController(delegate, conf);
+return new PhoenixRpcController(delegate, conf);
 }
 
-private class IndexQosRpcController extends 
DelegatingPayloadCarryingRpcController {
+private class PhoenixRpcController extends 
DelegatingPayloadCarryingRpcController {
 
-private int priority;
+private int indexPriority;
+private int metadataPriority;
 
-public IndexQosRpcController(PayloadCarryingRpcController 
delegate, Configuration conf) {
+public PhoenixRpcController(PayloadCarryingRpcController delegate, 
Configuration conf) {
 super(delegate);
-this.priority = 
PhoenixIndexRpcSchedulerFactory.getMinPriority(conf);
+this.indexPriority = 
PhoenixRpcSchedulerFactory.getIndexMinPriority(conf);
+this.metadataPriority = 
PhoenixRpcSchedulerFactory.getMetadataMinPriority(conf);
 }
 @Override
 public void setPriority(final TableName tn) {
-// if its an index table, then we override to the index 
priority
-if (!tn.isSystemTable() &&  
!SchemaUtil.isSystemDataTable(tn.getNameAsString())) {
-setPriority(this.priority);
+// this is function is called for hbase system tables, phoenix 
system tables and index tables 
+if (SchemaUtil.isSystemDataTable(tn.getNameAsString())) {
--- End diff --

Minor nit: how about moving SchemaUtil.isSystemDataTable() here and calling 
it useHighPriorityQueue() instead, as I don't think it'd be called elsewhere 
(since it's a somewhat random subset of our system tables)?
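
A sketch of what that suggestion might look like (hypothetical, not the
committed change; shown delegating to SchemaUtil, whereas the actual move
would inline those table-name checks):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.phoenix.util.SchemaUtil;

    abstract class PhoenixRpcControllerSketch {
        private int metadataPriority; // assigned from config, as in the patch

        // the renamed check, local to the controller
        private boolean useHighPriorityQueue(TableName tn) {
            return SchemaUtil.isSystemDataTable(tn.getNameAsString());
        }

        public void setPriority(final TableName tn) {
            if (useHighPriorityQueue(tn)) {
                setPriority(this.metadataPriority);
            }
            // otherwise fall through to the index/default priority handling
        }

        abstract void setPriority(int priority);
    }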


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27151696
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index 
priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + 
")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index 
priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority 
range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS 
+ ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
--- End diff --

Do we really need a range? Might be simpler if we just had a single config 
attribute for "SYSTEM_RPC_PRIORITY".


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread twdsilva
Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27151905
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/PhoenixRpcControllerFactory.java
 ---
@@ -26,52 +26,57 @@
 import org.apache.hadoop.hbase.ipc.DelegatingPayloadCarryingRpcController;
 import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController;
 import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
-import org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
 import org.apache.phoenix.util.SchemaUtil;
 
 /**
  * {@link RpcControllerFactory} that overrides the standard {@link 
PayloadCarryingRpcController} to
  * allow the configured index tables (via {@link #INDEX_TABLE_NAMES_KEY}) 
to use the Index priority.
  */
-public class IndexQosRpcControllerFactory extends RpcControllerFactory {
+public class PhoenixRpcControllerFactory extends RpcControllerFactory {
 
 public static final String INDEX_TABLE_NAMES_KEY = 
"phoenix.index.rpc.controller.index-tables";
 
-public IndexQosRpcControllerFactory(Configuration conf) {
+public PhoenixRpcControllerFactory(Configuration conf) {
 super(conf);
 }
 
 @Override
 public PayloadCarryingRpcController newController() {
 PayloadCarryingRpcController delegate = super.newController();
-return new IndexQosRpcController(delegate, conf);
+return new PhoenixRpcController(delegate, conf);
 }
 
 @Override
 public PayloadCarryingRpcController newController(CellScanner 
cellScanner) {
 PayloadCarryingRpcController delegate = 
super.newController(cellScanner);
-return new IndexQosRpcController(delegate, conf);
+return new PhoenixRpcController(delegate, conf);
 }
 
 @Override
 public PayloadCarryingRpcController newController(List 
cellIterables) {
 PayloadCarryingRpcController delegate = 
super.newController(cellIterables);
-return new IndexQosRpcController(delegate, conf);
+return new PhoenixRpcController(delegate, conf);
 }
 
-private class IndexQosRpcController extends 
DelegatingPayloadCarryingRpcController {
+private class PhoenixRpcController extends 
DelegatingPayloadCarryingRpcController {
 
-private int priority;
+private int indexPriority;
+private int metadataPriority;
 
-public IndexQosRpcController(PayloadCarryingRpcController 
delegate, Configuration conf) {
+public PhoenixRpcController(PayloadCarryingRpcController delegate, 
Configuration conf) {
 super(delegate);
-this.priority = 
PhoenixIndexRpcSchedulerFactory.getMinPriority(conf);
+this.indexPriority = 
PhoenixRpcSchedulerFactory.getIndexMinPriority(conf);
+this.metadataPriority = 
PhoenixRpcSchedulerFactory.getMetadataMinPriority(conf);
 }
 @Override
 public void setPriority(final TableName tn) {
-// if its an index table, then we override to the index 
priority
-if (!tn.isSystemTable() &&  
!SchemaUtil.isSystemDataTable(tn.getNameAsString())) {
-setPriority(this.priority);
+// this is function is called for hbase system tables, phoenix 
system tables and index tables 
+if (SchemaUtil.isSystemDataTable(tn.getNameAsString())) {
--- End diff --

Sure I will move and rename.


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread twdsilva
Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27152399
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index 
priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + 
")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index 
priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority 
range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS 
+ ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
--- End diff --

I spoke with @jyates; I added the min/max range in case we want to support 
setting priorities for specific tables in the future, but if this is not 
required I can remove it and just have a single priority. Should I make the 
same change for the index priorities?


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27152695
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index 
priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + 
")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index 
priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority 
range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS 
+ ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
--- End diff --

That'd be my vote. It'd cut down on what you'd need to configure too, 
right? If we have a default value for the priority, you wouldn't even need to 
specify this in hbase-site.xml. If we need more down the road, we can always 
add it (but I don't think we will).
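
A sketch of the single-attribute alternative being suggested; the key name
and default below are illustrative only:

    import org.apache.hadoop.conf.Configuration;

    class MetadataPrioritySketch {
        // illustrative names; the real attribute would live in QueryServices
        static final String METADATA_PRIORITY_ATTRIB = "phoenix.rpc.metadata.priority";
        static final int DEFAULT_METADATA_PRIORITY = 2000;

        static int getMetadataPriority(Configuration conf) {
            // one value with a default: nothing to add to hbase-site.xml
            // unless a deployment needs to override it
            return conf.getInt(METADATA_PRIORITY_ATTRIB, DEFAULT_METADATA_PRIORITY);
        }
    }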


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153366
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index 
priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + 
")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index 
priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority 
range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS 
+ ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
--- End diff --

I can see us wanting a reserved range for metadata updates - say 10 
different numbers - so we don't have to worry about conflicting with anything 
else that needs its own priority in the future. Unless everyone is carefully 
watching these numbers and fully knows what's going on, it's easy to get into 
situations where you have META = 1, SOME_FEATURE = 2, OTHER_META = 3, etc.

A reserved range is just safer for the future and is unlikely to ever be 
changed in the configuration, unless people are already doing priority 
changes, which is unlikely given how hard it was for us :). However, having 
the configuration is necessary, regardless of whether it's a range or a 
single value, to ensure people can make it work with their existing 
installations (and as proof against HBase invading our range space by 
accident).
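
A rough sketch of that reserved-range idea (the bounds and names below are 
illustrative assumptions, not values from the patch): claim a block of 
priorities up front so future features can't collide with it.

    import com.google.common.base.Preconditions;

    public class ReservedPriorityRange {
        // assumed reserved block of ten priorities for metadata calls
        static final int METADATA_PRIORITY_MIN = 2000;
        static final int METADATA_PRIORITY_MAX = 2009;

        static void checkInReservedRange(int priority) {
            // rejects anything outside the block, so nothing else can
            // quietly take a number inside the reserved range
            Preconditions.checkArgument(
                priority >= METADATA_PRIORITY_MIN && priority <= METADATA_PRIORITY_MAX,
                "Priority %s is outside the reserved metadata range (%s, %s)",
                priority, METADATA_PRIORITY_MIN, METADATA_PRIORITY_MAX);
        }
    }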


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153449
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
--- End diff --

Nit: looks like you formatted the header :-/


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153514
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
+private static RpcExecutor metadataRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-metataqueue", 30, 1, 300));
+
+/**
+ * Factory that uses a spyed RpcExecutor
+ */
+public static class TestPhoenixIndexRpcSchedulerFactory extends 
PhoenixRpcSchedulerFactory {
+@Override
+public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
+   

[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380489#comment-14380489
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153366
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index 
priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + 
")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index 
priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority 
range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS 
+ ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
--- End diff --

I can see us wanting a reserved range for metadata updates - say 10 
different numbers - so we don't have to worry about conflicting with anything 
else that needs its own priority in the future. Unless everyone is carefully 
watching these numbers and fully knows what's going on, it's easy to get into 
situations where you have META = 1, SOME_FEATURE = 2, OTHER_META = 3, etc.

A reserved range is just safer for the future and is unlikely to ever be 
changed in the configuration, unless people are already doing priority 
changes, which is unlikely given how hard it was for us :). However, having 
the configuration is necessary, regardless of whether it's a range or a 
single value, to ensure people can make it work with their existing 
installations (and as proof against HBase invading our range space by 
accident).


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153585
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
+private static RpcExecutor metadataRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-metataqueue", 30, 1, 300));
+
+/**
+ * Factory that uses a spyed RpcExecutor
+ */
+public static class TestPhoenixIndexRpcSchedulerFactory extends 
PhoenixRpcSchedulerFactory {
+@Override
+public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
+   

[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380492#comment-14380492
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153449
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
--- End diff --

Nit: looks like you formatted the header :-/


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380497#comment-14380497
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153585
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
+private static RpcExecutor metadataRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-metataqueue", 30, 1, 300));
+
+/**
+ * Factory that uses

[jira] [Commented] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380499#comment-14380499
 ] 

Gabriel Reid commented on PHOENIX-1770:
---

Yes, returning on the first non-zero exit status is what it should do. 

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380494#comment-14380494
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153514
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
+private static RpcExecutor metadataRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-metataqueue", 30, 1, 300));
+
+/**
+ * Factory that uses

[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153823
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
--- End diff --

Do the spies need to be reset for Mockito?
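
For reference, a minimal sketch of resetting shared spies between tests (the 
spied types and names here are stand-ins, not the actual RpcExecutor spies 
from the test):

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Before;
    import org.mockito.Mockito;

    public class SpyResetExample {
        // shared spies, standing in for the static executor spies above
        static List<String> indexSpy = Mockito.spy(new ArrayList<String>());
        static List<String> metadataSpy = Mockito.spy(new ArrayList<String>());

        @Before
        public void resetSpies() {
            // clears recorded invocations (and any stubbing) before each
            // test, so Mockito.verify counts don't leak across test methods
            Mockito.reset(indexSpy, metadataSpy);
        }
    }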


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153852
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
--- End diff --

This ensures it runs the methods single-threaded, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380509#comment-14380509
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153852
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
--- End diff --

This ensures it runs the methods single-threaded, right?


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380508#comment-14380508
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27153823
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static 
org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = 
SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new 
BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
--- End diff --

Do the spies need to be reset for Mockito?


> Use high priority queue for metadata endpoint calls
> ---
>
>

[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154098
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java 
---
@@ -40,28 +40,34 @@
 private static final int DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER = 10;
 
 private RpcScheduler delegate;
-private int minPriority;
-private int maxPriority;
-private RpcExecutor callExecutor;
+private int indexPriority;
+private int metadataPriority;
+private RpcExecutor indexCallExecutor;
+private RpcExecutor metadataCallExecutor;
 private int port;
 
-public PhoenixIndexRpcScheduler(int indexHandlerCount, Configuration 
conf,
-RpcScheduler delegate, int minPriority, int maxPriority) {
+public PhoenixRpcScheduler(int indexHandlerCount, int 
metadataHandlerCount, Configuration conf,
--- End diff --

Can we do layered handlers instead of clumping them all into one? HBase did 
this with its meta, regular, and replication queues and it is terribly ugly.

Instead, maybe have this class instantiate the necessary handlers and then 
chain through them until one handler's range (min/max priority) applies to 
the incoming message, at which point it passes the message on to that queue. 
You could also do some checking to ensure the ranges don't overlap, etc.

At the very least, this will get rid of the layers of if/else below.
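
Roughly what the chaining could look like (interfaces simplified and 
hypothetical, not the HBase RpcScheduler API): each layer owns a priority 
range and either takes the call or lets it fall through to the next layer.

    import java.util.List;

    interface PriorityHandler {
        int minPriority();
        int maxPriority();
        void dispatch(Object call);
    }

    class ChainedDispatcher {
        private final List<PriorityHandler> handlers;
        private final PriorityHandler fallback; // e.g. the delegate scheduler

        ChainedDispatcher(List<PriorityHandler> handlers, PriorityHandler fallback) {
            this.handlers = handlers;
            this.fallback = fallback;
        }

        void dispatch(int priority, Object call) {
            for (PriorityHandler h : handlers) {
                // first handler whose range covers the priority takes the call
                if (priority >= h.minPriority() && priority <= h.maxPriority()) {
                    h.dispatch(call);
                    return;
                }
            }
            fallback.dispatch(call);
        }
    }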


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380512#comment-14380512
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154098
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java 
---
@@ -40,28 +40,34 @@
 private static final int DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER = 10;
 
 private RpcScheduler delegate;
-private int minPriority;
-private int maxPriority;
-private RpcExecutor callExecutor;
+private int indexPriority;
+private int metadataPriority;
+private RpcExecutor indexCallExecutor;
+private RpcExecutor metadataCallExecutor;
 private int port;
 
-public PhoenixIndexRpcScheduler(int indexHandlerCount, Configuration 
conf,
-RpcScheduler delegate, int minPriority, int maxPriority) {
+public PhoenixRpcScheduler(int indexHandlerCount, int 
metadataHandlerCount, Configuration conf,
--- End diff --

Can we do layered handlers instead of clumping them all into one? HBase did 
this with its meta, regular, and replication queues and it is terribly ugly.

Instead, maybe have this class instantiate the necessary handlers and then 
chain through them until one handler's range (min/max priority) applies to 
the incoming message, at which point it passes the message on to that queue. 
You could also do some checking to ensure the ranges don't overlap, etc.

At the very least, this will get rid of the layers of if/else below.


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154220
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index 
priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + 
")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index 
priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority 
range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS 
+ ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+//validate index and metadata priorities do not overlap
+Preconditions.checkArgument(doesNotOverlap(indexMinPriority, 
indexMaxPriority, metadataMinPriority, metadataMaxPriority), "Priority ranges ("
++ indexMinPriority + ",  " + indexMaxPriority + ") and  (" 
+ metadataMinPriority + ", " + metadataMaxPriority
--- End diff --

super nit: Might be cleaner if you just use a format string here
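
What the nit suggests, roughly (the doesNotOverlap helper mirrors the diff; 
the wrapper class is illustrative): Guava's Preconditions.checkArgument 
takes a %s message template, which reads cleaner than string concatenation.

    import com.google.common.base.Preconditions;

    class PriorityValidation {
        // stand-in for the patch's doesNotOverlap helper
        static boolean doesNotOverlap(int aMin, int aMax, int bMin, int bMax) {
            return aMax < bMin || bMax < aMin;
        }

        static void validateNoOverlap(int indexMin, int indexMax,
                                      int metaMin, int metaMax) {
            // %s placeholders in the template replace the concatenation
            // used in the original check
            Preconditions.checkArgument(
                doesNotOverlap(indexMin, indexMax, metaMin, metaMax),
                "Priority ranges (%s, %s) and (%s, %s) must not overlap",
                indexMin, indexMax, metaMin, metaMax);
        }
    }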


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380515#comment-14380515
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154220
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, 
RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index 
priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + 
")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index 
priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority 
range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS 
+ ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = 
conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = 
conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, 
QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+//validate index and metadata priorities do not overlap
+Preconditions.checkArgument(doesNotOverlap(indexMinPriority, 
indexMaxPriority, metadataMinPriority, metadataMaxPriority), "Priority ranges ("
++ indexMinPriority + ",  " + indexMaxPriority + ") and  (" 
+ metadataMinPriority + ", " + metadataMaxPriority
--- End diff --

super nit: Might be cleaner if you just use a format string here


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154463
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java 
---
@@ -40,28 +40,34 @@
 private static final int DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER = 10;
 
 private RpcScheduler delegate;
-private int minPriority;
-private int maxPriority;
-private RpcExecutor callExecutor;
+private int indexPriority;
+private int metadataPriority;
+private RpcExecutor indexCallExecutor;
+private RpcExecutor metadataCallExecutor;
 private int port;
 
-public PhoenixIndexRpcScheduler(int indexHandlerCount, Configuration 
conf,
-RpcScheduler delegate, int minPriority, int maxPriority) {
+public PhoenixRpcScheduler(int indexHandlerCount, int 
metadataHandlerCount, Configuration conf,
--- End diff --

Would we need separate entries for each RpcScheduler in hbase-site.xml 
then?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380522#comment-14380522
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154463
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java 
---
@@ -40,28 +40,34 @@
 private static final int DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER = 10;
 
 private RpcScheduler delegate;
-private int minPriority;
-private int maxPriority;
-private RpcExecutor callExecutor;
+private int indexPriority;
+private int metadataPriority;
+private RpcExecutor indexCallExecutor;
+private RpcExecutor metadataCallExecutor;
 private int port;
 
-public PhoenixIndexRpcScheduler(int indexHandlerCount, Configuration conf,
-RpcScheduler delegate, int minPriority, int maxPriority) {
+public PhoenixRpcScheduler(int indexHandlerCount, int metadataHandlerCount, Configuration conf,
--- End diff --

Would we need separate entries for each RpcScheduler in hbase-site.xml then?


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread jyates
Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154898
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java 
---
@@ -40,28 +40,34 @@
 private static final int DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER = 10;
 
 private RpcScheduler delegate;
-private int minPriority;
-private int maxPriority;
-private RpcExecutor callExecutor;
+private int indexPriority;
+private int metadataPriority;
+private RpcExecutor indexCallExecutor;
+private RpcExecutor metadataCallExecutor;
 private int port;
 
-public PhoenixIndexRpcScheduler(int indexHandlerCount, Configuration conf,
-RpcScheduler delegate, int minPriority, int maxPriority) {
+public PhoenixRpcScheduler(int indexHandlerCount, int metadataHandlerCount, Configuration conf,
--- End diff --

No, we can have standard defaults and then the ability to override them in 
configuration. Each scheduler would likely need its own limits config, though.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
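
To make the two-executor design concrete, here is a rough sketch of the dispatch logic implied by the quoted diff (the field names come from the diff; the way the call's priority is read is an assumption, since the accessor differs across HBase versions, and the committed code may differ):

    @Override
    public void dispatch(CallRunner callTask) throws InterruptedException, IOException {
        // assumed accessor for the request priority set by the client-side controller
        int priority = callTask.getCall().getHeader().getPriority();
        if (priority == indexPriority) {
            indexCallExecutor.dispatch(callTask);      // dedicated pool for index updates
        } else if (priority == metadataPriority) {
            metadataCallExecutor.dispatch(callTask);   // dedicated pool for MetaDataService calls
        } else {
            delegate.dispatch(callTask);               // everything else uses the wrapped scheduler
        }
    }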


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380539#comment-14380539
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user jyates commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27154898
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java 
---
@@ -40,28 +40,34 @@
 private static final int DEFAULT_MAX_CALLQUEUE_LENGTH_PER_HANDLER = 10;
 
 private RpcScheduler delegate;
-private int minPriority;
-private int maxPriority;
-private RpcExecutor callExecutor;
+private int indexPriority;
+private int metadataPriority;
+private RpcExecutor indexCallExecutor;
+private RpcExecutor metadataCallExecutor;
 private int port;
 
-public PhoenixIndexRpcScheduler(int indexHandlerCount, Configuration conf,
-RpcScheduler delegate, int minPriority, int maxPriority) {
+public PhoenixRpcScheduler(int indexHandlerCount, int metadataHandlerCount, Configuration conf,
--- End diff --

No, we can have standard defaults and then the ability to override them in 
configuration. Each scheduler would likely need its own limits config, though.


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1756) Add Month() and Second() buildin functions

2015-03-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380541#comment-14380541
 ] 

James Taylor commented on PHOENIX-1756:
---

+1. [~rajeshbabu] - would you mind committing on behalf of [~ayingshu]? I like 
the idea of WEEK(date), Minute(), Hour(), DAYOFMONTH(). Nice work, Alicia. 
Thanks for the contributions.

> Add Month() and Second() buildin functions
> --
>
> Key: PHOENIX-1756
> URL: https://issues.apache.org/jira/browse/PHOENIX-1756
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: Phoenix-1756-v1.patch, Phoenix-1756-v2.patch, 
> Phoenix-1756.patch
>
>
> From Oracle doc: Month(date) and Second(date). Very similar to the Year(date) 
> built-in. 
> MONTH(date)  An integer from 1 to 12 representing the month component of date
> SECOND(time) An integer from 0 to 59 representing the second component of time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
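
For reference, the new built-ins would be used like the existing YEAR() function; a hypothetical query (table and column names invented for illustration):

    -- events is a hypothetical table with a TIMESTAMP column created_ts
    SELECT MONTH(created_ts), SECOND(created_ts) FROM events;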


[jira] [Updated] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Mark Tse (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Tse updated PHOENIX-1770:
--
Attachment: PHOENIX-1770.patch

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch, PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
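
A minimal sketch of the suggested change (assuming java_cmd is the command string psql.py already builds; this is not the attached patch):

    import subprocess
    import sys

    try:
        subprocess.check_call(java_cmd, shell=True)
    except subprocess.CalledProcessError as e:
        # propagate the Java process's non-zero exit code to the shell
        sys.exit(e.returncode)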


[jira] [Updated] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Mark Tse (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Tse updated PHOENIX-1770:
--
Attachment: (was: PHOENIX-1770.patch)

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread twdsilva
Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27155154
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
+private static RpcExecutor metadataRpcExecutor = Mockito.spy(new BalancedQueueRpcExecutor("test-metataqueue", 30, 1, 300));
+
+/**
+ * Factory that uses a spied RpcExecutor
+ */
+public static class TestPhoenixIndexRpcSchedulerFactory extends PhoenixRpcSchedulerFactory {
+@Override
+public RpcScheduler create(Configuration conf, RegionServerServices services) {
+ 

[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380545#comment-14380545
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27155154
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PhoenixRpcIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may 
not use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License for the specific language
+ * governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end.index;
+
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR;
+import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
+import static org.apache.phoenix.util.PhoenixRuntime.PHOENIX_TEST_DRIVER_URL_PARAM;
+import static org.apache.phoenix.util.TestUtil.LOCALHOST;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.ipc.BalancedQueueRpcExecutor;
+import org.apache.hadoop.hbase.ipc.CallRunner;
+import org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.ipc.RpcExecutor;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
+import org.apache.phoenix.hbase.index.PhoenixRpcControllerFactory;
+import org.apache.phoenix.hbase.index.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.phoenix.jdbc.PhoenixTestDriver;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class PhoenixRpcIT extends BaseTest {
+
+private static final String SCHEMA_NAME = "S";
+private static final String INDEX_TABLE_NAME = "I";
+private static final String DATA_TABLE_FULL_NAME = SchemaUtil.getTableName(SCHEMA_NAME, "T");
+private static final String INDEX_TABLE_FULL_NAME = SchemaUtil.getTableName(SCHEMA_NAME, "I");
+private static final int NUM_SLAVES = 2;
+
+private static String url;
+private static PhoenixTestDriver driver;
+private HBaseTestingUtility util;
+private HBaseAdmin admin;
+private Configuration conf;
+private static RpcExecutor indexRpcExecutor = Mockito.spy(new BalancedQueueRpcExecutor("test-index-queue", 30, 1, 300));
+private static RpcExecutor metadataRpcExecutor = Mockito.spy(new BalancedQueueRpcExecutor("test-metataqueue", 30, 1, 300));
+
+/**
+ * Factory that us

[jira] [Updated] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Mark Tse (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Tse updated PHOENIX-1770:
--
Attachment: PHOENIX-1770.patch

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Mark Tse (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Tse updated PHOENIX-1770:
--
Attachment: (was: PHOENIX-1770.patch)

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1770) psql.py returns 0 even if an error has occurred

2015-03-25 Thread Mark Tse (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380562#comment-14380562
 ] 

Mark Tse commented on PHOENIX-1770:
---

Done. I decided to leave the file clean-up for performance.py at the end so 
that the files can be inspected when an error occurs.

> psql.py returns 0 even if an error has occurred
> ---
>
> Key: PHOENIX-1770
> URL: https://issues.apache.org/jira/browse/PHOENIX-1770
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.3
>Reporter: Mark Tse
> Fix For: 4.3.1
>
> Attachments: PHOENIX-1770.patch
>
>
> psql.py should exit with the return code provided by 
> `subprocess.check_call(java_cmd, shell=True)`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1457 Use high priority queue for met...

2015-03-25 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27158033
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + ")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS + ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
--- End diff --

The range is implied and documented by having a config for PHOENIX_META 
priority and config for PHOENIX_INDEX priority.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1457) Use high priority queue for metadata endpoint calls

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380609#comment-14380609
 ] 

ASF GitHub Bot commented on PHOENIX-1457:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/55#discussion_r27158033
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixRpcSchedulerFactory.java
 ---
@@ -62,30 +62,59 @@ public RpcScheduler create(Configuration conf, RegionServerServices services) {
 return delegate;
 }
 
+// get the index priority configs
 int indexHandlerCount = conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, QueryServicesOptions.DEFAULT_INDEX_HANDLER_COUNT);
-int minPriority = getMinPriority(conf);
-int maxPriority = conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
-// make sure the ranges are outside the warning ranges
-Preconditions.checkArgument(maxPriority > minPriority, "Max index priority (" + maxPriority
-+ ") must be larger than min priority (" + minPriority + ")");
-boolean allSmaller =
-minPriority < HConstants.REPLICATION_QOS
-&& maxPriority < HConstants.REPLICATION_QOS;
-boolean allLarger = minPriority > HConstants.HIGH_QOS;
-Preconditions.checkArgument(allSmaller || allLarger, "Index priority range (" + minPriority
-+ ",  " + maxPriority + ") must be outside HBase priority range ("
-+ HConstants.REPLICATION_QOS + ", " + HConstants.HIGH_QOS + ")");
+int indexMinPriority = getIndexMinPriority(conf);
+int indexMaxPriority = conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, QueryServicesOptions.DEFAULT_INDEX_MAX_PRIORITY);
+validatePriority(indexMinPriority, indexMaxPriority);
+
+// get the metadata priority configs
+int metadataHandlerCount = conf.getInt(QueryServices.INDEX_HANDLER_COUNT_ATTRIB, QueryServicesOptions.DEFAULT_METADATA_HANDLER_COUNT);
+int metadataMinPriority = getMetadataMinPriority(conf);
+int metadataMaxPriority = conf.getInt(QueryServices.MAX_INDEX_PRIOIRTY_ATTRIB, QueryServicesOptions.DEFAULT_METADATA_MAX_PRIORITY);
--- End diff --

The range is implied and documented by having a config for PHOENIX_META 
priority and config for PHOENIX_INDEX priority.


> Use high priority queue for metadata endpoint calls
> ---
>
> Key: PHOENIX-1457
> URL: https://issues.apache.org/jira/browse/PHOENIX-1457
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: 4.3.1
>
> If the RS hosting the system table gets swamped, then we'd be bottlenecked 
> waiting for the response back before running a query when we check if the 
> metadata is in sync. We should run endpoint coprocessor calls for 
> MetaDataService at a high priority to avoid that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-385) Support negative literal directly

2015-03-25 Thread Dave Hacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Hacker updated PHOENIX-385:

Attachment: Phoenix-358.patch

> Support negative literal directly
> -
>
> Key: PHOENIX-385
> URL: https://issues.apache.org/jira/browse/PHOENIX-385
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Dave Hacker
> Attachments: Phoenix-358.patch
>
>
> For example, SPLIT ON / LIMIT etc. take a literal as input.
> Negative values should be supported; at present only a positive number is 
> supported. This might not be true for LIMIT, but other cases like SPLIT ON 
> might need it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1756) Add Month() and Second() buildin functions

2015-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380775#comment-14380775
 ] 

Hudson commented on PHOENIX-1756:
-

FAILURE: Integrated in Phoenix-master #636 (See 
[https://builds.apache.org/job/Phoenix-master/636/])
PHOENIX-1756 Add Month() and Second() buildin functions(Alicia Ying Shu) 
(rajeshbabu: rev ad9248ee54d386bd6759d0ff6c71d9ed70de8693)
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SecondFunction.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/MonthFunction.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/YearMonthSecondFunctionIT.java
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java


> Add Month() and Second() buildin functions
> --
>
> Key: PHOENIX-1756
> URL: https://issues.apache.org/jira/browse/PHOENIX-1756
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1756-v1.patch, Phoenix-1756-v2.patch, 
> Phoenix-1756.patch
>
>
> From Oracle doc: Month(date) and Second(date). Very similar to the Year(date) 
> built-in. 
> MONTH(date)  An integer from 1 to 12 representing the month component of date
> SECOND(time) An integer from 0 to 59 representing the second component of time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Last call for 4.3.1 JIRAs

2015-03-25 Thread Samarth Jain
We are planning on cutting a release candidate for 4.3.1 by tomorrow morning.
As of now we have the following JIRAs targeted for 4.3.1 that are in
progress:

https://issues.apache.org/jira/browse/PHOENIX-1457
https://issues.apache.org/jira/browse/PHOENIX-1770

Let us know if there is any other work that you would like to be included
in 4.3.1.

Thanks,
Samarth (RM for 4.3.1)


[jira] [Updated] (PHOENIX-385) Support negative literal directly

2015-03-25 Thread Dave Hacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Hacker updated PHOENIX-385:

Attachment: (was: Phoenix-358.patch)

> Support negative literal directly
> -
>
> Key: PHOENIX-385
> URL: https://issues.apache.org/jira/browse/PHOENIX-385
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Dave Hacker
>
> For example, SPLIT ON / LIMIT etc. take a literal as input.
> Negative values should be supported; at present only a positive number is 
> supported. This might not be true for LIMIT, but other cases like SPLIT ON 
> might need it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1776) The literal -1.0 (floating point) should not be converted to -1 (Integer)

2015-03-25 Thread Dave Hacker (JIRA)
Dave Hacker created PHOENIX-1776:


 Summary: The literal -1.0 (floating point) should not be converted 
to -1 (Integer)
 Key: PHOENIX-1776
 URL: https://issues.apache.org/jira/browse/PHOENIX-1776
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dave Hacker
Assignee: Dave Hacker






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1776) The literal -1.0 (floating point) should not be converted to -1 (Integer)

2015-03-25 Thread Dave Hacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Hacker updated PHOENIX-1776:
-
Attachment: Phoenix-1776.patch

> The literal -1.0 (floating point) should not be converted to -1 (Integer)
> -
>
> Key: PHOENIX-1776
> URL: https://issues.apache.org/jira/browse/PHOENIX-1776
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dave Hacker
>Assignee: Dave Hacker
> Attachments: Phoenix-1776.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1776) The literal -1.0 (floating point) should not be converted to -1 (Integer)

2015-03-25 Thread Dave Hacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Hacker updated PHOENIX-1776:
-
Description: 
CREATE TABLE test (
id VARCHAR not null primary key,
name VARCHAR,
lat FLOAT
);

UPSERT INTO test(id,name,lat) VALUES ('testid', 'testname', -1.00);

Error: ERROR 203 (22005): Type mismatch. FLOAT and BIGINT for -1

> The literal -1.0 (floating point) should not be converted to -1 (Integer)
> -
>
> Key: PHOENIX-1776
> URL: https://issues.apache.org/jira/browse/PHOENIX-1776
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dave Hacker
>Assignee: Dave Hacker
> Attachments: Phoenix-1776.patch
>
>
> CREATE TABLE test (
> id VARCHAR not null primary key,
> name VARCHAR,
> lat FLOAT
> );
> UPSERT INTO test(id,name,lat) VALUES ('testid', 'testname', -1.00);
> Error: ERROR 203 (22005): Type mismatch. FLOAT and BIGINT for -1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1776) The literal -1.0 (floating point) should not be converted to -1 (Integer)

2015-03-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380863#comment-14380863
 ] 

James Taylor commented on PHOENIX-1776:
---

+1 

> The literal -1.0 (floating point) should not be converted to -1 (Integer)
> -
>
> Key: PHOENIX-1776
> URL: https://issues.apache.org/jira/browse/PHOENIX-1776
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dave Hacker
>Assignee: Dave Hacker
> Attachments: Phoenix-1776.patch
>
>
> CREATE TABLE test (
> id VARCHAR not null primary key,
> name VARCHAR,
> lat FLOAT
> );
> UPSERT INTO test(id,name,lat) VALUES ('testid', 'testname', -1.00);
> Error: ERROR 203 (22005): Type mismatch. FLOAT and BIGINT for -1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1777) Allow adding built in functions in patch releases

2015-03-25 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-1777:
-

 Summary: Allow adding built in functions in patch releases
 Key: PHOENIX-1777
 URL: https://issues.apache.org/jira/browse/PHOENIX-1777
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain


Currently we don't allow adding built-in functions in patch releases because a 
newer client (containing the new built-in function) connecting to an older 
server results in an ArrayIndexOutOfBoundsException. The right way to handle 
this would be to instead throw a SQLException, or a RuntimeException wrapping a 
SQLException, with the exception code set to SQLExceptionCode.UNKNOWN_FUNCTION. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
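
As a sketch of the suggested behavior, using Phoenix's usual exception-building idiom (functionName is an assumed local; the surrounding deserialization code is omitted):

    // Instead of indexing past the end of the known built-in function array
    // when a newer client sends an unknown function, fail cleanly:
    throw new SQLExceptionInfo.Builder(SQLExceptionCode.UNKNOWN_FUNCTION)
            .setMessage(functionName)
            .build().buildException();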


[jira] [Created] (PHOENIX-1778) Change pom and MetaDataProtocol version to 4.4.0 in master

2015-03-25 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1778:
-

 Summary: Change pom and MetaDataProtocol version to 4.4.0 in master
 Key: PHOENIX-1778
 URL: https://issues.apache.org/jira/browse/PHOENIX-1778
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1778) Change pom and MetaDataProtocol version to 4.4.0 in master

2015-03-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1778:
--
Attachment: PHOENIX-1778.patch

> Change pom and MetaDataProtocol version to 4.4.0 in master
> --
>
> Key: PHOENIX-1778
> URL: https://issues.apache.org/jira/browse/PHOENIX-1778
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-1778.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1778) Change pom and MetaDataProtocol version to 4.4.0 in master

2015-03-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1778.
---
Resolution: Fixed

> Change pom and MetaDataProtocol version to 4.4.0 in master
> --
>
> Key: PHOENIX-1778
> URL: https://issues.apache.org/jira/browse/PHOENIX-1778
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-1778.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [IMPORTANT] Some changes to branches and releases for 4.4+

2015-03-25 Thread James Taylor
Sounds like a plan. I changed the pom and MetaDataProtocol version to
4.4.0 on master, so we're all set there.

If you could look at why our master build is failing, that'd be good.
Looks like the 1.0.1-SNAPSHOT build is messed up. We're getting this
exception for every unit test:

Caused by: java.lang.NoSuchFieldError: NO_NEXT_INDEXED_KEY
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:590)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:267)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:181)
at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)

On Wed, Mar 25, 2015 at 11:30 AM, Enis Söztutar  wrote:
> Ok great. I'll continue with the plan then. I'll send an update again for
> devs to notify about the end state as not every one might be following
> closely.
>
> Enis
>
> On Tue, Mar 24, 2015 at 11:13 PM, James Taylor 
> wrote:
>
>> We can actually just set the pom version and version in MetaDataProtocol
>> to 4.4.0 now if we want.
>>
>>
>> On Tuesday, March 24, 2015, James Taylor  wrote:
>>
>>> True, good point. We can revert those right after we branch in prep for a
>>> 4.4 release on 1.0.
>>>
>>> On Tuesday, March 24, 2015, Enis Söztutar  wrote:
>>>

 On Tue, Mar 24, 2015 at 11:02 PM, James Taylor 
 wrote:

> The master branch already includes PHOENIX-1642, so we just keep it
> there.  No need to revert anything or cherry-pick anything. Every
> commit being done to 4.x-HBase-1.x is being done for master (that's
> why it's just wasted overhead until it's needed).
>

 Like the pom.xml version, and
 https://issues.apache.org/jira/browse/PHOENIX-1766 have to be reverted
 if we fork the 4.x-HBase-1.0 branch from master, no?


>
> Your plan sounds fine, except this step isn't necessary (and no revert
> of anything currently in master is necessary):
>  - After we fork 4.x-HBase-1.0, we cherry-pick PHOENIX-1642.
>
> Thanks,
> James
>
> On Tue, Mar 24, 2015 at 10:53 PM, Enis Söztutar 
> wrote:
> > On Tue, Mar 24, 2015 at 5:09 PM, James Taylor  >
> > wrote:
> >
> >> I'm fine with a 4.4 release for HBase 1.0, but it depends on demand -
> >> do our users need this?
> >
> >
> > I think so. HBase-1.0 usage is picking up, and we already saw users
> asking
> > for it. Though as usual, everything depends on whether there is enough
> > bandwidth to do the actual work (in terms of release, testing,
> porting,
> > etc).
> >
> >
> >> I think doing that out of master will work and
> >> we can create a branch for the release like we do with our other
> >> releases.
> >>
> >> When sub tasks of PHOENIX-1501 are ready, I think we'd want to put
> >> those in master and prior to that we'll need to create a
> >> 4.x-HBase-1.0. So we'll save the overhead of maintaining duplicate
> >> branches until that point.
> >>
> >> Make sense?
> >>
> >
> > Forking 4.4 release for HBase-1.0 from master seems strange. We have
> to
> > revert back the version, and make sure that it is really identical to
> the
> > 4.x-HBase-0.98 branch except for PHOENIX-1642. However, if you think
> that
> > an extra branch is really a lot overhead, maybe we can do this:
> >  - Delete 4.x-HBase-1.x now.
> >  - Keep 4.x-HBase-0.98 and master branches.
> >  - Fork 4.x-HBase-1.0 branch when whichever of these happens earlier:
> > -- 4.4 is about to be released
> > -- PHOENIX-1501, or PHOENIX-1681 or PHOENIX-1763 needs to be
> committed.
> >  - After we fork 4.x-HBase-1.0, we cherry-pick PHOENIX-1642.
> >  - When PHOENIX-1501/PHOENIX-1681 and PHOENIX-1763 is ready and
> HBase-1.1.0
> > is released we can fork 1.1 branch.
> >
> > Will that work? I am up to doing the work as long as we have a plan.
> >
> > Enis
> >
> >
> >>
> >> On Tue, Mar 24, 2015 at 4:50 PM, Enis Söztutar 
> wrote:
> >> >>
> >> >> We've been putting stuff on feature branches that need more time.
> When
> >> >> PHOENIX-1681 or other sub tasks of PHOENIX-1501 are ready (after
> >> >> HBASE-12972 is in), we'll need a branch specific to HBase 1.1.
> Until
> >> >> then, I think it's just unneeded overhead.
> >> >
> >> >
> >> > That is why the branch for 1.1 is not created yet. The current
> branch
> >> > 4.x-HBase-1.x supports ONLY HBase-1.0 release, not 1.1 release. I
> had
> >> named
> >> > the branch 1.x hoping that it will support both, but it seem that
> we
> >> cannot
> >> > do this. Should we rename the branch to 4.x-HBase-1.0 so 

[jira] [Commented] (PHOENIX-1778) Change pom and MetaDataProtocol version to 4.4.0 in master

2015-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380981#comment-14380981
 ] 

Hudson commented on PHOENIX-1778:
-

FAILURE: Integrated in Phoenix-master #637 (See 
[https://builds.apache.org/job/Phoenix-master/637/])
PHOENIX-1778 Change pom and MetaDataProtocol version to 4.4.0 in master 
(jamestaylor: rev d70f389ed65315f9b27e2a6f971a667ab4c447ae)
* phoenix-pherf/pom.xml
* phoenix-flume/pom.xml
* phoenix-assembly/pom.xml
* phoenix-core/pom.xml
* pom.xml
* phoenix-pig/pom.xml
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java


> Change pom and MetaDataProtocol version to 4.4.0 in master
> --
>
> Key: PHOENIX-1778
> URL: https://issues.apache.org/jira/browse/PHOENIX-1778
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-1778.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-03-25 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378940#comment-14378940
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 3/25/15 11:40 PM:


Modified the grammar as pointed out. I moved ORDER BY and LIMIT to 
hinted_select_node. However, hinted_select_node then has to refer to 
hinted_set_select_node directly rather than through select_node. Many more 
tests are failing than before I made the modifications. Parameter count is one 
problem; the aggregate and sequence checks are other context-related issues 
that could also be wrong. This could potentially destabilize Phoenix, and the 
grammar change is not critical for UNION ALL. I would suggest using Jira-1758 
to track the grammar changes once we have thought through their consequences 
and what they really gain. Thanks. [~jamestaylor] [~maryannxue]


was (Author: aliciashu):
Modified the grammar as pointed out. Fixing existing tests.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1731) Add getNextIndexedKey() to IndexHalfStoreFileReader and FilteredKeyValueScanner

2015-03-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381157#comment-14381157
 ] 

Enis Soztutar commented on PHOENIX-1731:


This seems committed. [~lhofhansl] do you mind resolving? 

> Add getNextIndexedKey() to IndexHalfStoreFileReader and 
> FilteredKeyValueScanner
> ---
>
> Key: PHOENIX-1731
> URL: https://issues.apache.org/jira/browse/PHOENIX-1731
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.2.3, 4.3.1
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Attachments: 1731.txt
>
>
> See HBASE-13109, which changes two private interfaces and thereby breaks 
> compilation of Phoenix, which uses these interfaces.
> On the jira we decided not to remove those changes (that's why they're marked 
> private).
> But we can easily fix this in Phoenix by adding these methods to the classes 
> that cause the problem, being careful not to add the override annotation. The 
> method can safely return null, in which case the optimization will not be 
> used when these classes are used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
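
A sketch of the kind of shim the description calls for (illustrative only; as noted later in this thread, the method's signature differs between the 0.98 and 1.0 patches of HBASE-13109, so the exact return type depends on the branch):

    // Added to IndexHalfStoreFileReader's scanner and FilteredKeyValueScanner.
    // Deliberately no @Override, so the classes still compile against HBase
    // versions that predate HBASE-13109.
    public Cell getNextIndexedKey() {
        return null; // disables the reseek optimization when these classes are used
    }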


Re: [IMPORTANT] Some changes to branches and releases for 4.4+

2015-03-25 Thread Enis Söztutar
Seems related to  https://issues.apache.org/jira/browse/PHOENIX-1731.
1.0.1-SNAPSHOT did not contain
https://issues.apache.org/jira/browse/HBASE-13109. The current one does,
which requires another change since the method signature is different
between 0.98 patch and 1.0 patch.

I have pushed an addendum patch. Let's see whether it helps.

Enis

On Wed, Mar 25, 2015 at 3:52 PM, James Taylor 
wrote:

> Sounds like a plan. I changed the pom and MetaDataProtocol version to
> 4.4.0 on master, so we're all set there.
>
> If you could look at why our master build is failing, that'd be good.
> Looks like the 1.0.1-SNAPSHOT build is messed up. We're getting this
> exception for every unit test:
>
> Caused by: java.lang.NoSuchFieldError: NO_NEXT_INDEXED_KEY
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:590)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:267)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:181)
> at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
>
> On Wed, Mar 25, 2015 at 11:30 AM, Enis Söztutar  wrote:
> > Ok great. I'll continue with the plan then. I'll send an update again for
> > devs to notify about the end state as not every one might be following
> > closely.
> >
> > Enis
> >
> > On Tue, Mar 24, 2015 at 11:13 PM, James Taylor 
> > wrote:
> >
> >> We can actually just set the pom version and version in MetaDataProtocol
> >> to 4.4.0 now if we want.
> >>
> >>
> >> On Tuesday, March 24, 2015, James Taylor 
> wrote:
> >>
> >>> True, good point. We can revert those right after we branch in prep
> for a
> >>> 4.4 release on 1.0.
> >>>
> >>> On Tuesday, March 24, 2015, Enis Söztutar  wrote:
> >>>
> 
>  On Tue, Mar 24, 2015 at 11:02 PM, James Taylor <
> jamestay...@apache.org>
>  wrote:
> 
> > The master branch already includes PHOENIX-1642, so we just keep it
> > there.  No need to revert anything or cherry-pick anything. Every
> > commit being done to 4.x-HBase-1.x is being done for master (that's
> > why it's just wasted overhead until it's needed).
> >
> 
>  Like the pom.xml version, and
>  https://issues.apache.org/jira/browse/PHOENIX-1766 have to be
> reverted
>  if we fork the 4.x-HBase-1.0 branch from master, no?
> 
> 
> >
> > Your plan sounds fine, except this step isn't necessary (and no
> revert
> > of anything currently in master is necessary):
> >  - After we fork 4.x-HBase-1.0, we cherry-pick PHOENIX-1642.
> >
> > Thanks,
> > James
> >
> > On Tue, Mar 24, 2015 at 10:53 PM, Enis Söztutar 
> > wrote:
> > > On Tue, Mar 24, 2015 at 5:09 PM, James Taylor <
> jamestay...@apache.org
> > >
> > > wrote:
> > >
> > >> I'm fine with a 4.4 release for HBase 1.0, but it depends on
> demand -
> > >> do our users need this?
> > >
> > >
> > > I think so. HBase-1.0 usage is picking up, and we already saw users
> > asking
> > > for it. Though as usual, everything depends on whether there is
> enough
> > > bandwidth to do the actual work (in terms of release, testing,
> > porting,
> > > etc).
> > >
> > >
> > >> I think doing that out of master will work and
> > >> we can create a branch for the release like we do with our other
> > >> releases.
> > >>
> > >> When sub tasks of PHOENIX-1501 are ready, I think we'd want to put
> > >> those in master and prior to that we'll need to create a
> > >> 4.x-HBase-1.0. So we'll save the overhead of maintaining duplicate
> > >> branches until that point.
> > >>
> > >> Make sense?
> > >>
> > >
> > > Forking 4.4 release for HBase-1.0 from master seems strange. We
> have
> > to
> > > revert back the version, and make sure that it is really identical
> to
> > the
> > > 4.x-HBase-0.98 branch except for PHOENIX-1642. However, if you
> think
> > that
> > > an extra branch is really a lot overhead, maybe we can do this:
> > >  - Delete 4.x-HBase-1.x now.
> > >  - Keep 4.x-HBase-0.98 and master branches.
> > >  - Fork 4.x-HBase-1.0 branch when whichever of these happens
> earlier:
> > > -- 4.4 is about to be released
> > > -- PHOENIX-1501, or PHOENIX-1681 or PHOENIX-1763 needs to be
> > committed.
> > >  - After we fork 4.x-HBase-1.0, we cherry-pick PHOENIX-1642.
> > >  - When PHOENIX-1501/PHOENIX-1681 and PHOENIX-1763 is ready and
> > HBase-1.1.0
> > > is released we can fork 1.1 branch.
> > >
> > > Will that work? I am up to doing the work as long as we have a
> plan.
> > >
> > > Enis
> > >
> > >
> > >>
> > >> On Tue, Mar 24, 2015 at 4:50 PM, Enis Söztutar 
> >

[jira] [Commented] (PHOENIX-1642) Make Phoenix Master Branch pointing to HBase1.0.0

2015-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381181#comment-14381181
 ] 

Hudson commented on PHOENIX-1642:
-

ABORTED: Integrated in Phoenix-master #638 (See 
[https://builds.apache.org/job/Phoenix-master/638/])
PHOENIX-1642 Make Phoenix Master Branch pointing to HBase1.0.0 - ADDENDUM for 
HBASE-13109 (enis: rev ad2ad0cefd5d19a9bc8434555a9ecbb55c78)
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/FilteredKeyValueScanner.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java


> Make Phoenix Master Branch pointing to HBase1.0.0
> -
>
> Key: PHOENIX-1642
> URL: https://issues.apache.org/jira/browse/PHOENIX-1642
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeffrey Zhong
>Assignee: Devaraj Das
> Fix For: 5.0.0, 4.4.0
>
> Attachments: 1642-1.txt, 1642-2.txt, 1642-toRemove.patch, 
> 1642-toRemove2.txt, PHOENIX-1642.patch, phoenix-1642_v3.patch
>
>
> As HBase1.0.0 will soon be released, the JIRA is to point Phoenix master 
> branch to HBase1.0.0 release. Once we reach consensus,  we could also port 
> the changes into Phoenix 4.0 branch as well which can be done in a separate 
> JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1661) Implement built-in functions for JSON

2015-03-25 Thread Saloni Udani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381428#comment-14381428
 ] 

Saloni Udani commented on PHOENIX-1661:
---

Hello James,
I am Saloni Udani, a final-year Computer Engineering student, and I would like 
to contribute to this project.
I have prior experience working with Java and Big Data projects. 
While analyzing the requirements of this project, I wanted to ask whether 
implementing a native JSON data type in Phoenix is also part of this 
project?

> Implement built-in functions for JSON
> -
>
> Key: PHOENIX-1661
> URL: https://issues.apache.org/jira/browse/PHOENIX-1661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>  Labels: JSON, Java, SQL, gsoc2015, mentor
> Attachments: PhoenixJSONSpecification-First-Draft.pdf
>
>
> Take a look at the JSON built-in functions that are implemented in Postgres 
> (http://www.postgresql.org/docs/9.3/static/functions-json.html) and implement 
> the same for Phoenix in Java following this guide: 
> http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html
> Examples of functions include ARRAY_TO_JSON, ROW_TO_JSON, TO_JSON, etc. The 
> implementation of these built-in functions will be impacted by how JSON is 
> stored in Phoenix. See PHOENIX-628. An initial implementation could work off 
> of a simple text-based JSON representation and then when a native JSON type 
> is implemented, they could be reworked to be more efficient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)