Re: Thinking of RC on Thursday

2015-04-21 Thread rajeshb...@apache.org
That's really great work, James. Thanks for pointing it out.

On Tue, Apr 21, 2015 at 11:47 AM, James Taylor 
wrote:

> Good list, Rajeshbabu. Thanks for starting the RC process. One more of
> note that's already in:
>
> - 7.5x performance improvement for non-aggregate, unordered queries
> (PHOENIX-1779).
>
> Thanks,
> James
>
> On Mon, Apr 20, 2015 at 2:02 PM, rajeshb...@apache.org
>  wrote:
> > That's good to have, Eli. I have marked 4.4.0 as the fix version for the JIRA.
> >
> > Thanks,
> > Rajeshbabu.
> >
> > On Tue, Apr 21, 2015 at 2:27 AM, Eli Levine  wrote:
> >
> >> Rajesh, I'm harboring hopes of getting PHOENIX-900 completed by
> Thursday.
> >> Hopefully it'll end up in 4.4. I'll keep you posted.
> >>
> >> Thanks
> >>
> >> Eli
> >>
> >> On Mon, Apr 20, 2015 at 1:42 PM, rajeshb...@apache.org <
> >> chrajeshbab...@gmail.com> wrote:
> >>
> >> > I'd like to propose we have the 4.4.0 RC on Thursday.
> >> > We have a lot of great stuff in 4.4.0 already:
> >> > - 60 bugs fixed (which includes fixes from 4.3.1)
> >> > - Spark integration
> >> > - Query server
> >> > - Union All support
> >> > - Pherf - a load tester that measures throughput
> >> > - Many math and date/time built-in functions
> >> > - MR job to populate indexes
> >> > - Support for HBase 1.0.x (create a new 4.4.0 branch for this)
> >> >
> >> > - PHOENIX-538 (Support UDFs) is very close.
> >> >
> >> > Are there any others that we should try to get in?
> >> >
> >> > Thanks,
> >> > Rajeshbabu.
> >> >
> >>
>


It's time to create branch for 1.0.x and/or 1.1.x

2015-04-21 Thread rajeshb...@apache.org
Hi Team,

I think this is a good time to create a 4.4 branch to work with HBase 1.0.x.

HBase 1.1 is also very close to release, so we can have a 4.4 branch to work
with it as well.
I expect minimal or no changes will be required for it.
That way we can release 4.4 against all three major versions of HBase.


What do you think?


Thanks,
Rajeshbabu.


[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-21 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504773#comment-14504773
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi,

We can also use Zipkin to visualize tracing data [1]. HTrace provides a
ZipkinSpanReceiver which converts spans to the Zipkin span format and sends them
to the Zipkin server [2]. HBase, on top of which Phoenix runs, uses HTrace, so
Zipkin could be used as well. What are your thoughts on using the Zipkin
distributed tracing system for visualizing Phoenix tracing information?

[1] https://issues.apache.org/jira/browse/PHOENIX-1119
[2] http://people.apache.org/~ndimiduk/site/book/tracing.html

Thanks.

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-21 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-1118:
--
Attachment: MockUp4-FlameGraph.png
MockUp3-PatternDetector.png
MockUp2-AdvanceSearch.png
MockUp1-TimeSlider.png

Attaching the Mock Up UIs for the Dashboard of Phoenix Tracing Information.

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
> Attachments: MockUp1-TimeSlider.png, MockUp2-AdvanceSearch.png, 
> MockUp3-PatternDetector.png, MockUp4-FlameGraph.png
>
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-21 Thread Ayola Jayamaha
Hi,

I created the Mock Up UIs for the Visualization of Phoenix Tracing
Information.
Below is a small description of each Mock Up UI.

1.
https://issues.apache.org/jira/secure/attachment/12726863/MockUp1-TimeSlider.png
This features a time slider. When you select a particular time period with the
slider, you can view its parent spans, dependency graphs, etc.

2.
https://issues.apache.org/jira/secure/attachment/12726864/MockUp2-AdvanceSearch.png

In this Mock Up UI you can do an advanced search based on attributes of the
trace.

3.
https://issues.apache.org/jira/secure/attachment/12726865/MockUp3-PatternDetector.png
In this Mock Up UI you can add new patterns to the system, and you can also
search for a previously added pattern.

4.
https://issues.apache.org/jira/secure/attachment/12726866/MockUp4-FlameGraph.png
An interactive Flame Chart can be viewed in this Mock Up UI.

Your feedback would be much appreciated.
Thanks.
Best Regards,
Nishani.

On Tue, Apr 21, 2015 at 4:35 PM, Nishani (JIRA)  wrote:

>
> [
> https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504773#comment-14504773
> ]
>
> Nishani  commented on PHOENIX-1118:
> ---
>
> Hi,
>
> We can also use Zipkin to visualize tracing data [1]. HTrace provides a
> ZipkinSpanReceiver which converts spans to the Zipkin span format and sends
> them to the Zipkin server [2]. HBase, on top of which Phoenix runs, uses
> HTrace, so Zipkin could be used as well. What are your thoughts on using the
> Zipkin distributed tracing system for visualizing Phoenix tracing information?
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-1119
> [2] http://people.apache.org/~ndimiduk/site/book/tracing.html
>
> Thanks.
>
> > Provide a tool for visualizing Phoenix tracing information
> > --
> >
> > Key: PHOENIX-1118
> > URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> > Project: Phoenix
> >  Issue Type: Sub-task
> >Reporter: James Taylor
> >Assignee: Nishani
> >  Labels: Java, SQL, Visualization, gsoc2015, mentor
> >
> > Currently there's no means of visualizing the trace information provided
> by Phoenix. We should provide some simple charting over our metrics tables.
> Take a look at the following JIRA for sample queries:
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>



-- 
Best Regards,
Ayola Jayamaha
http://ayolajayamaha.blogspot.com/


[jira] [Commented] (PHOENIX-1897) Use physical table name as key in top level map for MutationState

2015-04-21 Thread Serhiy Bilousov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505043#comment-14505043
 ] 

Serhiy Bilousov commented on PHOENIX-1897:
--

bq. for views, get the physical table name

I would suspect that in most cases a VIEW would be based on multiple tables
joined together.
In such a case, what is the "physical table"?

> Use physical table name as key in top level map for MutationState
> -
>
> Key: PHOENIX-1897
> URL: https://issues.apache.org/jira/browse/PHOENIX-1897
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>
> We're currently using TableRef as the key for the top level map in 
> MutationState for uncommitted data. With the addition of transactions, there 
> are times we pass a TableRef from a SELECT statement which may have an alias. 
> This forces us to work around this for the equality checks by creating a new 
> TableRef with a null alias.
> We really should be using the physical table name as a key instead. Updates 
> to views would naturally fold into the same set of updates which is what we 
> want. Also, for indexes, we should map back to the physical table name 
> through the following logic:
> - for global indexes: get the parent table name
> - for local or shared indexes, get the physical table name and extract the 
> physical table name from the name
> - for views, get the physical table name
> - for tables, get the physical table name



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505215#comment-14505215
 ] 

Nick Dimiduk commented on PHOENIX-1118:
---

I'm not sure how many folks have a Zipkin deployment, so I don't know how
widely useful integration there would be. Further, Zipkin's underlying tools are
kind of a mirror image of HTrace's, so its spans need to be converted from one
format to the other before loading them from the Phoenix table into a Zipkin
span host. I think it's better to stick with as few operational dependencies as
possible.

The HTrace project has been working on a web UI; I think a bunch of pieces will
be released in 3.2.0. There was a meeting [0] to discuss this topic a couple of
weeks ago. Since HTrace is already a dependency, maybe we can focus on making
the content of Phoenix's spans table available to that UI?

[0]: 
http://mail-archives.apache.org/mod_mbox/htrace-dev/201504.mbox/%3cca+qbeuny9f9zloutoiefb9lekzvbkmkwzrjw52_5-u8jka+...@mail.gmail.com%3e

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
> Attachments: MockUp1-TimeSlider.png, MockUp2-AdvanceSearch.png, 
> MockUp3-PatternDetector.png, MockUp4-FlameGraph.png
>
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505242#comment-14505242
 ] 

Nick Dimiduk commented on PHOENIX-1118:
---

Let's start with some simple objectives we can work toward. It sounds like a
web-based tool is preferred for exposing tracing data, so [~mujtabachohan]'s
comment about introducing a Jetty service makes sense. Your existing experience
with JavaScript/Angular will be great for this approach, and probably
complements well any integration with HTrace's own UI [0]. Maybe it's
worthwhile for you to send a mail to the HTrace dev list, introduce yourself and
your project, and solicit suggestions from folks who've been working on a
related problem.

For a starting deliverable/demo, I think something close to your "advanced
search" mockup looks like a good starting point. Provide a UI that lets the
user input a parent spanId, retrieve all the spans from the table, and render
them. Does that sound like a good first step to you? How does such an
end-to-end objective align with the GSoC milestone timelines?

Nice work [~nishani], keep at it.

[0]: 
https://git-wip-us.apache.org/repos/asf?p=incubator-htrace.git;a=tree;f=htrace-core/src/web;h=6726345592f41c905698d471ce5113d61adf81df;hb=HEAD
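
A minimal sketch of the lookup that first deliverable's backend would run,
assuming a Phoenix trace table named SYSTEM.TRACING_STATS with parent_id,
span_id, description, start_time and end_time columns (the table and column
names here are assumptions — adjust to the actual trace table schema):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public final class ParentSpanLookup {
    public static void main(String[] args) throws Exception {
        long parentSpanId = Long.parseLong(args[0]);
        // Table/column names below are illustrative assumptions.
        String sql = "SELECT span_id, description, start_time, end_time "
                + "FROM SYSTEM.TRACING_STATS WHERE parent_id = ?";
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, parentSpanId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Render each child span; a real UI would chart these.
                    System.out.printf("%d %s [%d, %d]%n",
                            rs.getLong(1), rs.getString(2), rs.getLong(3), rs.getLong(4));
                }
            }
        }
    }
}
{code}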

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
> Attachments: MockUp1-TimeSlider.png, MockUp2-AdvanceSearch.png, 
> MockUp3-PatternDetector.png, MockUp4-FlameGraph.png
>
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505248#comment-14505248
 ] 

stack commented on PHOENIX-1118:


bq. ...maybe we can focus on making the content of Phoenix's spans table 
available to that UI?

The HTrace UI is making progress. The implementation is still developing, so it
can be influenced to serve Phoenix's needs.

Zipkin is a heavy dependency that is not well supported and is cryptic to get
working (though when it does work, it is 'nice').

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
> Attachments: MockUp1-TimeSlider.png, MockUp2-AdvanceSearch.png, 
> MockUp3-PatternDetector.png, MockUp4-FlameGraph.png
>
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1873) Fix compilation errors in Pherf

2015-04-21 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505255#comment-14505255
 ] 

Cody Marcel commented on PHOENIX-1873:
--

The setters are used by JAXB to bind the XML; there are no explicit references
to those methods, but they are used. Perhaps we should relax this restriction?
TBH, I think Lars is correct, as suppressing the warning is probably not the
best approach.
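
To illustrate the point, here is a generic JAXB sketch (not the actual Pherf
classes — ScenarioConfig and its field are made up):

{code}
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "scenario")
public class ScenarioConfig {
    private String tableName;

    @XmlElement(name = "tableName")
    public String getTableName() { return tableName; }

    // No explicit caller in the code base: JAXB invokes this reflectively
    // during unmarshalling, so static analysis flags it as unused.
    public void setTableName(String tableName) { this.tableName = tableName; }

    public static void main(String[] args) throws Exception {
        Unmarshaller u = JAXBContext.newInstance(ScenarioConfig.class).createUnmarshaller();
        ScenarioConfig cfg = (ScenarioConfig) u.unmarshal(
                new StringReader("<scenario><tableName>T1</tableName></scenario>"));
        System.out.println(cfg.getTableName()); // prints T1
    }
}
{code}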

> Fix compilation errors in Pherf
> ---
>
> Key: PHOENIX-1873
> URL: https://issues.apache.org/jira/browse/PHOENIX-1873
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Cody Marcel
> Attachments: PHOENIX-1873.patch
>
>
> Please fix the compilation errors in Pherf. FYI, the default settings for 
> Eclipse can be found in dev/eclipse_prefs_phoenix.epf as described here: 
> http://phoenix.apache.org/contributing.html#Code_conventions
> {code}
> The method writeXML() from the type ConfigurationParserTest is never used locally
> ConfigurationParserTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 141  Java Problem
> The value of the field DataLoader.properties is not used
> DataLoader.java  /pherf/src/main/java/org/apache/phoenix/pherf/loaddata  line 61  Java Problem
> The value of the field DataLoaderTest.loader is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 34  Java Problem
> The value of the field DataLoaderTest.model is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 33  Java Problem
> The value of the field QueryExecutor.resultUtil is not used
> QueryExecutor.java  /pherf/src/main/java/org/apache/phoenix/pherf/workload  line 47  Java Problem
> The value of the field Result.type is not used
> Result.java  /pherf/src/main/java/org/apache/phoenix/pherf/result  line 30  Java Problem
> Type String[] of the last argument to method printRecord(Object...) doesn't exactly match the vararg parameter type. Cast to Object[] to confirm the non-varargs invocation, or pass individual arguments of type Object for a varargs invocation.
> CSVResultHandler.java  /pherf/src/main/java/org/apache/phoenix/pherf/result/impl  line 126  Java Problem
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: It's time to create branch for 1.0.x and/or 1.1.x

2015-04-21 Thread James Taylor
For a Phoenix release on HBase 1.1, I think it's important to complete
the work to be on supported HBase APIs (PHOENIX-1681, PHOENIX-1717).

Thanks,
James

On Tue, Apr 21, 2015 at 1:31 AM, rajeshb...@apache.org
 wrote:
> Hi Team,
>
> I think this is better time to create 4.4 branch to work with 1.0.x.
>
> Also HBase 1.1 also very close to release we can have 4.4 branch to work
> with it.
> I expect very minimal/no changes required for it.
> So that we can release 4.4 with all three major versions of HBase.
>
>
> What do you think?
>
>
> Thanks,
> Rajeshbabu.


[jira] [Updated] (PHOENIX-1873) Fix compilation errors in Pherf

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1873:
--
Attachment: PHOENIX-1873_v2.patch

The intention is to fix the compiler errors so that we don't have to field any
more questions on how to work around the fact that our project no longer
compiles with the documented IDE settings. This patch adds to yours,
[~cody.mar...@gmail.com]: it just adds getters for these private member
variables and fixes the other compiler error by using a local variable.

[~ndimiduk] is working on getting a pre-commit check in place that'll enforce
our settings at build and commit time, which will prevent us from getting into
this situation in the first place.

[~mujtabachohan] or [~cody.mar...@gmail.com] - please review and commit if this 
looks ok.

> Fix compilation errors in Pherf
> ---
>
> Key: PHOENIX-1873
> URL: https://issues.apache.org/jira/browse/PHOENIX-1873
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Cody Marcel
> Attachments: PHOENIX-1873.patch, PHOENIX-1873_v2.patch
>
>
> Please fix the compilation errors in Pherf. FYI, the default settings for 
> Eclipse can be found in dev/eclipse_prefs_phoenix.epf as described here: 
> http://phoenix.apache.org/contributing.html#Code_conventions
> {code}
> The method writeXML() from the type ConfigurationParserTest is never used locally
> ConfigurationParserTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 141  Java Problem
> The value of the field DataLoader.properties is not used
> DataLoader.java  /pherf/src/main/java/org/apache/phoenix/pherf/loaddata  line 61  Java Problem
> The value of the field DataLoaderTest.loader is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 34  Java Problem
> The value of the field DataLoaderTest.model is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 33  Java Problem
> The value of the field QueryExecutor.resultUtil is not used
> QueryExecutor.java  /pherf/src/main/java/org/apache/phoenix/pherf/workload  line 47  Java Problem
> The value of the field Result.type is not used
> Result.java  /pherf/src/main/java/org/apache/phoenix/pherf/result  line 30  Java Problem
> Type String[] of the last argument to method printRecord(Object...) doesn't exactly match the vararg parameter type. Cast to Object[] to confirm the non-varargs invocation, or pass individual arguments of type Object for a varargs invocation.
> CSVResultHandler.java  /pherf/src/main/java/org/apache/phoenix/pherf/result/impl  line 126  Java Problem
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking of RC on Thursday

2015-04-21 Thread James Taylor
You're welcome (and Samarth did the work). Thanks,

James

On Tue, Apr 21, 2015 at 1:19 AM, rajeshb...@apache.org
 wrote:
> That's really great work, James. Thanks for pointing it out.
>
> On Tue, Apr 21, 2015 at 11:47 AM, James Taylor 
> wrote:
>
>> Good list, Rajeshbabu. Thanks for starting the RC process. One more of
>> note that's already in:
>>
>> - 7.5x performance improvement for non-aggregate, unordered queries
>> (PHOENIX-1779).
>>
>> Thanks,
>> James
>>
>> On Mon, Apr 20, 2015 at 2:02 PM, rajeshb...@apache.org
>>  wrote:
>> > That's good to have, Eli. I have marked 4.4.0 as the fix version for the JIRA.
>> >
>> > Thanks,
>> > Rajeshbabu.
>> >
>> > On Tue, Apr 21, 2015 at 2:27 AM, Eli Levine  wrote:
>> >
>> >> Rajesh, I'm harboring hopes of getting PHOENIX-900 completed by
>> Thursday.
>> >> Hopefully it'll end up in 4.4. I'll keep you posted.
>> >>
>> >> Thanks
>> >>
>> >> Eli
>> >>
>> >> On Mon, Apr 20, 2015 at 1:42 PM, rajeshb...@apache.org <
>> >> chrajeshbab...@gmail.com> wrote:
>> >>
>> >> > I'd like to propose we have the 4.4.0 RC on Thursday.
>> >> > We have a lot of great stuff in 4.4.0 already:
>> >> > - 60 bugs fixed (which includes fixes from 4.3.1)
>> >> > - Spark integration
>> >> > - Query server
>> >> > - Union All support
>> >> > - Pherf - a load tester that measures throughput
>> >> > - Many math and date/time built-in functions
>> >> > - MR job to populate indexes
>> >> > - Support for HBase 1.0.x (create a new 4.4.0 branch for this)
>> >> >
>> >> > - PHOENIX-538 (Support UDFs) is very close.
>> >> >
>> >> > Are there any others that we should try to get in?
>> >> >
>> >> > Thanks,
>> >> > Rajeshbabu.
>> >> >
>> >>
>>


[jira] [Commented] (PHOENIX-331) Hive Storage

2015-04-21 Thread nicolas maillard (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505436#comment-14505436
 ] 

nicolas maillard commented on PHOENIX-331:
--

Yes, that makes sense, and I wanted to get a couple more unit tests in there.
Sorry for the trouble; I'll do a pull request ASAP.

> Hive Storage
> 
>
> Key: PHOENIX-331
> URL: https://issues.apache.org/jira/browse/PHOENIX-331
> Project: Phoenix
>  Issue Type: Task
>Reporter: nicolas maillard
>Assignee: nicolas maillard
>  Labels: enhancement
> Attachments: PHOENIX-331.patch
>
>
> I see a Pig storage handler has been added; it would be a great feature to
> have a Hive one as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1813) Support reading your own writes

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1813.
---
Resolution: Fixed

> Support reading your own writes
> ---
>
> Key: PHOENIX-1813
> URL: https://issues.apache.org/jira/browse/PHOENIX-1813
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>
> Given our transaction support, we can read our own writes by sending any 
> pending updates to the server (MutationState.send()) prior to executing a 
> query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1897) Use physical table name as key in top level map for MutationState

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505459#comment-14505459
 ] 

James Taylor commented on PHOENIX-1897:
---

[~sergey.b] - a VIEW over multiple tables is currently not supported, but even
if it were, it would not be updatable, so this would not be relevant. The
MutationState is an internal object that tracks the state of uncommitted rows.

> Use physical table name as key in top level map for MutationState
> -
>
> Key: PHOENIX-1897
> URL: https://issues.apache.org/jira/browse/PHOENIX-1897
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>
> We're currently using TableRef as the key for the top level map in 
> MutationState for uncommitted data. With the addition of transactions, there 
> are times we pass a TableRef from a SELECT statement which may have an alias. 
> This forces us to work around this for the equality checks by creating a new 
> TableRef with a null alias.
> We really should be using the physical table name as a key instead. Updates 
> to views would naturally fold into the same set of updates which is what we 
> want. Also, for indexes, we should map back to the physical table name 
> through the following logic:
> - for global indexes: get the parent table name
> - for local or shared indexes, get the physical table name and extract the 
> physical table name from the name
> - for views, get the physical table name
> - for tables, get the physical table name
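
A hedged sketch of that key-resolution logic (the PTable accessors are real,
but the shared-index name extraction is elided and the method itself is
illustrative, not the committed change):

{code}
import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.schema.PTable.IndexType;
import org.apache.phoenix.schema.PTableType;

final class MutationKeyResolver {
    // Resolve the MutationState map key to a physical table name.
    static String physicalKeyFor(PTable table) {
        if (table.getType() == PTableType.INDEX) {
            if (table.getIndexType() == IndexType.LOCAL) {
                // Local/shared index: the data table name would be extracted
                // from the index's physical table name (extraction elided here).
                return table.getPhysicalName().getString();
            }
            // Global index: key on the parent (data) table's name.
            return table.getParentName().getString();
        }
        // Views and plain tables: key on the physical table name.
        return table.getPhysicalName().getString();
    }

    private MutationKeyResolver() {}
}
{code}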



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1873) Fix compilation errors in Pherf

2015-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505488#comment-14505488
 ] 

Nick Dimiduk commented on PHOENIX-1873:
---

bq. Nick Dimiduk is working on getting a pre-commit check in place that'll
enforce our settings at build and commit time, which will prevent us from
getting into this situation in the first place.

Yes, that's true. For that change, I intend to integrate the Checkstyle plugin
into our Maven build for catching things like unreferenced local variables and
so on. Once that change is in, we should remove the IDE-specific stuff, because
Maven is the project build tool, not Eclipse.

> Fix compilation errors in Pherf
> ---
>
> Key: PHOENIX-1873
> URL: https://issues.apache.org/jira/browse/PHOENIX-1873
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Cody Marcel
> Attachments: PHOENIX-1873.patch, PHOENIX-1873_v2.patch
>
>
> Please fix the compilation errors in Pherf. FYI, the default settings for 
> Eclipse can be found in dev/eclipse_prefs_phoenix.epf as described here: 
> http://phoenix.apache.org/contributing.html#Code_conventions
> {code}
> The method writeXML() from the type ConfigurationParserTest is never used locally
> ConfigurationParserTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 141  Java Problem
> The value of the field DataLoader.properties is not used
> DataLoader.java  /pherf/src/main/java/org/apache/phoenix/pherf/loaddata  line 61  Java Problem
> The value of the field DataLoaderTest.loader is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 34  Java Problem
> The value of the field DataLoaderTest.model is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 33  Java Problem
> The value of the field QueryExecutor.resultUtil is not used
> QueryExecutor.java  /pherf/src/main/java/org/apache/phoenix/pherf/workload  line 47  Java Problem
> The value of the field Result.type is not used
> Result.java  /pherf/src/main/java/org/apache/phoenix/pherf/result  line 30  Java Problem
> Type String[] of the last argument to method printRecord(Object...) doesn't exactly match the vararg parameter type. Cast to Object[] to confirm the non-varargs invocation, or pass individual arguments of type Object for a varargs invocation.
> CSVResultHandler.java  /pherf/src/main/java/org/apache/phoenix/pherf/result/impl  line 126  Java Problem
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1873) Fix compilation errors in Pherf

2015-04-21 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505505#comment-14505505
 ] 

Cody Marcel commented on PHOENIX-1873:
--

Are you convinced this patch is what we should do? The alternative is changing
the preferences file to relax this constraint. It's not a compile error; it's a
code style thing. There are legitimate reasons the setters are unreferenced.

> Fix compilation errors in Pherf
> ---
>
> Key: PHOENIX-1873
> URL: https://issues.apache.org/jira/browse/PHOENIX-1873
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Cody Marcel
> Attachments: PHOENIX-1873.patch, PHOENIX-1873_v2.patch
>
>
> Please fix the compilation errors in Pherf. FYI, the default settings for 
> Eclipse can be found in dev/eclipse_prefs_phoenix.epf as described here: 
> http://phoenix.apache.org/contributing.html#Code_conventions
> {code}
> The method writeXML() from the type ConfigurationParserTest is never used locally
> ConfigurationParserTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 141  Java Problem
> The value of the field DataLoader.properties is not used
> DataLoader.java  /pherf/src/main/java/org/apache/phoenix/pherf/loaddata  line 61  Java Problem
> The value of the field DataLoaderTest.loader is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 34  Java Problem
> The value of the field DataLoaderTest.model is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 33  Java Problem
> The value of the field QueryExecutor.resultUtil is not used
> QueryExecutor.java  /pherf/src/main/java/org/apache/phoenix/pherf/workload  line 47  Java Problem
> The value of the field Result.type is not used
> Result.java  /pherf/src/main/java/org/apache/phoenix/pherf/result  line 30  Java Problem
> Type String[] of the last argument to method printRecord(Object...) doesn't exactly match the vararg parameter type. Cast to Object[] to confirm the non-varargs invocation, or pass individual arguments of type Object for a varargs invocation.
> CSVResultHandler.java  /pherf/src/main/java/org/apache/phoenix/pherf/result/impl  line 126  Java Problem
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1873) Fix compilation errors in Pherf

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505525#comment-14505525
 ] 

James Taylor commented on PHOENIX-1873:
---

Yes, I'm convinced. There are good reasons to have strict checks in place.

> Fix compilation errors in Pherf
> ---
>
> Key: PHOENIX-1873
> URL: https://issues.apache.org/jira/browse/PHOENIX-1873
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Cody Marcel
> Attachments: PHOENIX-1873.patch, PHOENIX-1873_v2.patch
>
>
> Please fix the compilation errors in Pherf. FYI, the default settings for 
> Eclipse can be found in dev/eclipse_prefs_phoenix.epf as described here: 
> http://phoenix.apache.org/contributing.html#Code_conventions
> {code}
> The method writeXML() from the type ConfigurationParserTest is never used locally
> ConfigurationParserTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 141  Java Problem
> The value of the field DataLoader.properties is not used
> DataLoader.java  /pherf/src/main/java/org/apache/phoenix/pherf/loaddata  line 61  Java Problem
> The value of the field DataLoaderTest.loader is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 34  Java Problem
> The value of the field DataLoaderTest.model is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 33  Java Problem
> The value of the field QueryExecutor.resultUtil is not used
> QueryExecutor.java  /pherf/src/main/java/org/apache/phoenix/pherf/workload  line 47  Java Problem
> The value of the field Result.type is not used
> Result.java  /pherf/src/main/java/org/apache/phoenix/pherf/result  line 30  Java Problem
> Type String[] of the last argument to method printRecord(Object...) doesn't exactly match the vararg parameter type. Cast to Object[] to confirm the non-varargs invocation, or pass individual arguments of type Object for a varargs invocation.
> CSVResultHandler.java  /pherf/src/main/java/org/apache/phoenix/pherf/result/impl  line 126  Java Problem
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1835) Adjust MetaDataEndPointImpl timestamps if table is transactional

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503833#comment-14503833
 ] 

James Taylor edited comment on PHOENIX-1835 at 4/21/15 7:31 PM:


FYI, Tephra doesn't support reads over multiple versions currently (see 
TEPHRA-88), so until they do we should just raise an exception if a 
transactional table is queried when there's a CURRENT_SCN specified on the 
connection. For now, let's just implement PHOENIX-1898 to detect and throw 
instead.


was (Author: jamestaylor):
FYI, Tephra doesn't support reads over multiple versions currently (see 
TEPHRA-88), so until they do we should just raise an exception if a 
transactional table is queried when there's a CURRENT_SCN specified on the 
connection.

> Adjust MetaDataEndPointImpl timestamps if table is transactional
> 
>
> Key: PHOENIX-1835
> URL: https://issues.apache.org/jira/browse/PHOENIX-1835
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>
> Phoenix correlates table metadata with the table data based on timestamp. 
> Since Tephra is adjusting timestamps for the data, we need to do the same for 
> the metadata operations (which aren't transactional through Tephra). Take a 
> look at MetaDataEndPointImpl and the MetaDataMutationResult where we return 
> the server timestamp (i.e. MetaDataMutationResult.getTable() for example). 
> This timestamp should be run through the TransactionUtil.translateTimestamp()
> method.
> Add a point-in-time test with a table being altered, but your connection 
> being before that time (with CURRENT_SCN) as a test. We'll need to make sure 
> the Puts to the SYSTEM.CATALOG get timestamped correctly (but I think the 
> above will cause that).
> Also, my other hack in PostDDLCompiler, should not be necessary after this:
> {code}
> // FIXME: DDL operations aren't transactional, so 
> we're basing the timestamp on a server timestamp.
> // Not sure what the fix should be. We don't need 
> conflict detection nor filtering of invalid transactions
> // in this case, so maybe this is ok.
> if (tableRef.getTable().isTransactional()) {
> ts = TransactionUtil.translateMillis(ts);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1831) Ensure that a point delete from the data table ends up as a point delete in the index table

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505571#comment-14505571
 ] 

James Taylor commented on PHOENIX-1831:
---

This has been fixed, but we still need a test to verify it.

> Ensure that a point delete from the data table ends up as a point delete in 
> the index table
> ---
>
> Key: PHOENIX-1831
> URL: https://issues.apache.org/jira/browse/PHOENIX-1831
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> Currently Phoenix only issues row deletes and column deletes, but never point 
> deletes. With Tephra, once row deletes are handled, we're going to have to 
> handle point deletes correctly. These will occur when a transaction needs to 
> be aborted after a conflict is detected. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1898) Throw error if CURRENT_SCN is set on connection and an attempt is made to start a transaction

2015-04-21 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1898:
-

 Summary: Throw error if CURRENT_SCN is set on connection and an 
attempt is made to start a transaction
 Key: PHOENIX-1898
 URL: https://issues.apache.org/jira/browse/PHOENIX-1898
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Thomas D'Silva


Until Tephra supports multiple cell versions 
(https://issues.cask.co/browse/TEPHRA-88), we should throw an exception on an 
attempt to start a transaction if connection.getSCN() is not null.
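
A minimal sketch of the proposed guard (the exception text is illustrative; the
real change would presumably go through Phoenix's SQLExceptionCode machinery):

{code}
import java.sql.SQLException;
import org.apache.phoenix.jdbc.PhoenixConnection;

final class ScnTransactionGuard {
    // Called before a transaction is started on the connection.
    static void checkTransactionAllowed(PhoenixConnection connection) throws SQLException {
        if (connection.getSCN() != null) {
            throw new SQLException(
                    "Cannot start a transaction when CURRENT_SCN is set on the connection");
        }
    }

    private ScnTransactionGuard() {}
}
{code}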



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1880) Connections from QueryUtil.getConnection don't work on secure clusters

2015-04-21 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505646#comment-14505646
 ] 

Gabriel Reid commented on PHOENIX-1880:
---

Patch looks good to me. 

[~gjacoby] have you been able to test this on a secure cluster? I don't have 
one available to test on right now.
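
For reference, the fix idea described in the issue below amounts to something
like this (a sketch only — the helper name and where it would live in QueryUtil
are assumptions):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;

public final class SecureConnectionSketch {
    // Copy every entry of the Hadoop Configuration into the Properties that
    // are handed to the Phoenix JDBC driver, so security settings survive.
    static Properties withConfiguration(Configuration conf, Properties base) {
        Properties props = new Properties();
        props.putAll(base);
        for (Map.Entry<String, String> entry : conf) { // Configuration is Iterable
            props.setProperty(entry.getKey(), entry.getValue());
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Properties props = withConfiguration(conf, new Properties());
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            System.out.println(conn.getMetaData().getDatabaseProductName());
        }
    }
}
{code}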

> Connections from QueryUtil.getConnection don't work on secure clusters
> --
>
> Key: PHOENIX-1880
> URL: https://issues.apache.org/jira/browse/PHOENIX-1880
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0, 4.4.0
>Reporter: Geoffrey Jacoby
>  Labels: patch
> Fix For: 4.4.0
>
> Attachments: PHOENIX-1880.patch
>
>
> QueryUtil.getConnection(Configuration) and 
> QueryUtil.getConnection(Properties, Configuration) both only take the 
> zookeeper quorum from the Configuration, and drop any other properties on the 
> config object. In order to connect to secure HBase clusters, more properties 
> are needed. This is a similar problem to PHOENIX-1078, and the likely fix is 
> similar: copy the configuration parameters into the Properties object before 
> using it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1888) [build] add pre-commit scripts

2015-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505675#comment-14505675
 ] 

Nick Dimiduk commented on PHOENIX-1888:
---

[~jfarrell] says he's hooked up phoenix to be included in pre-commit checking. 
Let me see what's broken.

Also, I just learned about HADOOP-11746, so maybe we'll fork Hadoop's fancy new 
scripts rather than HBase's.

> [build] add pre-commit scripts
> --
>
> Key: PHOENIX-1888
> URL: https://issues.apache.org/jira/browse/PHOENIX-1888
> Project: Phoenix
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 1888.patch
>
>
> Let's get [PreCommit-PHOENIX-Build 
> |https://builds.apache.org/job/PreCommit-PHOENIX-Build/] working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1878) Implement PhoenixSchema and PhoenixTable in Phoenix/Calcite Integration

2015-04-21 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505727#comment-14505727
 ] 

Julian Hyde commented on PHOENIX-1878:
--

PhoenixSchema.getTableNames is being called because the schema's cache is 
enabled. The first time you ask for a table, it will build a cache with all 
tables, and for that it needs all of the keys.

In theory, if you disable the cache by calling 
{{SchemaPlus.setCacheEnabled(false)}} or by specifying {{cache: false}} in your 
model (in e.g. CalciteTest.connectUsingModel), then getTableNames will not be 
called. However, due to CALCITE-690 it seems to be always called.
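
For example, disabling the cache programmatically on a Calcite connection looks
roughly like this (a sketch against the plain Calcite JDBC API; how the Phoenix
schema is registered is omitted):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import org.apache.calcite.jdbc.CalciteConnection;
import org.apache.calcite.schema.SchemaPlus;

public final class DisableSchemaCache {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("lex", "JAVA");
        try (Connection conn = DriverManager.getConnection("jdbc:calcite:", props)) {
            CalciteConnection calciteConn = conn.unwrap(CalciteConnection.class);
            SchemaPlus rootSchema = calciteConn.getRootSchema();
            // Turn off caching so getTableNames() is not needed to warm a cache
            // (subject to the CALCITE-690 caveat above).
            rootSchema.setCacheEnabled(false);
        }
    }
}
{code}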

> Implement PhoenixSchema and PhoenixTable in Phoenix/Calcite Integration
> ---
>
> Key: PHOENIX-1878
> URL: https://issues.apache.org/jira/browse/PHOENIX-1878
> Project: Phoenix
>  Issue Type: Task
>Reporter: Maryann Xue
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1830) Transactional mutable secondary indexes

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1830.
---
Resolution: Fixed

> Transactional mutable secondary indexes
> ---
>
> Key: PHOENIX-1830
> URL: https://issues.apache.org/jira/browse/PHOENIX-1830
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-1830-wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1831) Ensure that a point delete from the data table ends up as a point delete in the index table

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1831.
---
Resolution: Fixed

> Ensure that a point delete from the data table ends up as a point delete in 
> the index table
> ---
>
> Key: PHOENIX-1831
> URL: https://issues.apache.org/jira/browse/PHOENIX-1831
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> Currently Phoenix only issues row deletes and column deletes, but never point 
> deletes. With Tephra, once row deletes are handled, we're going to have to 
> handle point deletes correctly. These will occur when a transaction needs to 
> be aborted after a conflict is detected. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1899) Performance regression for non-aggregate, unordered queries returning 0 or few records

2015-04-21 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-1899:
-

 Summary: Performance regression for non-aggregate, unordered 
queries returning 0 or few records
 Key: PHOENIX-1899
 URL: https://issues.apache.org/jira/browse/PHOENIX-1899
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain


Details here:

Apache Phoenix Performance
Suite: standard
Comparison: 
v4.1.0,0.98.7-hadoop2;v4.2.2,0.98.7-hadoop2;v4.3.0,0.98.7-hadoop2;4.x-HBase-0.98,0.98.7-hadoop2
Results: http://phoenix-bin.github.io/client/performance/latest.htm
History: http://phoenix-bin.github.io/client/performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1900) Increase testing around transaction integration

2015-04-21 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1900:
-

 Summary: Increase testing around transaction integration
 Key: PHOENIX-1900
 URL: https://issues.apache.org/jira/browse/PHOENIX-1900
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Thomas D'Silva


Read-your-own-writes testing:
- UPSERT SELECT when there's uncommitted data being selected.
- Aggregate queries when there's uncommitted data (see FIXME in
TransactionIT.testDelete).
- Mix of transactional and non-transactional data (ensure that non-transactional
data is not accidentally written).

Secondary indexes:
- Ensure writes to local/global mutable/immutable indexes are undone correctly
and the index is left in a valid state when transactions overlap. In particular,
we'll want to test the case where the index data has already been written to
HBase and point deletes are required for an index row. For example, if a covered
column has an existing value that's updated, a point delete would be required to
get rid of it and make the earlier covered column value visible again.
- Ensure that, in the event of a failure that cannot be aborted by the client,
index rows are correctly filtered when the index is used in a query.






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1879) Provide prepare statement hooks for Phoenix/Calcite Integration

2015-04-21 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505785#comment-14505785
 ] 

Julian Hyde commented on PHOENIX-1879:
--

In your driver, override org.apache.calcite.jdbc.Driver.createPrepareFactory() 
to return [a function that returns] your own sub-class of CalcitePrepareImpl 
called say PhoenixPrepareImpl. In PhoenixPrepareImpl, you can override methods 
such as createPlanner.

The RelOptCluster is created before createPlanner is called, but you can call 
its setMetadataProvider method in your override of createPlanner.
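
A hedged sketch of that wiring (class names other than the Calcite ones are
illustrative, and the exact createPlanner signature to override depends on the
Calcite version in use):

{code}
import org.apache.calcite.jdbc.CalcitePrepare;
import org.apache.calcite.jdbc.Driver;
import org.apache.calcite.linq4j.function.Function0;
import org.apache.calcite.prepare.CalcitePrepareImpl;

public class PhoenixCalciteDriver extends Driver {
    @Override
    protected Function0<CalcitePrepare> createPrepareFactory() {
        // Supply our own prepare implementation instead of the default one.
        return new Function0<CalcitePrepare>() {
            @Override
            public CalcitePrepare apply() {
                return new PhoenixPrepareImpl();
            }
        };
    }
}

class PhoenixPrepareImpl extends CalcitePrepareImpl {
    // Override createPlanner(...) here to add/remove RelOptRules and call
    // RelOptCluster.setMetadataProvider(...) with a PhoenixRelMetadataProvider.
}
{code}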

> Provide prepare statement hooks for Phoenix/Calcite Integration
> ---
>
> Key: PHOENIX-1879
> URL: https://issues.apache.org/jira/browse/PHOENIX-1879
> Project: Phoenix
>  Issue Type: Task
>Reporter: Maryann Xue
>
> This is where we can setup RelOptCluster with a PhoenixRelMetadataProvider, 
> add/or remove RelOptRules, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1901) Performance test transaction implementation

2015-04-21 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1901:
-

 Summary: Performance test transaction implementation
 Key: PHOENIX-1901
 URL: https://issues.apache.org/jira/browse/PHOENIX-1901
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Thomas D'Silva


Placeholder for performance work to validate that the transaction implementation
scales well:
- The first set of tests can be to run the regression suite with the default set
to make tables transactional (compared against a standard run).
- The second test would be to use Pherf for a standard platform workload.
- The third test would be to use a workload with mutable data to see how well
transactions scale.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1902) Do not perform conflict detection for append-only tables

2015-04-21 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1902:
-

 Summary: Do not perform conflict detection for append-only tables
 Key: PHOENIX-1902
 URL: https://issues.apache.org/jira/browse/PHOENIX-1902
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Thomas D'Silva


When a table is declared as write-once/append-only (IMMUTABLE_ROWS=true), we
should disable the conflict detection done by Tephra, as there can be no
conflicts. This is a much lighter-weight model that relies on Tephra mainly to:
- filter rows for failed (and unabortable) transactions.
- not show transactional data until it has successfully been committed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1903) bin scripts locate wrong client jar

2015-04-21 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created PHOENIX-1903:
-

 Summary: bin scripts locate wrong client jar
 Key: PHOENIX-1903
 URL: https://issues.apache.org/jira/browse/PHOENIX-1903
 Project: Phoenix
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk


After PHOENIX-971, the bin scripts like sqlline.py will sometimes produce a 
classpath containing thin-client.jar instead of client.jar. Sometimes, because 
os.walk makes no ordering guarantees and the implementation appears 
os-dependent. Fix this so matching works consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1903) bin scripts locate wrong client jar

2015-04-21 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated PHOENIX-1903:
--
Attachment: 1903.patch

Simple patch that sorts the files before matching them against the match string.
It happens that the "thick" jar name comes before the "thin" jar name
alphabetically, so this will work.

> bin scripts locate wrong client jar
> ---
>
> Key: PHOENIX-1903
> URL: https://issues.apache.org/jira/browse/PHOENIX-1903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 1903.patch
>
>
> After PHOENIX-971, the bin scripts like sqlline.py will sometimes produce a 
> classpath containing thin-client.jar instead of client.jar. Sometimes, 
> because os.walk makes no ordering guarantees and the implementation appears 
> os-dependent. Fix this so matching works consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread Ivan Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505890#comment-14505890
 ] 

Ivan Weiss commented on PHOENIX-1682:
-

[~giacomotaylor] What should I do to move this pull request forward?

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.
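
An illustrative sketch of the fix idea (not the submitted patch; it also ignores
dots inside quoted identifiers):

{code}
import org.apache.phoenix.util.SchemaUtil;

final class TableNameNormalizer {
    // Normalize schema and table components separately so that a
    // case-sensitive name like schemaName."tableName" is preserved.
    static String normalizeFullName(String name) {
        int idx = name.indexOf('.');
        if (idx < 0) {
            return SchemaUtil.normalizeIdentifier(name);
        }
        String schemaName = SchemaUtil.normalizeIdentifier(name.substring(0, idx));
        String tableName = SchemaUtil.normalizeIdentifier(name.substring(idx + 1));
        return schemaName + "." + tableName;
    }

    private TableNameNormalizer() {}
}
{code}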



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505902#comment-14505902
 ] 

James Taylor commented on PHOENIX-1682:
---

bq. What should I do to move this pull request forward?
You just did it. :-) Patch looks good. I'll get this committed to 4.x and 
master shortly. Thanks so much for the contribution, [~ivanweiss].

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-505) Optimize MIN(col) when col leads row key

2015-04-21 Thread Dongming Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505909#comment-14505909
 ] 

Dongming Liang commented on PHOENIX-505:


Will work on this issue.

> Optimize MIN(col) when col leads row key
> 
>
> Key: PHOENIX-505
> URL: https://issues.apache.org/jira/browse/PHOENIX-505
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>
> We should be able to change an aggregate query to a scan with LIMIT 1 if the 
> only aggregation is MIN(value) when the index is on value ASC and MAX(value) 
> when the index is on value DESC. Might be able to optimize the reverse too, 
> by finding the largest possible value, but need to test this.
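
For illustration, the rewrite this issue targets, expressed as two Phoenix
queries over JDBC (table T and column V are made-up names; assume the row key
leads with V ascending):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class MinLeadingKeySketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Aggregate form that the optimizer would rewrite:
            try (ResultSet rs = stmt.executeQuery("SELECT MIN(V) FROM T")) {
                if (rs.next()) System.out.println("min = " + rs.getString(1));
            }
            // Equivalent scan-with-limit form when V leads the row key ASC
            // (the ORDER BY should be a no-op for that key order):
            try (ResultSet rs = stmt.executeQuery("SELECT V FROM T ORDER BY V LIMIT 1")) {
                if (rs.next()) System.out.println("min = " + rs.getString(1));
            }
        }
    }
}
{code}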



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-505) Optimize MIN(col) when col leads row key

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-505:
-
Assignee: Dongming Liang

> Optimize MIN(col) when col leads row key
> 
>
> Key: PHOENIX-505
> URL: https://issues.apache.org/jira/browse/PHOENIX-505
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Dongming Liang
>
> We should be able to change an aggregate query to a scan with LIMIT 1 if the 
> only aggregation is MIN(value) when the index is on value ASC and MAX(value) 
> when the index is on value DESC. Might be able to optimize the reverse too, 
> by finding the largest possible value, but need to test this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1903) bin scripts locate wrong client jar

2015-04-21 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated PHOENIX-1903:
--
Fix Version/s: 4.4.0
   5.0.0

> bin scripts locate wrong client jar
> ---
>
> Key: PHOENIX-1903
> URL: https://issues.apache.org/jira/browse/PHOENIX-1903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.4.0
>
> Attachments: 1903.patch
>
>
> After PHOENIX-971, the bin scripts like sqlline.py will sometimes produce a 
> classpath containing thin-client.jar instead of client.jar. Sometimes, 
> because os.walk makes no ordering guarantees and the implementation appears 
> os-dependent. Fix this so matching works consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1873) Fix compilation errors in Pherf

2015-04-21 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505950#comment-14505950
 ] 

Mujtaba Chohan commented on PHOENIX-1873:
-

Patch LGTM.

> Fix compilation errors in Pherf
> ---
>
> Key: PHOENIX-1873
> URL: https://issues.apache.org/jira/browse/PHOENIX-1873
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Cody Marcel
> Attachments: PHOENIX-1873.patch, PHOENIX-1873_v2.patch
>
>
> Please fix the compilation errors in Pherf. FYI, the default settings for 
> Eclipse can be found in dev/eclipse_prefs_phoenix.epf as described here: 
> http://phoenix.apache.org/contributing.html#Code_conventions
> {code}
> The method writeXML() from the type ConfigurationParserTest is never used locally
> ConfigurationParserTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 141  Java Problem
> The value of the field DataLoader.properties is not used
> DataLoader.java  /pherf/src/main/java/org/apache/phoenix/pherf/loaddata  line 61  Java Problem
> The value of the field DataLoaderTest.loader is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 34  Java Problem
> The value of the field DataLoaderTest.model is not used
> DataLoaderTest.java  /pherf/src/test/java/org/apache/phoenix/pherf  line 33  Java Problem
> The value of the field QueryExecutor.resultUtil is not used
> QueryExecutor.java  /pherf/src/main/java/org/apache/phoenix/pherf/workload  line 47  Java Problem
> The value of the field Result.type is not used
> Result.java  /pherf/src/main/java/org/apache/phoenix/pherf/result  line 30  Java Problem
> Type String[] of the last argument to method printRecord(Object...) doesn't exactly match the vararg parameter type. Cast to Object[] to confirm the non-varargs invocation, or pass individual arguments of type Object for a varargs invocation.
> CSVResultHandler.java  /pherf/src/main/java/org/apache/phoenix/pherf/result/impl  line 126  Java Problem
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1903) bin scripts locate wrong client jar

2015-04-21 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505975#comment-14505975
 ] 

Devaraj Das commented on PHOENIX-1903:
--

+1

> bin scripts locate wrong client jar
> ---
>
> Key: PHOENIX-1903
> URL: https://issues.apache.org/jira/browse/PHOENIX-1903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.4.0
>
> Attachments: 1903.patch
>
>
> After PHOENIX-971, the bin scripts like sqlline.py will sometimes produce a 
> classpath containing thin-client.jar instead of client.jar. Sometimes, 
> because os.walk makes no ordering guarantees and the implementation appears 
> os-dependent. Fix this so matching works consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1904) bin scripts use incorrect variable for locating hbase conf

2015-04-21 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created PHOENIX-1904:
-

 Summary: bin scripts use incorrect variable for locating hbase 
conf
 Key: PHOENIX-1904
 URL: https://issues.apache.org/jira/browse/PHOENIX-1904
 Project: Phoenix
  Issue Type: Bug
Reporter: Nick Dimiduk


in bin/phoenix_utils.py, we're building a classpath based on an incorrect 
environment variable. 'HBASE_CONF_PATH' is used:

https://github.com/apache/phoenix/blob/master/bin/phoenix_utils.py#L68

 While bigtop is registering 'HBASE_CONF_DIR' for us.

https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hbase/hbase.default#L17

There's even a local work-around for this problem in end2endTest.py

https://github.com/apache/phoenix/blob/master/bin/end2endTest.py#L37



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1904) bin scripts use incorrect variable for locating hbase conf

2015-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506012#comment-14506012
 ] 

Nick Dimiduk commented on PHOENIX-1904:
---

I think we should fix phoenix_utils.py to reference HBASE_CONF_DIR. If we must 
maintain backward compatibility, we can check HBASE_CONF_PATH first, then 
HBASE_CONF_DIR, and then default to pwd. 
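
A minimal sketch of that lookup order, assuming plain os.environ access (not the actual patch):

{code}
import os

def hbase_conf_dir():
    """Resolve the HBase conf dir: legacy HBASE_CONF_PATH first for backward
    compatibility, then HBASE_CONF_DIR (what bigtop exports), then the cwd."""
    return (os.environ.get("HBASE_CONF_PATH")
            or os.environ.get("HBASE_CONF_DIR")
            or os.getcwd())

print(hbase_conf_dir())
{code}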

> bin scripts use incorrect variable for locating hbase conf
> ---
>
> Key: PHOENIX-1904
> URL: https://issues.apache.org/jira/browse/PHOENIX-1904
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>
> in bin/phoenix_utils.py, we're building a classpath based on an incorrect 
> environment variable. 'HBASE_CONF_PATH' is used:
> https://github.com/apache/phoenix/blob/master/bin/phoenix_utils.py#L68
>  While bigtop is registering 'HBASE_CONF_DIR' for us.
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hbase/hbase.default#L17
> There's even a local work-around for this problem in end2endTest.py
> https://github.com/apache/phoenix/blob/master/bin/end2endTest.py#L37



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1904) bin scripts use incorrect variable for locating hbase conf

2015-04-21 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506013#comment-14506013
 ] 

Nick Dimiduk commented on PHOENIX-1904:
---

cc [~devaraj], [~gabriel.reid]

> bin scripts use incorrect variable for locating hbase conf
> ---
>
> Key: PHOENIX-1904
> URL: https://issues.apache.org/jira/browse/PHOENIX-1904
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>
> in bin/phoenix_utils.py, we're building a classpath based on an incorrect 
> environment variable. 'HBASE_CONF_PATH' is used:
> https://github.com/apache/phoenix/blob/master/bin/phoenix_utils.py#L68
>  While bigtop is registering 'HBASE_CONF_DIR' for us.
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hbase/hbase.default#L17
> There's even a local work-around for this problem in end2endTest.py
> https://github.com/apache/phoenix/blob/master/bin/end2endTest.py#L37



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking of RC on Thursday

2015-04-21 Thread James Taylor
Another couple that need to go into 4.4.0 release IMO are PHOENIX-1728
(Pherf - Make tests use mini cluster so that unit test run at build
time) and PHOENIX-1727 (Pherf - Port shell scripts to python).
Thanks,
James

On Tue, Apr 21, 2015 at 11:19 AM, James Taylor  wrote:
> You're welcome (and Samarth did the work). Thanks,
>
> James
>
> On Tue, Apr 21, 2015 at 1:19 AM, rajeshb...@apache.org
>  wrote:
>> That's really great work James. Thanks for pointing.
>>
>> On Tue, Apr 21, 2015 at 11:47 AM, James Taylor 
>> wrote:
>>
>>> Good list, Rajeshbabu. Thanks for starting the RC process. One more of
>>> note that's already in:
>>>
>>> - 7.5x performance improvement for non aggregate, unordered queries
>>> (PHOENIX-1779).
>>>
>>> Thanks,
>>> James
>>>
>>> On Mon, Apr 20, 2015 at 2:02 PM, rajeshb...@apache.org
>>>  wrote:
>>> > That's good to have Eli. I have marked 4.4.0 as fix version for the JIRA.
>>> >
>>> > Thanks,
>>> > Rajeshbabu.
>>> >
>>> > On Tue, Apr 21, 2015 at 2:27 AM, Eli Levine  wrote:
>>> >
>>> >> Rajesh, I'm harboring hopes of getting PHOENIX-900 completed by
>>> Thursday.
>>> >> Hopefully it'll end up in 4.4. I'll keep you posted.
>>> >>
>>> >> Thanks
>>> >>
>>> >> Eli
>>> >>
>>> >> On Mon, Apr 20, 2015 at 1:42 PM, rajeshb...@apache.org <
>>> >> chrajeshbab...@gmail.com> wrote:
>>> >>
>>> >> > I'd like to propose we can have 4.4.0 RC on Thursday.
>>> >> > We have got a lot of great stuff in 4.4.0 already:
>>> >> > - 60 bug fixed(which includes fixes from 4.3.1)
>>> >> > - Spart integration
>>> >> > - Query server
>>> >> > - Union All support
>>> >> > - Pherf - load tester measures throughput
>>> >> > - Many math and date/time buit-in functions
>>> >> > - MR job to populate indexes
>>> >> > - Support for 1.0.x (create new 4.4.0 branch for this)
>>> >> >
>>> >> > - PHOENIX-538 Support UDFs JIRA is very close.
>>> >> >
>>> >> > Is there any others that we should try to get in?
>>> >> >
>>> >> > Thanks,
>>> >> > Rajeshbabu.
>>> >> >
>>> >>
>>>


Re: Thinking of RC on Thursday

2015-04-21 Thread Samarth Jain
I would also like to get in
https://issues.apache.org/jira/browse/PHOENIX-1819 (Report resource/metrics
at per phoenix request level) and
https://issues.apache.org/jira/browse/PHOENIX-1899.

On Tue, Apr 21, 2015 at 4:32 PM, James Taylor 
wrote:

> Another couple that need to go into 4.4.0 release IMO are PHOENIX-1728
> (Pherf - Make tests use mini cluster so that unit test run at build
> time) and PHOENIX-1727 (Pherf - Port shell scripts to python).
> Thanks,
> James
>
> On Tue, Apr 21, 2015 at 11:19 AM, James Taylor 
> wrote:
> > You're welcome (and Samarth did the work). Thanks,
> >
> > James
> >
> > On Tue, Apr 21, 2015 at 1:19 AM, rajeshb...@apache.org
> >  wrote:
> >> That's really great work James. Thanks for pointing.
> >>
> >> On Tue, Apr 21, 2015 at 11:47 AM, James Taylor 
> >> wrote:
> >>
> >>> Good list, Rajeshbabu. Thanks for starting the RC process. One more of
> >>> note that's already in:
> >>>
> >>> - 7.5x performance improvement for non aggregate, unordered queries
> >>> (PHOENIX-1779).
> >>>
> >>> Thanks,
> >>> James
> >>>
> >>> On Mon, Apr 20, 2015 at 2:02 PM, rajeshb...@apache.org
> >>>  wrote:
> >>> > That's good to have Eli. I have marked 4.4.0 as fix version for the
> JIRA.
> >>> >
> >>> > Thanks,
> >>> > Rajeshbabu.
> >>> >
> >>> > On Tue, Apr 21, 2015 at 2:27 AM, Eli Levine 
> wrote:
> >>> >
> >>> >> Rajesh, I'm harboring hopes of getting PHOENIX-900 completed by
> >>> Thursday.
> >>> >> Hopefully it'll end up in 4.4. I'll keep you posted.
> >>> >>
> >>> >> Thanks
> >>> >>
> >>> >> Eli
> >>> >>
> >>> >> On Mon, Apr 20, 2015 at 1:42 PM, rajeshb...@apache.org <
> >>> >> chrajeshbab...@gmail.com> wrote:
> >>> >>
> >>> >> > I'd like to propose we can have 4.4.0 RC on Thursday.
> >>> >> > We have got a lot of great stuff in 4.4.0 already:
> >>> >> > - 60 bug fixed(which includes fixes from 4.3.1)
> >>> >> > - Spart integration
> >>> >> > - Query server
> >>> >> > - Union All support
> >>> >> > - Pherf - load tester measures throughput
> >>> >> > - Many math and date/time buit-in functions
> >>> >> > - MR job to populate indexes
> >>> >> > - Support for 1.0.x (create new 4.4.0 branch for this)
> >>> >> >
> >>> >> > - PHOENIX-538 Support UDFs JIRA is very close.
> >>> >> >
> >>> >> > Is there any others that we should try to get in?
> >>> >> >
> >>> >> > Thanks,
> >>> >> > Rajeshbabu.
> >>> >> >
> >>> >>
> >>>
>


[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506042#comment-14506042
 ] 

James Taylor commented on PHOENIX-1682:
---

Committed to 4.x and master. [~samarthjain] - seems like a good one for 4.3 
branch too - what do you think?

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.
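
To make the failure mode concrete, here is a simplified model of identifier normalization (hypothetical helpers; the real rules live in SchemaUtil):

{code}
def normalize_identifier(part):
    """Quoted identifiers keep their case; unquoted ones are upper-cased."""
    if part.startswith('"') and part.endswith('"'):
        return part[1:-1]
    return part.upper()

def normalize_full_table_name(name):
    """Normalize the schema and table components separately, so a name like
    schemaName."tableName" still resolves to the case-sensitive table."""
    return ".".join(normalize_identifier(p) for p in name.split(".", 1))

# Normalizing the whole string at once would mangle the quoted part;
# splitting first preserves the intended case of "tableName".
print(normalize_full_table_name('mySchema."tableName"'))  # MYSCHEMA.tableName
{code}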



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1905) Update pom for 4.x branch to 0.98.12

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1905:
--
Attachment: PHOENIX-1905.patch

> Update pom for 4.x branch to 0.98.12
> 
>
> Key: PHOENIX-1905
> URL: https://issues.apache.org/jira/browse/PHOENIX-1905
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-1905.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1905) Update pom for 4.x branch to 0.98.12

2015-04-21 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1905:
-

 Summary: Update pom for 4.x branch to 0.98.12
 Key: PHOENIX-1905
 URL: https://issues.apache.org/jira/browse/PHOENIX-1905
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Attachments: PHOENIX-1905.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1905) Update pom for 4.x branch to 0.98.12

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506045#comment-14506045
 ] 

James Taylor commented on PHOENIX-1905:
---

[~samarthjain] - look ok?

> Update pom for 4.x branch to 0.98.12
> 
>
> Key: PHOENIX-1905
> URL: https://issues.apache.org/jira/browse/PHOENIX-1905
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-1905.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: Surface partial saves in CommitExcepiton (PH...

2015-04-21 Thread elilevine
GitHub user elilevine reopened a pull request:

https://github.com/apache/phoenix/pull/37

Surface partial saves in CommitExcepiton (PHOENIX-900)

https://issues.apache.org/jira/browse/PHOENIX-900

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/elilevine/apache-phoenix PHOENIX-900

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/37.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #37


commit 10754f7ab71d62349acb2e18e79a91ded7bc6c62
Author: Eli Levine 
Date:   2015-02-06T00:28:33Z

First pass at impl. Not tested.

commit d8ea1db808434ca1d159d6316105d50dbd0bca82
Author: Eli Levine 
Date:   2015-02-10T01:40:28Z

First attempt at a new integ test for partial commits using a coproc. Still 
WIP.

commit 34302cfac39d3d893103a99886d10a3fc8e65d58
Author: Eli Levine 
Date:   2015-02-11T23:16:28Z

Test now successfully verifies partial upsert failure. Statement ordering 
still borked. Prolly need to swith to keeping order of statement execution 
instead of creation.

commit 6f7bc57646543a08222e652f264ee02b6b3e1241
Author: Eli Levine 
Date:   2015-02-12T00:41:34Z

Switched to keeping a count of statement executions instead of creations, 
which is ultimately used in MutationState. Basic partial commit test working.

commit 74e3ba6ce1bf12de62d1b3859e23b6ce224bd370
Author: Eli Levine 
Date:   2015-02-12T02:04:14Z

Modify PartialCommitIT to add select clause

commit 265b2e90056bca13b8ba59efba15841c9a8658ef
Author: Eli Levine 
Date:   2015-02-13T01:26:51Z

Separated getting and incrementing of PhoenixConnection's statement 
execution counter. More tests. Still WIP.

commit 0f8958b3debb2f391a230f51b29fe102b426c9be
Author: Eli Levine 
Date:   2015-02-15T22:55:21Z

Fix partial success w/ delete test

commit 29ae22f5c3bec7dbd659b16edcaa4904fd1c1cd8
Author: Eli Levine 
Date:   2015-02-16T04:43:03Z

Clarify that partial save statement tracking is only supported for 
operations on data, not metadata.

commit 5ee273cf36729fab0bf9613bfead5933c920bd35
Author: Eli Levine 
Date:   2015-02-16T19:55:30Z

Add data verification after partial saves to PartialCommitIT

commit 6619e44e20e0a923dbe817191ae9c06266856e7d
Author: Eli Levine 
Date:   2015-02-17T21:15:18Z

Carry statement counter in PhoenixConnection copy constructors. Add more 
partial commit testing around SELECT UPSERT. Formatting fixes.

commit bfe6a537a4cb1256962c679e36d1a33250623af6
Author: Eli Levine 
Date:   2015-02-17T21:27:14Z

Method name change for CommitException

commit 596e48f6e82fbaa0e21e26045a8b789882392a03
Author: Eli Levine 
Date:   2015-02-18T19:56:57Z

Switch from Set to int[] to hold statement indexes in MutationState

commit 5fa951973eeccea338632696de03d7256c0639a3
Author: Eli Levine 
Date:   2015-02-23T22:52:06Z

Switch back to using HashMap for MutationState.mutations and only use 
TreeMap for testing only.

commit b0e44da6d408fa40c1a53209a9a8a7f635ff43aa
Author: Eli Levine 
Date:   2015-02-23T23:28:41Z

Merge upstream changes

commit 48a939268ec0056c44d38d4cfe666a6cab9aa1cb
Author: Eli Levine 
Date:   2015-02-26T20:11:23Z

Slightly modify how MutationState.mutations is overriden in tests

commit 935ecc0d32aba8cda77bef05486649686a050fa9
Author: Eli Levine 
Date:   2015-02-27T04:25:16Z

Improve constructor chaining slightly in MutationState

commit 3b114ae584412099370dcde81aa87262e066e49f
Author: Eli Levine 
Date:   2015-02-27T04:37:07Z

Merge branch 'master' of https://github.com/apache/phoenix into PHOENIX-900

commit 374c51fe697d5dbba6dc972fcc8a29c11c213dce
Author: Eli Levine 
Date:   2015-04-21T22:28:34Z

Merge changes from upstream/master

commit 84ea3522707c9b83315e819419300cd2f3a4b27e
Author: Eli Levine 
Date:   2015-04-21T23:47:18Z

Fix bug around deletion mutations in MutationState.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1905) Update pom for 4.x branch to 0.98.12

2015-04-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506052#comment-14506052
 ] 

Samarth Jain commented on PHOENIX-1905:
---

+1

> Update pom for 4.x branch to 0.98.12
> 
>
> Key: PHOENIX-1905
> URL: https://issues.apache.org/jira/browse/PHOENIX-1905
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-1905.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506054#comment-14506054
 ] 

ASF GitHub Bot commented on PHOENIX-900:


GitHub user elilevine reopened a pull request:

https://github.com/apache/phoenix/pull/37

Surface partial saves in CommitExcepiton (PHOENIX-900)

https://issues.apache.org/jira/browse/PHOENIX-900

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/elilevine/apache-phoenix PHOENIX-900

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/37.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #37


commit 10754f7ab71d62349acb2e18e79a91ded7bc6c62
Author: Eli Levine 
Date:   2015-02-06T00:28:33Z

First pass at impl. Not tested.

commit d8ea1db808434ca1d159d6316105d50dbd0bca82
Author: Eli Levine 
Date:   2015-02-10T01:40:28Z

First attempt at a new integ test for partial commits using a coproc. Still 
WIP.

commit 34302cfac39d3d893103a99886d10a3fc8e65d58
Author: Eli Levine 
Date:   2015-02-11T23:16:28Z

Test now successfully verifies partial upsert failure. Statement ordering 
still borked. Prolly need to swith to keeping order of statement execution 
instead of creation.

commit 6f7bc57646543a08222e652f264ee02b6b3e1241
Author: Eli Levine 
Date:   2015-02-12T00:41:34Z

Switched to keeping a count of statement executions instead of creations, 
which is ultimately used in MutationState. Basic partial commit test working.

commit 74e3ba6ce1bf12de62d1b3859e23b6ce224bd370
Author: Eli Levine 
Date:   2015-02-12T02:04:14Z

Modify PartialCommitIT to add select clause

commit 265b2e90056bca13b8ba59efba15841c9a8658ef
Author: Eli Levine 
Date:   2015-02-13T01:26:51Z

Separated getting and incrementing of PhoenixConnection's statement 
execution counter. More tests. Still WIP.

commit 0f8958b3debb2f391a230f51b29fe102b426c9be
Author: Eli Levine 
Date:   2015-02-15T22:55:21Z

Fix partial success w/ delete test

commit 29ae22f5c3bec7dbd659b16edcaa4904fd1c1cd8
Author: Eli Levine 
Date:   2015-02-16T04:43:03Z

Clarify that partial save statement tracking is only supported for 
operations on data, not metadata.

commit 5ee273cf36729fab0bf9613bfead5933c920bd35
Author: Eli Levine 
Date:   2015-02-16T19:55:30Z

Add data verification after partial saves to PartialCommitIT

commit 6619e44e20e0a923dbe817191ae9c06266856e7d
Author: Eli Levine 
Date:   2015-02-17T21:15:18Z

Carry statement counter in PhoenixConnection copy constructors. Add more 
partial commit testing around SELECT UPSERT. Formatting fixes.

commit bfe6a537a4cb1256962c679e36d1a33250623af6
Author: Eli Levine 
Date:   2015-02-17T21:27:14Z

Method name change for CommitException

commit 596e48f6e82fbaa0e21e26045a8b789882392a03
Author: Eli Levine 
Date:   2015-02-18T19:56:57Z

Switch from Set to int[] to hold statement indexes in MutationState

commit 5fa951973eeccea338632696de03d7256c0639a3
Author: Eli Levine 
Date:   2015-02-23T22:52:06Z

Switch back to using HashMap for MutationState.mutations and only use 
TreeMap for testing only.

commit b0e44da6d408fa40c1a53209a9a8a7f635ff43aa
Author: Eli Levine 
Date:   2015-02-23T23:28:41Z

Merge upstream changes

commit 48a939268ec0056c44d38d4cfe666a6cab9aa1cb
Author: Eli Levine 
Date:   2015-02-26T20:11:23Z

Slightly modify how MutationState.mutations is overriden in tests

commit 935ecc0d32aba8cda77bef05486649686a050fa9
Author: Eli Levine 
Date:   2015-02-27T04:25:16Z

Improve constructor chaining slightly in MutationState

commit 3b114ae584412099370dcde81aa87262e066e49f
Author: Eli Levine 
Date:   2015-02-27T04:37:07Z

Merge branch 'master' of https://github.com/apache/phoenix into PHOENIX-900

commit 374c51fe697d5dbba6dc972fcc8a29c11c213dce
Author: Eli Levine 
Date:   2015-04-21T22:28:34Z

Merge changes from upstream/master

commit 84ea3522707c9b83315e819419300cd2f3a4b27e
Author: Eli Levine 
Date:   2015-04-21T23:47:18Z

Fix bug around deletion mutations in MutationState.




> Partial results for mutations
> -
>
> Key: PHOENIX-900
> URL: https://issues.apache.org/jira/browse/PHOENIX-900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Eli Levine
>Assignee: Eli Levine
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-900.patch
>
>
> HBase provides a way to retrieve partial results of a batch operation: 
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
> Chatted with James about this offline:
> Yes, this could be included in the CommitExcepti

[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506056#comment-14506056
 ] 

Samarth Jain commented on PHOENIX-1682:
---

Offhand I can't think of any backward compatibility issues here; probably there 
aren't any. If so, +1 for 4.3.

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1682.
---
   Resolution: Fixed
Fix Version/s: 4.3.2
   4.4.0
   5.0.0

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
> Fix For: 5.0.0, 4.4.0, 4.3.2
>
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1906) With a large number of guideposts, queries that iterate over a small range get starved when running concurrently with larger queries

2015-04-21 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-1906:
---

 Summary: With a large number of guideposts, queries that iterate 
over a small range get starved when running concurrently with larger queries
 Key: PHOENIX-1906
 URL: https://issues.apache.org/jira/browse/PHOENIX-1906
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan


Consider a scenario with a single region server. The table has 500 guide posts, 
and the data is large enough that it fits in neither the HBase block cache nor 
the OS page cache, so scans block on disk I/O. Run the following two queries 
concurrently, with and without stats enabled:

{code}select count(*) from table{code}
{code}select * from table limit 10{code}

With stats *disabled*, the average times for these two queries are 100 sec and 
100 ms respectively. With stats *enabled*, the long-running count aggregate drops 
to 8 seconds, but the limit query increases to 3 seconds. The degradation in 
limit query time is even more evident when the concurrency level is further 
increased.

[~jamestaylor]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1899) Performance regression for non-aggregate, unordered queries returning 0 or few records

2015-04-21 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1899:
--
Attachment: PHOENIX-1899.patch

Attached a patch that creates an iterator which calls scanner.next() in the 
constructor. This lets us do the I/O for the first batch of each scan in 
parallel. [~jamestaylor] - please review when you get a chance. Tests passed 
locally for me.
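
A toy illustration of the idea (not the patch itself): start the first fetch for every scanner at construction time, so the first batches are read in parallel rather than serially on the first next() call.

{code}
from concurrent.futures import ThreadPoolExecutor

class EagerFetchIterator:
    """Wraps a scanner and kicks off its first fetch in the constructor."""
    def __init__(self, scanner, executor):
        self._scanner = scanner
        # Submitting here means I/O for the first batch overlaps with the
        # construction of the remaining iterators.
        self._first = executor.submit(next, scanner, None)

    def __iter__(self):
        first = self._first.result()
        if first is not None:
            yield first
        for item in self._scanner:
            yield item

with ThreadPoolExecutor(max_workers=4) as pool:
    scanners = [iter(range(i, i + 3)) for i in range(3)]  # stand-ins for scans
    iterators = [EagerFetchIterator(s, pool) for s in scanners]
    print([list(it) for it in iterators])
{code}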

> Performance regression for non-aggregate, unordered queries returning 0 or 
> few records
> --
>
> Key: PHOENIX-1899
> URL: https://issues.apache.org/jira/browse/PHOENIX-1899
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-1899.patch
>
>
> Details on here:
> Apache Phoenix Performance
> Suite: standard
> Comparison: 
> v4.1.0,0.98.7-hadoop2;v4.2.2,0.98.7-hadoop2;v4.3.0,0.98.7-hadoop2;4.x-HBase-0.98,0.98.7-hadoop2
> Results: http://phoenix-bin.github.io/client/performance/latest.htm
> History: http://phoenix-bin.github.io/client/performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1769) Exception thrown when TO_DATE function used as LHS in WHERE-clause

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506082#comment-14506082
 ] 

James Taylor commented on PHOENIX-1769:
---

[~tdsilva] - would you mind reviewing and committing the unit tests and closing 
this as CNR?

> Exception thrown when TO_DATE function used as LHS in WHERE-clause
> --
>
> Key: PHOENIX-1769
> URL: https://issues.apache.org/jira/browse/PHOENIX-1769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Sanghyun Yun
>Assignee: Soumen Bandyopadhyay
>  Labels: Newbie
> Attachments: PHOENIX-1769.patch
>
>
> I want to compare DATE type that converted from VARCHAR type using TO_DATE().
> Query :
> {quote}
> select BIRTH from yunsh where TO_DATE(BIRTH) > TO_DATE('2001-01-01');
> {quote}
> BIRTH field is VARCHAR(50) type.
> But,  it thrown exception below:
> {quote}
> Mon Mar 23 15:37:53 KST 2015, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@4ef6823, 
> java.io.IOException: java.io.IOException: 
> java.lang.reflect.InvocationTargetException
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1362)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:917)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3078)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1360)
> ... 8 more
> Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: 
> java.io.IOException: java.lang.reflect.InvocationTargetException
> at 
> org.apache.hadoop.hbase.filter.FilterList.parseFrom(FilterList.java:406)
> ... 12 more
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1362)
> at 
> org.apache.hadoop.hbase.filter.FilterList.parseFrom(FilterList.java:403)
> ... 12 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1360)
> ... 13 more
> Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: BooleanExpressionFilter failed 
> during reading: Could not initialize class 
> org.apache.phoenix.util.DateUtil$ISODateFormatParser
> at 
> org.apache.phoenix.filter.SingleCQKeyValueComparisonFilter.parseFrom(SingleCQKeyValueComparisonFilter.java:55)
> ... 17 more
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> BooleanExpressionFilter failed during reading: Could not initialize class 
> org.apache.phoenix.util.DateUtil$ISODateFormatParser
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
> at 
> org.apache.phoenix.filter.BooleanExpressionFilter.readFields(BooleanExpressionFilter.java:108)
> at 
> org.apache.phoenix.filter.SingleKeyValueComparisonFilter.readFields(SingleKeyValueComparisonFilter.java:136)
> at 
> org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:131)
> at 
> org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:101)
> at 
> org.apache.phoenix.filter.SingleCQKeyValueComparisonFilter.parseFrom(SingleCQKeyValueComparisonFilter.java:53)
> ... 17 more
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.phoenix.util.DateUtil$ISODateFormatParser
> at 
> org.apache.phoenix.util.DateUtil$ISODateFormatParserFactory.getParser(Date

[jira] [Resolved] (PHOENIX-1707) Compare (where) DATE<> UNSIGNED_DATE raise NPE

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1707.
---
Resolution: Cannot Reproduce

> Compare (where) DATE<> UNSIGNED_DATE raise NPE
> --
>
> Key: PHOENIX-1707
> URL: https://issues.apache.org/jira/browse/PHOENIX-1707
> Project: Phoenix
>  Issue Type: Bug
> Environment: Phoenix 4.3, hbase 0.98.6
>Reporter: Lavrenty Eskin
>Assignee: Abhishek Sreenivasa
>  Labels: Newbie
>
> select
> FLOOR(REGDATE, 'DAY'),<<== UNSIGNED_DATE
>TO_DATE('2015-03-06', 'yyyy-MM-dd')  <<== DATE
> from
> "ALARMSHISTORY" as "ALARMSHISTORY"
> where
>  FLOOR(REGDATE, 'DAY') = TO_DATE('2015-03-06', 'yyyy-MM-dd'); <<== raise 
> java.lang.NullPointerException
> (!) There are no NULLs in REGDATE
> Ways to resolve:
> a) Implement TO_UNSIGNED_DATE function
> b) Implement correct compare function DATE<> UNSIGNED_DATE



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506069#comment-14506069
 ] 

Eli Levine commented on PHOENIX-1682:
-

+1 for getting this into 4.3

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506097#comment-14506097
 ] 

James Taylor commented on PHOENIX-900:
--

[~elilevine] - thanks for the fix. Is the bug you fixed an existing bug, or 
just a bug in your pull request?

> Partial results for mutations
> -
>
> Key: PHOENIX-900
> URL: https://issues.apache.org/jira/browse/PHOENIX-900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Eli Levine
>Assignee: Eli Levine
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-900.patch
>
>
> HBase provides a way to retrieve partial results of a batch operation: 
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
> Chatted with James about this offline:
> Yes, this could be included in the CommitException we throw 
> (MutationState:412). We already include the batches that have been 
> successfully committed to the HBase server in this exception. Would you be up 
> for adding this additional information? You'd want to surface this in a 
> Phoenix-y way in a method on CommitException, something like this: ResultSet 
> getPartialCommits(). You can easily create an in memory ResultSet using 
> MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
> this (just create a new empty PhoenixStatement with the PhoenixConnection for 
> the other arg).
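
In spirit, the API being described boils down to something like the following (names are illustrative only, not Phoenix's actual classes, and a plain list stands in for the ResultSet):

{code}
class PartialCommitError(Exception):
    """Raised when only part of a batch was committed; carries enough
    information for the caller to see which statements went through."""
    def __init__(self, committed_statement_indexes, cause):
        super().__init__("batch partially committed: %s" % cause)
        self._committed = list(committed_statement_indexes)

    def get_partial_commits(self):
        return list(self._committed)

try:
    # Pretend statements 0, 1 and 3 of a 5-statement batch were committed.
    raise PartialCommitError([0, 1, 3], "region server timeout")
except PartialCommitError as e:
    print("committed statement indexes:", e.get_partial_commits())
{code}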



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1906) With a large number of guideposts, queries that iterate over a small range get starved when running concurrently with larger queries

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506100#comment-14506100
 ] 

James Taylor commented on PHOENIX-1906:
---

[~ram_krish] - this is *exactly* the scenario that gets fixed with a round 
robin executor.
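
The intuition, reduced to a toy scheduler (a sketch of the round-robin idea, not the HBASE-12790 patch): interleave the scan chunks of concurrent queries instead of letting one query's long backlog of guidepost-sized scans run first.

{code}
from collections import deque
from itertools import islice

def round_robin(queries):
    """queries maps a query name to an iterable of scan chunks. Chunks are
    yielded one per query per round, so a 10-chunk LIMIT query is not stuck
    behind a 500-chunk aggregate."""
    pending = deque((name, iter(chunks)) for name, chunks in queries.items())
    while pending:
        name, chunks = pending.popleft()
        chunk = next(chunks, None)
        if chunk is None:
            continue  # this query is finished
        yield name, chunk
        pending.append((name, chunks))

schedule = round_robin({"count(*)": range(500), "limit 10": range(10)})
print(list(islice(schedule, 6)))  # the two queries alternate chunk by chunk
{code}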

> With a large number of guideposts, queries that iterate over a small range get 
> starved when running concurrently with larger queries
> ---
>
> Key: PHOENIX-1906
> URL: https://issues.apache.org/jira/browse/PHOENIX-1906
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>
> Consider a scenario with a single region server. The table has 500 guide posts, 
> and the data is large enough that it fits in neither the HBase block cache nor 
> the OS page cache, so scans block on disk I/O. Run the following two queries 
> concurrently, with and without stats enabled:
> {code}select count(*) from table{code}
> {code}select * from table limit 10{code}
> With stats *disabled*, the average times for these two queries are 100 sec and 
> 100 ms respectively. With stats *enabled*, the long-running count aggregate 
> drops to 8 seconds, but the limit query increases to 3 seconds. The degradation 
> in limit query time is even more evident when the concurrency level is further 
> increased.
> [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-1682:
---

Did you run the unit tests before submitting your patch, [~ivanweiss]? Looks 
like we're getting the following failure. Would you mind submitting a patch for 
this?
{code}
FAILURE! - in org.apache.phoenix.end2end.QueryMoreIT
testQueryMore1(org.apache.phoenix.end2end.QueryMoreIT)  Time elapsed: 3.45 sec  
<<< ERROR!
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName="HISTORY_TABLE_00Dxtenant1"
at 
org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
at 
org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:313)
at 
org.apache.phoenix.util.PhoenixRuntime.encodeValues(PhoenixRuntime.java:839)
at 
org.apache.phoenix.end2end.QueryMoreIT.getRecordsOutofCursorTable(QueryMoreIT.java:268)
at 
org.apache.phoenix.end2end.QueryMoreIT.testQueryMore(QueryMoreIT.java:142)
at 
org.apache.phoenix.end2end.QueryMoreIT.testQueryMore1(QueryMoreIT.java:50)

testQueryMore4(org.apache.phoenix.end2end.QueryMoreIT)  Time elapsed: 0.38 sec  
<<< ERROR!
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName="HISTORY_TABLE_00Dxtenant1"
at 
org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
at 
org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:313)
at 
org.apache.phoenix.util.PhoenixRuntime.encodeValues(PhoenixRuntime.java:839)
at 
org.apache.phoenix.end2end.QueryMoreIT.getRecordsOutofCursorTable(QueryMoreIT.java:268)
at 
org.apache.phoenix.end2end.QueryMoreIT.testQueryMore(QueryMoreIT.java:142)
at 
org.apache.phoenix.end2end.QueryMoreIT.testQueryMore4(QueryMoreIT.java:68)
{code}

[~samarthjain] - will this impact the query more in production, or would these 
generated table names not require normalization?

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
> Fix For: 5.0.0, 4.4.0, 4.3.2
>
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1899) Performance regression for non-aggregate, unordered queries returning 0 or few records

2015-04-21 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1899:
--
Attachment: PHOENIX-1899.patch

Thinking about this more, this patch will end up creating scanners and calling 
next() on them even in cases where Phoenix has decided the query should run 
serially. Altered the patch to do this:

{code}
if (isSerial(context, table, orderBy, limit, allowPageFilter)) {
    return ParallelIteratorFactory.NOOP_FACTORY;
}
if (ScanUtil.isRoundRobinPossible(orderBy, context)) {
    return ParallelIteratorFactory.FETCH_ON_CREATE_FACTORY;
}
{code}

> Performance regression for non-aggregate, unordered queries returning 0 or 
> few records
> --
>
> Key: PHOENIX-1899
> URL: https://issues.apache.org/jira/browse/PHOENIX-1899
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-1899.patch
>
>
> Details on here:
> Apache Phoenix Performance
> Suite: standard
> Comparison: 
> v4.1.0,0.98.7-hadoop2;v4.2.2,0.98.7-hadoop2;v4.3.0,0.98.7-hadoop2;4.x-HBase-0.98,0.98.7-hadoop2
> Results: http://phoenix-bin.github.io/client/performance/latest.htm
> History: http://phoenix-bin.github.io/client/performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1899) Performance regression for non-aggregate, unordered queries returning 0 or few records

2015-04-21 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1899:
--
Attachment: (was: PHOENIX-1899.patch)

> Performance regression for non-aggregate, unordered queries returning 0 or 
> few records
> --
>
> Key: PHOENIX-1899
> URL: https://issues.apache.org/jira/browse/PHOENIX-1899
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>
> Details on here:
> Apache Phoenix Performance
> Suite: standard
> Comparison: 
> v4.1.0,0.98.7-hadoop2;v4.2.2,0.98.7-hadoop2;v4.3.0,0.98.7-hadoop2;4.x-HBase-0.98,0.98.7-hadoop2
> Results: http://phoenix-bin.github.io/client/performance/latest.htm
> History: http://phoenix-bin.github.io/client/performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-21 Thread Eli Levine (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506141#comment-14506141
 ] 

Eli Levine commented on PHOENIX-900:


It was a bug in my original PR that caused a bunch of test failures. Thanks for 
taking a look! Will commit to master and 4.x-HBase-0.98 now.

> Partial results for mutations
> -
>
> Key: PHOENIX-900
> URL: https://issues.apache.org/jira/browse/PHOENIX-900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Eli Levine
>Assignee: Eli Levine
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-900.patch
>
>
> HBase provides a way to retrieve partial results of a batch operation: 
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
> Chatted with James about this offline:
> Yes, this could be included in the CommitException we throw 
> (MutationState:412). We already include the batches that have been 
> successfully committed to the HBase server in this exception. Would you be up 
> for adding this additional information? You'd want to surface this in a 
> Phoenix-y way in a method on CommitException, something like this: ResultSet 
> getPartialCommits(). You can easily create an in memory ResultSet using 
> MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
> this (just create a new empty PhoenixStatement with the PhoenixConnection for 
> the other arg).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506145#comment-14506145
 ] 

James Taylor commented on PHOENIX-900:
--

+1. Thanks, Eli.


> Partial results for mutations
> -
>
> Key: PHOENIX-900
> URL: https://issues.apache.org/jira/browse/PHOENIX-900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Eli Levine
>Assignee: Eli Levine
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-900.patch
>
>
> HBase provides a way to retrieve partial results of a batch operation: 
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
> Chatted with James about this offline:
> Yes, this could be included in the CommitException we throw 
> (MutationState:412). We already include the batches that have been 
> successfully committed to the HBase server in this exception. Would you be up 
> for adding this additional information? You'd want to surface this in a 
> Phoenix-y way in a method on CommitException, something like this: ResultSet 
> getPartialCommits(). You can easily create an in memory ResultSet using 
> MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
> this (just create a new empty PhoenixStatement with the PhoenixConnection for 
> the other arg).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1899) Performance regression for non-aggregate, unordered queries returning 0 or few records

2015-04-21 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1899:
--
Attachment: PHOENIX-1899_v2.patch

How about just adding a call to peek() before returning from the call() method, 
as in this patch?

> Performance regression for non-aggregate, unordered queries returning 0 or 
> few records
> --
>
> Key: PHOENIX-1899
> URL: https://issues.apache.org/jira/browse/PHOENIX-1899
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-1899.patch, PHOENIX-1899_v2.patch
>
>
> Details on here:
> Apache Phoenix Performance
> Suite: standard
> Comparison: 
> v4.1.0,0.98.7-hadoop2;v4.2.2,0.98.7-hadoop2;v4.3.0,0.98.7-hadoop2;4.x-HBase-0.98,0.98.7-hadoop2
> Results: http://phoenix-bin.github.io/client/performance/latest.htm
> History: http://phoenix-bin.github.io/client/performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1903) bin scripts locate wrong client jar

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506210#comment-14506210
 ] 

Hudson commented on PHOENIX-1903:
-

FAILURE: Integrated in Phoenix-master #702 (See 
[https://builds.apache.org/job/Phoenix-master/702/])
PHOENIX-1903 bin scripts locate wrong client jar (ndimiduk: rev 
75d073025772c7642a3b27379acfcc24d8e92407)
* bin/phoenix_utils.py


> bin scripts locate wrong client jar
> ---
>
> Key: PHOENIX-1903
> URL: https://issues.apache.org/jira/browse/PHOENIX-1903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 5.0.0, 4.4.0
>
> Attachments: 1903.patch
>
>
> After PHOENIX-971, the bin scripts like sqlline.py will sometimes produce a 
> classpath containing thin-client.jar instead of client.jar. Sometimes, 
> because os.walk makes no ordering guarantees and the implementation appears 
> os-dependent. Fix this so matching works consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1873) Fix compilation errors in Pherf

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506209#comment-14506209
 ] 

Hudson commented on PHOENIX-1873:
-

FAILURE: Integrated in Phoenix-master #702 (See 
[https://builds.apache.org/job/Phoenix-master/702/])
PHOENIX-1873 Fix compilation errors in Pherf (Cody Marcel, James Taylor) 
(jtaylor: rev 572fa3c65b1ffbb39741079059d5c94b3bb7904b)
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/DataTypeMapping.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/ResultUtil.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/jmx/monitors/ExampleMonitor.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/QueryExecutor.java
* 
phoenix-pherf/src/test/java/org/apache/phoenix/pherf/ConfigurationParserTest.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/QuerySet.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/DataLoadThreadTime.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/exception/FileLoaderRuntimeException.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/RunTime.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/DataLoadTimeSummary.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/QuerySetResult.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/loaddata/DataLoader.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/Result.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/ResourceList.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/ScenarioResult.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/ThreadTime.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/impl/CSVResultHandler.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/DataModel.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/QueryResult.java
* phoenix-pherf/src/test/java/org/apache/phoenix/pherf/DataLoaderTest.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Scenario.java
* phoenix-pherf/src/test/java/org/apache/phoenix/pherf/BaseTestWithCluster.java
* phoenix-pherf/src/main/java/org/apache/phoenix/pherf/jmx/MonitorManager.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/DataModelResult.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/XMLConfigParser.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/DataOverride.java
* 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/exception/FileLoaderException.java


> Fix compilation errors in Pherf
> ---
>
> Key: PHOENIX-1873
> URL: https://issues.apache.org/jira/browse/PHOENIX-1873
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Cody Marcel
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-1873.patch, PHOENIX-1873_v2.patch
>
>
> Please fix the compilation errors in Pherf. FYI, the default settings for 
> Eclipse can be found in dev/eclipse_prefs_phoenix.epf as described here: 
> http://phoenix.apache.org/contributing.html#Code_conventions
> {code}
> The method writeXML() from the type ConfigurationParserTest is never used locally | ConfigurationParserTest.java | /pherf/src/test/java/org/apache/phoenix/pherf | line 141 | Java Problem
> The value of the field DataLoader.properties is not used | DataLoader.java | /pherf/src/main/java/org/apache/phoenix/pherf/loaddata | line 61 | Java Problem
> The value of the field DataLoaderTest.loader is not used | DataLoaderTest.java | /pherf/src/test/java/org/apache/phoenix/pherf | line 34 | Java Problem
> The value of the field DataLoaderTest.model is not used | DataLoaderTest.java | /pherf/src/test/java/org/apache/phoenix/pherf | line 33 | Java Problem
> The value of the field QueryExecutor.resultUtil is not used | QueryExecutor.java | /pherf/src/main/java/org/apache/phoenix/pherf/workload | line 47 | Java Problem
> The value of the field Result.type is not used | Result.java | /pherf/src/main/java/org/apache/phoenix/pherf/result | line 30 | Java Problem
> Type String[] of the last argument to method printRecord(Object...) doesn't exactly match the vararg parameter type. Cast to Object[] to confirm the non-varargs invocation, or pass individual arguments of type Object for a varargs invocation. | CSVResultHandler.java | /pherf/src/main/java/org/apache/phoenix/pherf/result/impl | line 126 | Java Problem
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1899) Performance regression for non-aggregate, unordered queries returning 0 or few records

2015-04-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506284#comment-14506284
 ] 

Samarth Jain commented on PHOENIX-1899:
---

Doh! Why didn't I think of that? +1. I will go ahead and commit your version.

> Performance regression for non-aggregate, unordered queries returning 0 or 
> few records
> --
>
> Key: PHOENIX-1899
> URL: https://issues.apache.org/jira/browse/PHOENIX-1899
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-1899.patch, PHOENIX-1899_v2.patch
>
>
> Details on here:
> Apache Phoenix Performance
> Suite: standard
> Comparison: 
> v4.1.0,0.98.7-hadoop2;v4.2.2,0.98.7-hadoop2;v4.3.0,0.98.7-hadoop2;4.x-HBase-0.98,0.98.7-hadoop2
> Results: http://phoenix-bin.github.io/client/performance/latest.htm
> History: http://phoenix-bin.github.io/client/performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506338#comment-14506338
 ] 

Hudson commented on PHOENIX-900:


FAILURE: Integrated in Phoenix-master #703 (See 
[https://builds.apache.org/job/Phoenix-master/703/])
PHOENIX-900 Partial results for mutations (elilevine: rev 
67c4c4597c9154f19f977e0747a5258daa766486)
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/CommitException.java
* phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
* phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
* 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixPreparedStatement.java


> Partial results for mutations
> -
>
> Key: PHOENIX-900
> URL: https://issues.apache.org/jira/browse/PHOENIX-900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Eli Levine
>Assignee: Eli Levine
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-900.patch
>
>
> HBase provides a way to retrieve partial results of a batch operation: 
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
> Chatted with James about this offline:
> Yes, this could be included in the CommitException we throw 
> (MutationState:412). We already include the batches that have been 
> successfully committed to the HBase server in this exception. Would you be up 
> for adding this additional information? You'd want to surface this in a 
> Phoenix-y way in a method on CommitException, something like this: ResultSet 
> getPartialCommits(). You can easily create an in memory ResultSet using 
> MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
> this (just create a new empty PhoenixStatement with the PhoenixConnection for 
> the other arg).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506337#comment-14506337
 ] 

Hudson commented on PHOENIX-1682:
-

FAILURE: Integrated in Phoenix-master #703 (See 
[https://builds.apache.org/job/Phoenix-master/703/])
PHOENIX-1682 PhoenixRuntime.getTable() does not work with case-sensitive table 
names (Ivan Weiss) (jtaylor: rev ed7d0e978ecc46dec7fd2ae1eb99a32de9c8c32a)
* phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
* phoenix-core/src/test/java/org/apache/phoenix/util/PhoenixRuntimeTest.java


> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
> Fix For: 5.0.0, 4.4.0, 4.3.2
>
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.
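
For illustration, a rough sketch of the fix direction: split the full name into 
schema and table components before normalizing each one. The helper names below 
(getSchemaNameFromFullName, getTableNameFromFullName) are assumed from SchemaUtil's 
existing utilities and may not match the committed change.

{code}
import org.apache.phoenix.util.SchemaUtil;

public class NormalizeExample {
    public static void main(String[] args) {
        String fullName = "MYSCHEMA.\"myTable\"";

        // Normalizing the full name in one shot upper-cases the quoted part too,
        // so the case-sensitive table name is lost.
        System.out.println(SchemaUtil.normalizeIdentifier(fullName));

        // Splitting first and normalizing each component keeps the quoted
        // component's case intact (expected: MYSCHEMA.myTable).
        String schema = SchemaUtil.normalizeIdentifier(
                SchemaUtil.getSchemaNameFromFullName(fullName));
        String table = SchemaUtil.normalizeIdentifier(
                SchemaUtil.getTableNameFromFullName(fullName));
        System.out.println(schema + "." + table);
    }
}
{code}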



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-787) CEIL function may produce incorrect results for TIMESTAMP

2015-04-21 Thread Naveen Madhire (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506353#comment-14506353
 ] 

Naveen Madhire commented on PHOENIX-787:


[~samarthjain] I tested this one and both ROUND and CEIL are returning 1000. I 
am working on the REPEAT function now and will get to this once I am done there.

Thanks.

> CEIL function may produce incorrect results for TIMESTAMP
> -
>
> Key: PHOENIX-787
> URL: https://issues.apache.org/jira/browse/PHOENIX-787
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>  Labels: Newbie
>
> In the CEIL function, we only consider nanos when the time unit MILLISECONDS is 
> used. However, we should consider them for other time units as well. For 
> example, if the time unit is SECONDS and the TIMESTAMP value happens to be an 
> exact multiple of 1000 milliseconds but there are nanoseconds remaining, then 
> CEIL should still round the TIMESTAMP up to the next increment.
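
As a concrete illustration of the rule described above (a sketch only, not 
Phoenix's actual CEIL implementation), leftover nanoseconds should force a 
round-up to the next second even when the millisecond value already sits exactly 
on a second boundary:

{code}
import java.sql.Timestamp;

public class CeilToSecondExample {
    // Sketch of the intended rule: any nanos beyond the millisecond part force a
    // round-up, even when the millisecond value is an exact multiple of 1000.
    static Timestamp ceilToSecond(Timestamp ts) {
        long millis = ts.getTime();                        // whole milliseconds
        int nanosBeyondMillis = ts.getNanos() % 1_000_000; // fraction below 1 ms
        long floorSecond = Math.floorDiv(millis, 1000) * 1000;
        boolean exactlyOnSecond = millis == floorSecond && nanosBeyondMillis == 0;
        Timestamp result = new Timestamp(exactlyOnSecond ? millis : floorSecond + 1000);
        result.setNanos(0);
        return result;
    }

    public static void main(String[] args) {
        Timestamp ts = new Timestamp(2000L); // exactly 2 seconds after the epoch
        ts.setNanos(500);                    // plus 500 ns, below the millisecond level
        System.out.println(ceilToSecond(ts)); // rounds up to the 3-second boundary
    }
}
{code}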



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1906) With large number of guideposts, queries that iterate over a small range gets starved when running concurrently with larger queries

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506358#comment-14506358
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1906:
-

[~giacomotaylor]
I understand the real case here.  It is probably better to highlight this in 
the HBase community through that JIRA. Let me do that.  I have updated a patch 
there in HBASE-12790 after testing in a real cluster environment.  Some 
interesting bug was found and fixed.
As I said the only problem that I face is to write a real time test case, which 
I am figuring out a way probably that would work.

> With large number of guideposts, queries that iterate over a small range gets 
> starved when running concurrently with larger queries
> ---
>
> Key: PHOENIX-1906
> URL: https://issues.apache.org/jira/browse/PHOENIX-1906
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>
> Consider the scenario with a single region server. Table has 500 guide posts 
> (data is large enough that it won't fit either into HBase block or OS page 
> cache so it gets blocked on disk I/O during scans) and running the following 
> 2 queries concurrently with and without stats enabled:
> {code}select count(*) from table{code}
> {code}select * from table limit 10{code}
> With stats *disabled*, the average times for these two queries are 100 sec and 
> 100 ms respectively. However, with stats enabled, the long-running count 
> aggregate query time drops to 8 seconds, but the limit query time increases to 
> 3 seconds. Degradation in the limit query time is even more evident when the 
> concurrency level is further increased.
> [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1769) Exception thrown when TO_DATE function used as LHS in WHERE-clause

2015-04-21 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506416#comment-14506416
 ] 

Thomas D'Silva commented on PHOENIX-1769:
-

Sure, I will commit this tomorrow.

> Exception thrown when TO_DATE function used as LHS in WHERE-clause
> --
>
> Key: PHOENIX-1769
> URL: https://issues.apache.org/jira/browse/PHOENIX-1769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Sanghyun Yun
>Assignee: Soumen Bandyopadhyay
>  Labels: Newbie
> Attachments: PHOENIX-1769.patch
>
>
> I want to compare DATE type that converted from VARCHAR type using TO_DATE().
> Query :
> {quote}
> select BIRTH from yunsh where TO_DATE(BIRTH) > TO_DATE('2001-01-01');
> {quote}
> BIRTH field is VARCHAR(50) type.
> But,  it thrown exception below:
> {quote}
> Mon Mar 23 15:37:53 KST 2015, 
> org.apache.hadoop.hbase.client.RpcRetryingCaller@4ef6823, 
> java.io.IOException: java.io.IOException: 
> java.lang.reflect.InvocationTargetException
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1362)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:917)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3078)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedMethodAccessor110.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1360)
> ... 8 more
> Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: 
> java.io.IOException: java.lang.reflect.InvocationTargetException
> at 
> org.apache.hadoop.hbase.filter.FilterList.parseFrom(FilterList.java:406)
> ... 12 more
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1362)
> at 
> org.apache.hadoop.hbase.filter.FilterList.parseFrom(FilterList.java:403)
> ... 12 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1360)
> ... 13 more
> Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: BooleanExpressionFilter failed 
> during reading: Could not initialize class 
> org.apache.phoenix.util.DateUtil$ISODateFormatParser
> at 
> org.apache.phoenix.filter.SingleCQKeyValueComparisonFilter.parseFrom(SingleCQKeyValueComparisonFilter.java:55)
> ... 17 more
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> BooleanExpressionFilter failed during reading: Could not initialize class 
> org.apache.phoenix.util.DateUtil$ISODateFormatParser
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
> at 
> org.apache.phoenix.filter.BooleanExpressionFilter.readFields(BooleanExpressionFilter.java:108)
> at 
> org.apache.phoenix.filter.SingleKeyValueComparisonFilter.readFields(SingleKeyValueComparisonFilter.java:136)
> at 
> org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:131)
> at 
> org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:101)
> at 
> org.apache.phoenix.filter.SingleCQKeyValueComparisonFilter.parseFrom(SingleCQKeyValueComparisonFilter.java:53)
> ... 17 more
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.phoenix.util.DateUtil$ISODateFormatParser
> at 
> org.apache.phoenix.util.DateUtil$ISODateFormatParserFactory.getParser(DateUtil.java:242)
> at 
> org.apache.phoenix.util.

[jira] [Commented] (PHOENIX-1706) Create skeleton for parsing DDL

2015-04-21 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506417#comment-14506417
 ] 

Julian Hyde commented on PHOENIX-1706:
--

I have a basic implementation in 
https://github.com/julianhyde/phoenix/tree/1706-ddl-skeleton.

It requires the fix for CALCITE-691 (currently only in 
https://github.com/julianhyde/phoenix/tree/1706-ddl-skeleton) and hence 
calcite-1.3.0-incubating-SNAPSHOT.

The modified parser can parse a COMMIT statement to create a SqlCommit parse 
tree node (a new class in package org.apache.phoenix.calcite.parse). If you 
don't want a COMMIT, feel free to adapt the process for other kinds of DDL.

It currently throws an UnsupportedOperationException during validation, and 
there is no way to execute. We should discuss how to extend the validator (or 
create a separate DDL validator) and how to execute.
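
For anyone following along, a rough sketch of what a minimal parse-tree node in 
org.apache.phoenix.calcite.parse could look like, assuming Calcite's usual 
SqlCall conventions. The class shape below is illustrative only and is not the 
code in the branch.

{code}
package org.apache.phoenix.calcite.parse;

import java.util.Collections;
import java.util.List;

import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.SqlOperator;
import org.apache.calcite.sql.SqlSpecialOperator;
import org.apache.calcite.sql.parser.SqlParserPos;

/** Illustrative COMMIT parse-tree node; not the actual class in the branch. */
public class SqlCommit extends SqlCall {
    private static final SqlOperator OPERATOR =
            new SqlSpecialOperator("COMMIT", SqlKind.OTHER);

    public SqlCommit(SqlParserPos pos) {
        super(pos);
    }

    @Override
    public SqlOperator getOperator() {
        return OPERATOR;
    }

    @Override
    public List<SqlNode> getOperandList() {
        return Collections.emptyList(); // COMMIT takes no operands
    }
}
{code}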

> Create skeleton for parsing DDL
> ---
>
> Key: PHOENIX-1706
> URL: https://issues.apache.org/jira/browse/PHOENIX-1706
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Julian Hyde
>
> Phoenix would like to leverage the Calcite parser, so would like to have the 
> ability to parse the following DDL statements. The current work for this is 
> occurring in the calcite branch.
> CREATE TABLE: http://phoenix.apache.org/language/index.html#create_table
> CREATE VIEW: http://phoenix.apache.org/language/index.html#create_view
> CREATE INDEX: http://phoenix.apache.org/language/index.html#create_index
> CREATE SEQUENCE: http://phoenix.apache.org/language/index.html#create_sequence
> ALTER TABLE/VIEW: http://phoenix.apache.org/language/index.html#alter
> ALTER INDEX: http://phoenix.apache.org/language/index.html#alter_index
> DROP TABLE: http://phoenix.apache.org/language/index.html#drop_table
> DROP VIEW: http://phoenix.apache.org/language/index.html#drop_view
> DROP INDEX: http://phoenix.apache.org/language/index.html#drop_index
> DROP SEQUENCE: http://phoenix.apache.org/language/index.html#drop_sequence
> UPDATE STATISTICS: 
> http://phoenix.apache.org/language/index.html#update_statistics
> TRACE ON/OFF [WITH SAMPLING ] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-21 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506420#comment-14506420
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi All,
Thanks Nick and Stack for sharing your valuable knowledge. It helped me view the 
problem from different directions. Yes, a web-based tool will be fine, so I plan 
to create a lightweight web UI with few dependencies for exposing trace info. In 
this task I will be communicating with the HTrace and Zipkin dev communities to 
get their ideas on making a better product. Influencing the HTrace UI 
implementation is also a great idea. I will talk with the Zipkin devs to get a 
good understanding of their code base and features so that I can include some of 
them in my product.

The advanced search mock-up is a good place to start the project, and I'll begin 
with its implementation.

Best Regards.

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
> Attachments: MockUp1-TimeSlider.png, MockUp2-AdvanceSearch.png, 
> MockUp3-PatternDetector.png, MockUp4-FlameGraph.png
>
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506474#comment-14506474
 ] 

Samarth Jain commented on PHOENIX-1682:
---

This could potentially impact the query-more implementation we have in SFDC. 
[~elilevine], could you verify whether the tenant-specific view name is already 
normalized? If yes, then things should work as is.

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
> Fix For: 5.0.0, 4.4.0, 4.3.2
>
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1899) Performance regression for non-aggregate, unordered queries returning 0 or few records

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506482#comment-14506482
 ] 

Hudson commented on PHOENIX-1899:
-

FAILURE: Integrated in Phoenix-master #704 (See 
[https://builds.apache.org/job/Phoenix-master/704/])
PHOENIX-1899 Performance regression for non-aggregate, unordered queries 
returning 0 or few records (samarth.jain: rev 
f54abb55c71f18c1e665e1a3fa6e3330c5619e2e)
* phoenix-core/src/main/java/org/apache/phoenix/iterate/ParallelIterators.java


> Performance regression for non-aggregate, unordered queries returning 0 or 
> few records
> --
>
> Key: PHOENIX-1899
> URL: https://issues.apache.org/jira/browse/PHOENIX-1899
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-1899.patch, PHOENIX-1899_v2.patch
>
>
> Details on here:
> Apache Phoenix Performance
> Suite: standard
> Comparison: 
> v4.1.0,0.98.7-hadoop2;v4.2.2,0.98.7-hadoop2;v4.3.0,0.98.7-hadoop2;4.x-HBase-0.98,0.98.7-hadoop2
> Results: http://phoenix-bin.github.io/client/performance/latest.htm
> History: http://phoenix-bin.github.io/client/performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-900) Partial results for mutations

2015-04-21 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506486#comment-14506486
 ] 

Samarth Jain commented on PHOENIX-900:
--

[~elilevine] - looks like the test you added is failing. Can you please check? 

https://builds.apache.org/job/Phoenix-master/703/console

Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.799 sec <<< 
FAILURE! - in org.apache.phoenix.execute.PartialCommitIT
testDeleteFailure(org.apache.phoenix.execute.PartialCommitIT)  Time elapsed: 
0.015 sec  <<< FAILURE!
java.lang.AssertionError: Expected at least one statement in the list to fail
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.phoenix.execute.PartialCommitIT.testPartialCommit(PartialCommitIT.java:228)
at 
org.apache.phoenix.execute.PartialCommitIT.testDeleteFailure(PartialCommitIT.java:177)


> Partial results for mutations
> -
>
> Key: PHOENIX-900
> URL: https://issues.apache.org/jira/browse/PHOENIX-900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Eli Levine
>Assignee: Eli Levine
> Fix For: 5.0.0, 4.4.0
>
> Attachments: PHOENIX-900.patch
>
>
> HBase provides a way to retrieve partial results of a batch operation: 
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch%28java.util.List,%20java.lang.Object[]%29
> Chatted with James about this offline:
> Yes, this could be included in the CommitException we throw 
> (MutationState:412). We already include the batches that have been 
> successfully committed to the HBase server in this exception. Would you be up 
> for adding this additional information? You'd want to surface this in a 
> Phoenix-y way in a method on CommitException, something like this: ResultSet 
> getPartialCommits(). You can easily create an in memory ResultSet using 
> MaterializedResultIterator plus the PhoenixResultSet constructor that accepts 
> this (just create a new empty PhoenixStatement with the PhoenixConnection for 
> the other arg).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506497#comment-14506497
 ] 

James Taylor commented on PHOENIX-1682:
---

The view name would have already been normalized (at least by Phoenix), but 
[~elilevine] can confirm. Do you append the ORG_ID as in that test, 
[~samarthjain] and if so would it ever have lower case characters?

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
> Fix For: 5.0.0, 4.4.0, 4.3.2
>
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1682) PhoenixRuntime.getTable() does not work with case-sensitive table names

2015-04-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506497#comment-14506497
 ] 

James Taylor edited comment on PHOENIX-1682 at 4/22/15 6:12 AM:


The view name would have already been normalized (at least by Phoenix), but 
[~elilevine] can confirm. Do you append the ORG_ID as in that test, 
[~samarthjain] and if so would it ever have lower case characters? If need be, 
we can always normalize in the call in Pliny.


was (Author: jamestaylor):
The view name would have already been normalized (at least by Phoenix), but 
[~elilevine] can confirm. Do you append the ORG_ID as in that test, 
[~samarthjain] and if so would it ever have lower case characters?

> PhoenixRuntime.getTable() does not work with case-sensitive table names
> ---
>
> Key: PHOENIX-1682
> URL: https://issues.apache.org/jira/browse/PHOENIX-1682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Eli Levine
>Assignee: Ivan Weiss
>  Labels: Newbie
> Fix For: 5.0.0, 4.4.0, 4.3.2
>
>
> PhoenixRuntime.getTable(conn, name) assumes _name_ is a single component 
> because it calls SchemaUtil.normalizeIdentifier(name) on the whole thing, 
> without breaking up _name_ into table name and schema name components. In 
> cases where a table is case sensitive (created with _schemaName."tableName"_) 
> this will result in getTable not finding the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1906) With large number of guideposts, queries that iterate over a small range gets starved when running concurrently with larger queries

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506358#comment-14506358
 ] 

ramkrishna.s.vasudevan edited comment on PHOENIX-1906 at 4/22/15 6:24 AM:
--

[~giacomotaylor]
I understand the real case here.  It is probably better to highlight this in 
the HBase community through that JIRA. Let me do that.  I have updated a patch 
there in HBASE-12790 after testing in a real cluster environment.  Some 
interesting bug was found and fixed.
As I said the only problem that I face is to write a real time test case, which 
I am figuring out a way probably that would work.


was (Author: ram_krish):
[~giacomotaylor]
I understand the real case here.  It is probably better to highlight this in 
the HBase community through that JIRA. Let me do that.  I have updated a patch 
there in HBASE-12790 after testing in a real cluster environment.  Some 
interesting bug was found and fixed.
As I said the only problem that I face is to write a real time test case, which 
I am figuring out a way probably that would worm.

> With large number of guideposts, queries that iterate over a small range gets 
> starved when running concurrently with larger queries
> ---
>
> Key: PHOENIX-1906
> URL: https://issues.apache.org/jira/browse/PHOENIX-1906
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>
> Consider the scenario with a single region server. Table has 500 guide posts 
> (data is large enough that it won't fit either into HBase block or OS page 
> cache so it gets blocked on disk I/O during scans) and running the following 
> 2 queries concurrently with and without stats enabled:
> {code}select count(*) from table{code}
> {code}select * from table limit 10{code}
> With stats *disabled*, the average times for these two queries are 100 sec and 
> 100 ms respectively. However, with stats enabled, the long-running count 
> aggregate query time drops to 8 seconds, but the limit query time increases to 
> 3 seconds. Degradation in the limit query time is even more evident when the 
> concurrency level is further increased.
> [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)