Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Gabriel Reid
Congrats and welcome Thomas!

- Gabriel

On Mon, Feb 9, 2015 at 10:35 PM, James Taylor  wrote:
> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> Thomas D'Silva has been added as a committer to the Apache Phoenix
> project. He's been a steady contributor over the last nine months,
> most recently adding support for functional indexes[1] which will
> allow indexes to be used in all kinds of new, interesting scenarios.
>
> Great job, Thomas. Looking forward to many more contributions!
>
> Regards,
> James
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-514


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-02-09 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313461#comment-14313461
 ] 

Maryann Xue commented on PHOENIX-1580:
--

Agree with you on the idea of treating all statements the same way, [~jamestaylor]. 
But I think the logic should be put in QueryCompiler.compile(), and maybe 
compileUnionQuery() can call QueryCompiler.compileSubquery() for each 
statement and wrap the inner plans with a UnionPlan. That way, we can handle all 
kinds of statements (including joins and derived tables) without caring much about 
what types they actually are.
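
A rough, purely illustrative sketch of that delegation is below; every type and 
method name in it is a stand-in for whatever the real compiler API ends up looking 
like, not the current Phoenix code.

{code}
// Illustrative stand-ins only -- not the actual Phoenix compiler API.
interface SelectStatement {}
interface QueryPlan {}

class UnionPlan implements QueryPlan {
    private final java.util.List<QueryPlan> subPlans;
    UnionPlan(java.util.List<QueryPlan> subPlans) { this.subPlans = subPlans; }
}

class QueryCompilerSketch {
    // Stand-in for the existing per-statement compile path, which already
    // knows how to handle joins, derived tables, etc.
    QueryPlan compileSubquery(SelectStatement select) {
        return new QueryPlan() {};
    }

    // Proposed shape: compile each SELECT of the UNION ALL independently,
    // then wrap the inner plans so the rest of the compiler sees one plan.
    QueryPlan compileUnionQuery(java.util.List<SelectStatement> selects) {
        java.util.List<QueryPlan> inner = new java.util.ArrayList<QueryPlan>();
        for (SelectStatement s : selects) {
            inner.add(compileSubquery(s));
        }
        return new UnionPlan(inner);
    }
}
{code}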

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1646) Views and functional index expressions may lose information when stringified

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313450#comment-14313450
 ] 

Hudson commented on PHOENIX-1646:
-

FAILURE: Integrated in Phoenix-master #585 (See 
[https://builds.apache.org/job/Phoenix-master/585/])
PHOENIX-1646 Views and functional index expressions may lose information when 
stringified (jtaylor: rev abeaa74ad35e145fcae40f239437e1b5964bcd72)
* phoenix-core/src/main/java/org/apache/phoenix/parse/NamedTableNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ModulusParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/OuterJoinParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/CastParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/CompoundParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/BindParseNode.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/ComparisonExpression.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ColumnDef.java
* 
phoenix-core/src/main/java/org/apache/phoenix/parse/RowValueConstructorParseNode.java
* 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/AliasedNode.java
* phoenix-core/src/test/java/org/apache/phoenix/parse/QueryParserTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/parse/DistinctCountParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/AndParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/util/IndexUtil.java
* 
phoenix-core/src/main/java/org/apache/phoenix/parse/ArrayAllComparisonNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/StringConcatParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/DerivedTableNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/CaseParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/MultiplyParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/SelectStatement.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/JoinTableNode.java
* 
phoenix-core/src/main/java/org/apache/phoenix/parse/FamilyWildcardParseNode.java
* 
phoenix-core/src/test/java/org/apache/phoenix/query/BaseConnectionlessQueryTest.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ArrayElemRefNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/AddParseNode.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ConcreteTableNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/IsNullParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ArrayConstructorNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/DivideParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/InListParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/WildcardParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/OrderByNode.java
* phoenix-core/src/main/java/org/apache/phoenix/util/StringUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ColumnParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/BetweenParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/util/QueryUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/LikeParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/LiteralParseNode.java
* phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ArithmeticParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/NotParseNode.java
* 
phoenix-core/src/main/java/org/apache/phoenix/parse/AggregateFunctionWithinGroupParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/SubtractParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/NamedParseNode.java
* phoenix-core/src/test/java/org/apache/phoenix/schema/types/PDataTypeTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/parse/ArrayAnyComparisonNode.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDate.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/LimitNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/NamedNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/SubqueryParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/TableNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/InParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/TableName.java
* 
phoenix-core/src/ma

[jira] [Commented] (PHOENIX-688) Add to_time and to_timestamp built-in functions

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313449#comment-14313449
 ] 

Hudson commented on PHOENIX-688:


FAILURE: Integrated in Phoenix-master #585 (See 
[https://builds.apache.org/job/Phoenix-master/585/])
PHOENIX-688 Add to_time and to_timestamp built-in functions (jtaylor: rev 
11a76b297fad46cd7f51019810ba4d1a7b51b418)
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTime.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ToTimeParseNode.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ToDateParseNode.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertValuesIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ToTimestampFunction.java
* phoenix-core/src/main/java/org/apache/phoenix/util/csv/CsvUpsertExecutor.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ToDateFunctionIT.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/StatementContext.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTimestamp.java
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
* phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ProductMetricsIT.java
* 
phoenix-core/src/test/java/org/apache/phoenix/expression/SortOrderExpressionTest.java
* phoenix-core/src/test/java/org/apache/phoenix/util/DateUtilTest.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDate.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/VariableLengthPKIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/TruncateFunctionIT.java
* phoenix-core/src/main/java/org/apache/phoenix/parse/ToTimestampParseNode.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ToDateFunction.java
* phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
* phoenix-core/src/main/java/org/apache/phoenix/util/DateUtil.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ToTimeFunction.java
* phoenix-core/src/it/java/org/apache/phoenix/mapreduce/CsvBulkLoadToolIT.java


> Add to_time and to_timestamp built-in functions
> ---
>
> Key: PHOENIX-688
> URL: https://issues.apache.org/jira/browse/PHOENIX-688
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-688.patch, PHOENIX-688_v2.patch, 
> PHOENIX-688_v3.patch
>
>
> We already have a to_date function implemented by ToDateFunction, so adding a 
> ToTimeFunction could be done by just deriving the class from ToDateFunction 
> and changing the getDataType() to be PDataType.TIME instead.
> For a general overview on adding a new built-in function, see the phoenix 
> blog 
> [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html)
> The to_timestamp function would be similar as well, but in this case we'd 
> want to register a new ToTimestampParseNode (very similar to 
> ToDateParseNode), that uses the DateUtil.getTimestampParser(format) to create 
> the timestamp instance. This class would then be defined in the 
> ToTimestampFunction as the nodeClass attribute (which would cause it to be 
> used to construct a ToTimestampFunction at compile time).
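
For reference, a minimal client-side sketch of exercising the new functions over 
JDBC; the connection URL, table T, and its VARCHAR column TS are assumptions made 
up for illustration.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ToTimeExample {
    public static void main(String[] args) throws SQLException {
        // Assumes a Phoenix cluster reachable via ZooKeeper on localhost and a
        // hypothetical table T(K BIGINT PRIMARY KEY, TS VARCHAR).
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TO_TIME(TS), TO_TIMESTAMP(TS) FROM T LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getTime(1) + " / " + rs.getTimestamp(2));
            }
        }
    }
}
{code}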



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-653) Support ANSI-standard date literals from SQL 2003

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313451#comment-14313451
 ] 

Hudson commented on PHOENIX-653:


FAILURE: Integrated in Phoenix-master #585 (See 
[https://builds.apache.org/job/Phoenix-master/585/])
PHOENIX-653 Support ANSI-standard date literals from SQL 2003 (jtaylor: rev 
2d5913b80349179da5aa18a1abbb56c230ee0542)
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PChar.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDate.java
* phoenix-core/src/test/java/org/apache/phoenix/query/QueryPlanTest.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTime.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PVarbinary.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTime.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PTimestamp.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PArrayDataType.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDataType.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PBinary.java
* 
phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedTimestamp.java
* 
phoenix-core/src/test/java/org/apache/phoenix/compile/StatementHintsCompilationTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayConstructorExpression.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PVarchar.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/types/PUnsignedDate.java
* phoenix-core/src/test/java/org/apache/phoenix/parse/QueryParserTest.java
* phoenix-core/src/main/antlr3/PhoenixSQL.g
* phoenix-core/src/main/java/org/apache/phoenix/parse/ParseNodeFactory.java


> Support ANSI-standard date literals from SQL 2003
> -
>
> Key: PHOENIX-653
> URL: https://issues.apache.org/jira/browse/PHOENIX-653
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-653.patch
>
>
> Support date literals defined as described here: 
> https://github.com/forcedotcom/phoenix/issues/512#issuecomment-27802262



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1646) Views and functional index expressions may lose information when stringified

2015-02-09 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313395#comment-14313395
 ] 

Thomas D'Silva commented on PHOENIX-1646:
-

+1 

I think the patch looks good.

Minor typo in NamedParseNode.java:

{code}
+protected NamedNode getNameedNode() {
+return namedNode;
+}
+
{code}

> Views and functional index expressions may lose information when stringified
> 
>
> Key: PHOENIX-1646
> URL: https://issues.apache.org/jira/browse/PHOENIX-1646
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-1646-wip.patch, PHOENIX-1646.patch, 
> PHOENIX-1646_v2.patch
>
>
> We currently produce a string from an Expression to store in the system 
> catalog for views and functional indexes. However there are a number of 
> constructs that won't roundtrip correctly, mainly due to the way expression 
> trees get collapsed during compilation. The easiest way to fix this is to go 
> from the ParseNode to a string instead and fully resolve column names in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Native JSON Type Issue problem?

2015-02-09 Thread James Taylor
Hi,
Glad to hear you're interested in working on this. I agree that what
we'll need is native JSON support with a specific type and binary
format, along the lines of what Postgres now has with JSONB[1]. It's
unclear to me if this needs to ripple down to the HBase level with a
custom block encoding as well.

The best we can do today is store JSON in its textual form and add
built-in functions to access it, plus functional indexes to make it
perform. This has a lot of limitations, though, so a native JSON type
is really what we need.

Thanks,
James

[1] http://www.craigkerstiens.com/2014/03/24/Postgres-9.4-Looking-up/
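
As a concrete sketch of the textual-JSON workaround mentioned above: the table, 
column, and index names below are made up, and it assumes 4.3-style functional 
indexes plus the built-in REGEXP_SUBSTR function.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class TextualJsonSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Store each JSON document as plain text.
            stmt.execute("CREATE TABLE IF NOT EXISTS EVENTS "
                    + "(ID BIGINT NOT NULL PRIMARY KEY, DOC VARCHAR)");
            // Functional index over an extracted field so predicates on that
            // field can be served from the index instead of a full scan.
            stmt.execute("CREATE INDEX IF NOT EXISTS EVENTS_TYPE_IDX ON EVENTS "
                    + "(REGEXP_SUBSTR(DOC, '\"type\":\"[^\"]*\"'))");
        }
    }
}
{code}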

On Sun, Feb 8, 2015 at 11:12 PM, Yang Andy  wrote:
>
> Hello,
>
> I am interested in adding a native JSON type to Phoenix,
>
> like http://www.postgresql.org/docs/9.4/static/datatype-json.html .
>
> Are there any related issues currently existing?
>
>
>
>
> I found some issues like
>
> https://issues.apache.org/jira/browse/PHOENIX-628
>
> but the discussion about that issue is somewhat different from a native JSON 
> type.
>
> It seems to treat data as JSON but not actually store it as a JSON-typed value.
>
> And that issue seems to have been inactive for a long time.
>
>
>
>
> I appreciate your help!


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-02-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313366#comment-14313366
 ] 

James Taylor commented on PHOENIX-1580:
---

Thanks for the patch, [~ayingshu]. I think it'll be easier to model the union 
chained off of SelectStatement rather than treating these as a set of 
SelectStatements. So you'd add a SelectStatement union member variable on 
SelectStatement in this case. Otherwise, things like bind variables will be 
tricky to get right. I don't think any changes to PhoenixResultSet or 
PhoenixStatement should be necessary. Instead, in 
QueryCompiler.compileSingleQuery(), you can detect if the SelectStatement is a 
union and call a new QueryCompiler.compileUnionQuery() with a corresponding new 
UnionPlan.
- QueryCompiler.compileUnionQuery() would compile each individual 
SelectStatement, giving you back a QueryPlan for each one.
- It's here that you'd compare the number and types of the projected columns (from 
the RowProjector of each QueryPlan).
- The per-plan iterators would be combined together in a ConcatResultIterator, 
which becomes the top-level iterator returned by QueryCompiler.compileUnionQuery().
- Any particular reason you're not allowing LIMIT or ORDER BY? I think those 
should be allowed.
- You shouldn't need to deal with creating Callables and threading in general.
- Joins and derived queries, etc. will be interesting. [~maryannxue] can likely 
give you some advice here.
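
For context, the end-user shape being aimed at might look like this over JDBC once 
the feature lands (UNION ALL did not work yet at the time of this thread; the 
connection URL, the ordering column K, and the contents of T1/T2 are assumptions).

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UnionAllSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // ORDER BY / LIMIT applied over the concatenated result, per the
             // point above that they should be allowed.
             ResultSet rs = stmt.executeQuery(
                     "SELECT * FROM T1 UNION ALL SELECT * FROM T2 "
                     + "ORDER BY K LIMIT 100")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
{code}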

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Simulating HBase write failures in Phoenix tests

2015-02-09 Thread Andrew Purtell
Some unit tests in HBase show how you can install test coprocessors that do
various broken things. If you install the test coprocessor so that it sorts below
the Phoenix coprocessors in priority, then at runtime the coprocessor framework
will call the Phoenix coprocessor code first; the Phoenix code will do its work
and hand control back to HBase by returning; the framework will then call the
test coprocessor, which will simulate the failure. I think that is the ordering
you want.
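
A minimal sketch of that kind of test-only observer, assuming the HBase 0.98-era 
coprocessor API; the class name and the static failure switch are made up.

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

/** Test-only observer that fails region writes once the test flips the switch. */
public class FailingWritesObserver extends BaseRegionObserver {
    // Left off during cluster/table setup, flipped on by the test.
    public static volatile boolean failWrites = false;

    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                       Put put, WALEdit edit, Durability durability) throws IOException {
        if (failWrites) {
            throw new DoNotRetryIOException("Simulated write failure for testing");
        }
    }
}
{code}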


On Mon, Feb 9, 2015 at 11:01 AM, Eli Levine  wrote:

> Greetings Phoenix devs,
>
> I'm working on https://issues.apache.org/jira/browse/PHOENIX-900 (Partial
> results for mutations). In order to test this functionality properly, I
> need to write one or more tests that simulate write failures in HBase.
>
> I think this will involve having a test deploy a custom test-only
> coprocessor that will cause some predefined writes to fail, which the test
> will verify. Does that sound like the right approach? Any examples of
> similar tests in Phoenix or anywhere else in HBase-land?
>
> Thanks,
>
> Eli
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Andrew Purtell
Congratulations Thomas!


On Mon, Feb 9, 2015 at 1:35 PM, James Taylor  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> Thomas D'Silva has been added as a committer to the Apache Phoenix
> project. He's been a steady contributor over the last nine months,
> most recently adding support for functional indexes[1] which will
> allow indexes to be used in all kinds of new, interesting scenarios.
>
> Great job, Thomas. Looking forward to many more contributions!
>
> Regards,
> James
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-514
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


[jira] [Updated] (PHOENIX-1646) Views and functional index expressions may lose information when stringified

2015-02-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1646:
--
Attachment: PHOENIX-1646_v2.patch

Slight tweak to the original that fixes an index test failure caused by using the 
wrong resolver during statement rewrite.

> Views and functional index expressions may lose information when stringified
> 
>
> Key: PHOENIX-1646
> URL: https://issues.apache.org/jira/browse/PHOENIX-1646
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-1646-wip.patch, PHOENIX-1646.patch, 
> PHOENIX-1646_v2.patch
>
>
> We currently produce a string from an Expression to store in the system 
> catalog for views and functional indexes. However there are a number of 
> constructs that won't roundtrip correctly, mainly due to the way expression 
> trees get collapsed during compilation. The easiest way to fix this is to go 
> from the ParseNode to a string instead and fully resolve column names in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1648) Extra scan being issued while doing SELECT COUNT(*) queries

2015-02-09 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-1648:
-

 Summary: Extra scan being issued while doing SELECT COUNT(*) 
queries
 Key: PHOENIX-1648
 URL: https://issues.apache.org/jira/browse/PHOENIX-1648
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.3
Reporter: Samarth Jain


With tracing turned on, I am seeing an extra scan being executed every time a 
SELECT COUNT(*) query runs.

CREATE TABLE MY_TABLE (ID INTEGER NOT NULL PRIMARY KEY, VALUE INTEGER) 
SALT_BUCKETS = 16

SELECT COUNT(*) FROM MY_TABLE

The trace table has:

Creating basic query for [CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER 
MY_TABLE, SERVER FILTER BY FIRST KEY ONLY, SERVER AGGREGATE INTO SINGLE 
ROW]
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Parallel scanner for table: MY_TABLE
Creating basic query for [CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN OVER 
SYSTEM.CATALOG [null,null,'MY_TABLE',not null],SERVER FILTER BY 
COLUMN_FAMILY IS NULL]
Parallel scanner for table: SYSTEM.CATALOG


While the 16 scanners being created for MY_TABLE are expected, the extra scanner 
for SYSTEM.CATALOG isn't. This is happening consistently, so it likely isn't 
caused by cache expiration.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-688) Add to_time and to_timestamp built-in functions

2015-02-09 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313311#comment-14313311
 ] 

Thomas D'Silva commented on PHOENIX-688:


+1. This patch also fixes a backwards-compatibility issue between a 4.3 server and a 4.2.1 client.

> Add to_time and to_timestamp built-in functions
> ---
>
> Key: PHOENIX-688
> URL: https://issues.apache.org/jira/browse/PHOENIX-688
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 3.0-Release
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-688.patch, PHOENIX-688_v2.patch, 
> PHOENIX-688_v3.patch
>
>
> We already have a to_date function implemented by ToDateFunction, so adding a 
> ToTimeFunction could be done by just deriving the class from ToDateFunction 
> and changing the getDataType() to be PDataType.TIME instead.
> For a general overview on adding a new built-in function, see the phoenix 
> blog 
> [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html)
> The to_timestamp function would be similar as well, but in this case we'd 
> want to register a new ToTimestampParseNode (very similar to 
> ToDateParseNode), that uses the DateUtil.getTimestampParser(format) to create 
> the timestamp instance. This class would then be defined in the 
> ToTimestampFunction as the nodeClass attribute (which would cause it to be 
> used to construct a ToTimestampFunction at compile time).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1646) Views and functional index expressions may lose information when stringified

2015-02-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1646:
--
Attachment: PHOENIX-1646.patch

[~tdsilva] - would you mind giving this a quick review? It looks bigger than it 
is. Basically, instead of going through Expression to get the string, we go 
through ParseNode (which necessitated adding the toSQL method everywhere). The 
reason is that otherwise we have the potential for losing information - namely 
stuff that's optimized into constants during compilation. Although it'd be 
possible to change that, it'd be more difficult and riskier than this change. 
We pass the ColumnResolver through so that we can fully qualify column 
references. We also support a null ColumnResolver if this doesn't matter, 
but for views and functional indexes we want to fully qualify column references 
(in case they become ambiguous later).

Now, functionally, you can reference date constants in functional index expressions 
and views, and current_date() in views. I also added tests in QueryParserTest, 
WhereOptimizerTest, and WhereCompilerTest that assert being able to go from 
ParseNode -> String -> ParseNode without losing any information.
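
Roughly, the roundtrip those tests assert could be sketched as below; treat the 
parser entry points and toString() behavior here as assumptions about the API 
rather than exact signatures.

{code}
// Hypothetical sketch of the roundtrip assertion; the APIs below are assumptions.
String sql = "SELECT k FROM t WHERE TO_DATE('2015-02-09') > d AND v * 2 > 10";
SelectStatement parsed = new SQLParser(sql).parseQuery();
// Stringify from the ParseNode tree (not from the compiled Expression)...
String stringified = parsed.toString();
// ...and check that re-parsing the string yields an equivalent statement.
assertEquals(stringified, new SQLParser(stringified).parseQuery().toString());
{code}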

> Views and functional index expressions may lose information when stringified
> 
>
> Key: PHOENIX-1646
> URL: https://issues.apache.org/jira/browse/PHOENIX-1646
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-1646-wip.patch, PHOENIX-1646.patch
>
>
> We currently produce a string from an Expression to store in the system 
> catalog for views and functional indexes. However there are a number of 
> constructs that won't roundtrip correctly, mainly due to the way expression 
> trees get collapsed during compilation. The easiest way to fix this is to go 
> from the ParseNode to a string instead and fully resolve column names in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Rajeshbabu Chintaguntla
Congratulations Thomas!!

On Tue, Feb 10, 2015 at 3:23 AM, Ravi Kiran 
wrote:

> Congrats Thomas!!
>
>
> On Mon, Feb 9, 2015 at 1:41 PM, Nick Dimiduk  wrote:
>
> > Nice work Thomas!
> >
> > On Mon, Feb 9, 2015 at 1:35 PM, James Taylor 
> > wrote:
> >
> > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > > Thomas D'Silva has been added as a committer to the Apache Phoenix
> > > project. He's been a steady contributor over the last nine months,
> > > most recently adding support for functional indexes[1] which will
> > > allow indexes to be used in all kinds of new, interesting scenarios.
> > >
> > > Great job, Thomas. Looking forward to many more contributions!
> > >
> > > Regards,
> > > James
> > >
> > > [1] https://issues.apache.org/jira/browse/PHOENIX-514
> > >
> >
>


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Ravi Kiran
Congrats Thomas!!


On Mon, Feb 9, 2015 at 1:41 PM, Nick Dimiduk  wrote:

> Nice work Thomas!
>
> On Mon, Feb 9, 2015 at 1:35 PM, James Taylor 
> wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > Thomas D'Silva has been added as a committer to the Apache Phoenix
> > project. He's been a steady contributor over the last nine months,
> > most recently adding support for functional indexes[1] which will
> > allow indexes to be used in all kinds of new, interesting scenarios.
> >
> > Great job, Thomas. Looking forward to many more contributions!
> >
> > Regards,
> > James
> >
> > [1] https://issues.apache.org/jira/browse/PHOENIX-514
> >
>


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Nick Dimiduk
Nice work Thomas!

On Mon, Feb 9, 2015 at 1:35 PM, James Taylor  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> Thomas D'Silva has been added as a committer to the Apache Phoenix
> project. He's been a steady contributor over the last nine months,
> most recently adding support for functional indexes[1] which will
> allow indexes to be used in all kinds of new, interesting scenarios.
>
> Great job, Thomas. Looking forward to many more contributions!
>
> Regards,
> James
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-514
>


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Jonathan Bruce
Congrats Thomas!

On Mon, Feb 9, 2015 at 1:35 PM, Ted Yu  wrote:

> Congratulations, Thomas !
>
> On Mon, Feb 9, 2015 at 1:35 PM, James Taylor 
> wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > Thomas D'Silva has been added as a committer to the Apache Phoenix
> > project. He's been a steady contributor over the last nine months,
> > most recently adding support for functional indexes[1] which will
> > allow indexes to be used in all kinds of new, interesting scenarios.
> >
> > Great job, Thomas. Looking forward to many more contributions!
> >
> > Regards,
> > James
> >
> > [1] https://issues.apache.org/jira/browse/PHOENIX-514
> >
>



-- 
Jonathan Bruce | Director Product Management | Platform Big Data
C: +1-415-806-4978 | T: @jonbruce


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Jan Fernando
Congratulations Thomas

On Mon, Feb 9, 2015 at 1:35 PM, Ted Yu  wrote:

> Congratulations, Thomas !
>
> On Mon, Feb 9, 2015 at 1:35 PM, James Taylor 
> wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > Thomas D'Silva has been added as a committer to the Apache Phoenix
> > project. He's been a steady contributor over the last nine months,
> > most recently adding support for functional indexes[1] which will
> > allow indexes to be used in all kinds of new, interesting scenarios.
> >
> > Great job, Thomas. Looking forward to many more contributions!
> >
> > Regards,
> > James
> >
> > [1] https://issues.apache.org/jira/browse/PHOENIX-514
> >
>


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Mujtaba Chohan
Congrats Thomas!

On Mon, Feb 9, 2015 at 1:35 PM, Ted Yu  wrote:

> Congratulations, Thomas !
>
> On Mon, Feb 9, 2015 at 1:35 PM, James Taylor 
> wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > Thomas D'Silva has been added as a committer to the Apache Phoenix
> > project. He's been a steady contributor over the last nine months,
> > most recently adding support for functional indexes[1] which will
> > allow indexes to be used in all kinds of new, interesting scenarios.
> >
> > Great job, Thomas. Looking forward to many more contributions!
> >
> > Regards,
> > James
> >
> > [1] https://issues.apache.org/jira/browse/PHOENIX-514
> >
>


Re: [ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread Ted Yu
Congratulations, Thomas !

On Mon, Feb 9, 2015 at 1:35 PM, James Taylor  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> Thomas D'Silva has been added as a committer to the Apache Phoenix
> project. He's been a steady contributor over the last nine months,
> most recently adding support for functional indexes[1] which will
> allow indexes to be used in all kinds of new, interesting scenarios.
>
> Great job, Thomas. Looking forward to many more contributions!
>
> Regards,
> James
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-514
>


[ANNOUNCE] Thomas D'Silva added as Apache Phoenix committer

2015-02-09 Thread James Taylor
On behalf of the Apache Phoenix PMC, I'm pleased to announce that
Thomas D'Silva has been added as a committer to the Apache Phoenix
project. He's been a steady contributor over the last nine months,
most recently adding support for functional indexes[1] which will
allow indexes to be used in all kinds of new, interesting scenarios.

Great job, Thomas. Looking forward to many more contributions!

Regards,
James

[1] https://issues.apache.org/jira/browse/PHOENIX-514


Re: Simulating HBase write failures in Phoenix tests

2015-02-09 Thread Jesse Yates
As I mentioned above, not off the top of my head :-/ but I was just using
simple region observers - all you need to do is add them to the list in the
config before cluster startup and you should be good to go.
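
For instance, a hedged sketch of registering such an observer before the 
mini-cluster starts; it assumes the 0.98-era HBaseTestingUtility and a hypothetical 
FailingWritesObserver test coprocessor class.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

public class FailingWritesClusterSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Register the test-only observer on every region before startup;
        // FailingWritesObserver is a made-up test class.
        conf.setStrings(CoprocessorHost.REGION_COPROCESSOR_CONF_KEY,
                FailingWritesObserver.class.getName());
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        util.startMiniCluster();
        try {
            // ... run the Phoenix test that expects write failures ...
        } finally {
            util.shutdownMiniCluster();
        }
    }
}
{code}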

On Mon, Feb 9, 2015, 1:03 PM Eli Levine  wrote:

> Thanks, Jesse. Very useful. Any pointers to specific tests that spin up
> Coprocessors dynamically in Phoenix?
>
> On Mon, Feb 9, 2015 at 11:51 AM, Jesse Yates 
> wrote:
>
> > Yeah, I've done that a handful of times in HBase-land (not sure where
> > though). It gets tricky with phoenix using all the BaseTest stuff because
> > it does a lot of setup things that could conflict with what you are
> trying
> > to do.*
> >
> > What I was frequently doing was using a static "latch" for turning on/off
> > errors since there are a lot of reads/writes that happen on startup that
> > you don't want to interfere with. Then you trip the latch when the test
> > starts (avoiding any errors setting up .META. or -ROOT-) and you are good
> > to go.
> >
> > However, in HBase-land we already run mini-cluster things in separate
> JVMs,
> > so the static use is just easier; in Phoenix this may not be as feasible.
> > The
> > alternative is to get the coprocessors from the coprocessor environment
> of
> > the regionserver in the test and pull out the latch from there.
> >
> > -J
> >
> > * This has been an issue when working on an internal project using
> Phoenix-
> > we wanted to use a bunch of the BaseTest methods, but not all of them,
> and
> > extend them a little more - and it was notably uncomfortable to mess
> with;
> > we just ended up copying out what we needed. Something to look at in the
> > future
> >
> > On Mon Feb 09 2015 at 11:01:39 AM Eli Levine 
> wrote:
> >
> > > Greetings Phoenix devs,
> > >
> > > I'm working on https://issues.apache.org/jira/browse/PHOENIX-900
> > (Partial
> > > results for mutations). In order to test this functionality properly, I
> > > need to write one or more tests that simulate write failures in HBase.
> > >
> > > I think this will involve having a test deploy a custom test-only
> > > coprocessor that will cause some predefined writes to fail, which the
> > test
> > > will verify. Does that sound like the right approach? Any examples of
> > > similar tests in Phoenix or anywhere else in HBase-land?
> > >
> > > Thanks,
> > >
> > > Eli
> > >
> >
>


Re: Simulating HBase write failures in Phoenix tests

2015-02-09 Thread Eli Levine
Thanks, Jesse. Very useful. Any pointers to specific tests that spin up
Coprocessors dynamically in Phoenix?

On Mon, Feb 9, 2015 at 11:51 AM, Jesse Yates 
wrote:

> Yeah, I've done that a handful of times in HBase-land (not sure where
> though). It gets tricky with phoenix using all the BaseTest stuff because
> it does a lot of setup things that could conflict with what you are trying
> to do.*
>
> What I was frequently doing was using a static "latch" for turning on/off
> errors since there are a lot of reads/writes that happen on startup that
> you don't want to interfere with. Then you trip the latch when the test
> starts (avoiding any errors setting up .META. or -ROOT-) and you are good
> to go.
>
> However, in HBase-land we already run mini-cluster things in separate JVMs,
> so the static use is just easier; in Phoenix this may not be as feasible.
> The
> alternative is to get the coprocessors from the coprocessor environment of
> the regionserver in the test and pull out the latch from there.
>
> -J
>
> * This has been an issue when working on an internal project using Phoenix-
> we wanted to use a bunch of the BaseTest methods, but not all of them, and
> extend them a little more - and it was notably uncomfortable to mess with;
> we just ended up copying out what we needed. Something to look at in the
> future
>
> On Mon Feb 09 2015 at 11:01:39 AM Eli Levine  wrote:
>
> > Greetings Phoenix devs,
> >
> > I'm working on https://issues.apache.org/jira/browse/PHOENIX-900
> (Partial
> > results for mutations). In order to test this functionality properly, I
> > need to write one or more tests that simulate write failures in HBase.
> >
> > I think this will involve having a test deploy a custom test-only
> > coprocessor that will cause some predefined writes to fail, which the
> test
> > will verify. Does that sound like the right approach? Any examples of
> > similar tests in Phoenix or anywhere else in HBase-land?
> >
> > Thanks,
> >
> > Eli
> >
>


Re: Simulating HBase write failures in Phoenix tests

2015-02-09 Thread Jesse Yates
Yeah, I've done that a handful of times in HBase-land (not sure where
though). It gets tricky with phoenix using all the BaseTest stuff because
it does a lot of setup things that could conflict with what you are trying
to do.*

What I was frequently doing was using a static "latch" for turning on/off
errors since there are a lot of reads/writes that happen on startup that
you don't want to interfere with. Then you trip the latch when the test
starts (avoiding any errors setting up .META. or -ROOT-) and you are good
to go.

However, in HBase-land we already run mini-cluster things in separate JVMs,
so the static use is just easier; in Phoenix this may not be as feasible. The
alternative is to get the coprocessors from the coprocessor environment of
the regionserver in the test and pull out the latch from there.

-J

* This has been an issue when working on an internal project using Phoenix-
we wanted to use a bunch of the BaseTest methods, but not all of them, and
extend them a little more - and it was notably uncomfortable to mess with;
we just ended up copying out what we needed. Something to look at in the
future

On Mon Feb 09 2015 at 11:01:39 AM Eli Levine  wrote:

> Greetings Phoenix devs,
>
> I'm working on https://issues.apache.org/jira/browse/PHOENIX-900 (Partial
> results for mutations). In order to test this functionality properly, I
> need to write one or more tests that simulate write failures in HBase.
>
> I think this will involve having a test deploy a custom test-only
> coprocessor that will cause some predefined writes to fail, which the test
> will verify. Does that sound like the right approach? Any examples of
> similar tests in Phoenix or anywhere else in HBase-land?
>
> Thanks,
>
> Eli
>


Simulating HBase write failures in Phoenix tests

2015-02-09 Thread Eli Levine
Greetings Phoenix devs,

I'm working on https://issues.apache.org/jira/browse/PHOENIX-900 (Partial
results for mutations). In order to test this functionality properly, I
need to write one or more tests that simulate write failures in HBase.

I think this will involve having a test deploy a custom test-only
coprocessor that will cause some predefined writes to fail, which the test
will verify. Does that sound like the right approach? Any examples of
similar tests in Phoenix or anywhere else in HBase-land?

Thanks,

Eli


[jira] [Commented] (PHOENIX-1645) Wrong execution plan generated for indexed query which leads to slow performance

2015-02-09 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14312584#comment-14312584
 ] 

Mujtaba Chohan commented on PHOENIX-1645:
-

That looks fine and in line with previous runs.

> Wrong execution plan generated for indexed query which leads to slow 
> performance
> 
>
> Key: PHOENIX-1645
> URL: https://issues.apache.org/jira/browse/PHOENIX-1645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 4.3
>
>
> Query: select /*+ INDEX(INDEXED_TABLE idx1 idx2 idx3 idx4) */ count(core) 
> from INDEXED_TABLE where core < 10 and db < 200
> Optimal explain plan generated in Phoenix v4.2: 1-CHUNK PARALLEL 1-WAY RANGE 
> SCAN OVER IDX4 [*] - [10]
> SERVER FILTER BY TO_LONG(DB) < 200
> SERVER AGGREGATE INTO SINGLE ROW
> *Wrong plan generated in 4.3 that uses skip scan join to base table. 
> Performance of this plan compared to v4.2 is close to 20X slower with 2M rows 
> data*: CLIENT 28-CHUNK PARALLEL 1-WAY FULL SCAN OVER INDEXED_TABLE
> SERVER FILTER BY USAGE.DB < 200
> SERVER AGGREGATE INTO SINGLE ROW
> SKIP-SCAN-JOIN TABLE 0
> CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN OVER IDX1 [*] - [10]
> SERVER FILTER BY FIRST KEY ONLY
> DYNAMIC SERVER FILTER BY ("HOST", "DOMAIN", "FEATURE", "DATE") IN 
> (($22.$24, $22.$25, $22.$26, $22.$27))
>  
> DDL: CREATE TABLE $TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT 
> NULL,FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760;CREATE INDEX idx1 
> ON $TABLE (CORE);CREATE INDEX idx2 ON $TABLE (DB);CREATE INDEX idx3 ON $TABLE 
> (DB,ACTIVE_VISITOR);CREATE INDEX idx4 ON $TABLE 
> (CORE,DB,ACTIVE_VISITOR);CREATE INDEX ids1 ON $TABLE (CORE) 
> SALT_BUCKETS=16;CREATE INDEX ids2 ON $TABLE (DB) SALT_BUCKETS=16;CREATE INDEX 
> ids3 ON $TABLE (DB,ACTIVE_VISITOR) SALT_BUCKETS=16;CREATE INDEX ids4 ON 
> $TABLE (CORE,DB,ACTIVE_VISITOR) SALT_BUCKETS=16;
> Also see perf. run at: 
> http://phoenix-bin.github.io/client/performance/phoenix-20150206042353.htm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1645) Wrong execution plan generated for indexed query which leads to slow performance

2015-02-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14312571#comment-14312571
 ] 

James Taylor commented on PHOENIX-1645:
---

How does perf look without the hint? Just want to make sure there's no perf
regression hiding behind this.


> Wrong execution plan generated for indexed query which leads to slow 
> performance
> 
>
> Key: PHOENIX-1645
> URL: https://issues.apache.org/jira/browse/PHOENIX-1645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 4.3
>
>
> Query: select /*+ INDEX(INDEXED_TABLE idx1 idx2 idx3 idx4) */ count(core) 
> from INDEXED_TABLE where core < 10 and db < 200
> Optimal explain plan generated in Phoenix v4.2: 1-CHUNK PARALLEL 1-WAY RANGE 
> SCAN OVER IDX4 [*] - [10]
> SERVER FILTER BY TO_LONG(DB) < 200
> SERVER AGGREGATE INTO SINGLE ROW
> *Wrong plan generated in 4.3 that uses skip scan join to base table. 
> Performance of this plan compared to v4.2 is close to 20X slower with 2M rows 
> data*: CLIENT 28-CHUNK PARALLEL 1-WAY FULL SCAN OVER INDEXED_TABLE
> SERVER FILTER BY USAGE.DB < 200
> SERVER AGGREGATE INTO SINGLE ROW
> SKIP-SCAN-JOIN TABLE 0
> CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN OVER IDX1 [*] - [10]
> SERVER FILTER BY FIRST KEY ONLY
> DYNAMIC SERVER FILTER BY ("HOST", "DOMAIN", "FEATURE", "DATE") IN 
> (($22.$24, $22.$25, $22.$26, $22.$27))
>  
> DDL: CREATE TABLE $TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT 
> NULL,FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760;CREATE INDEX idx1 
> ON $TABLE (CORE);CREATE INDEX idx2 ON $TABLE (DB);CREATE INDEX idx3 ON $TABLE 
> (DB,ACTIVE_VISITOR);CREATE INDEX idx4 ON $TABLE 
> (CORE,DB,ACTIVE_VISITOR);CREATE INDEX ids1 ON $TABLE (CORE) 
> SALT_BUCKETS=16;CREATE INDEX ids2 ON $TABLE (DB) SALT_BUCKETS=16;CREATE INDEX 
> ids3 ON $TABLE (DB,ACTIVE_VISITOR) SALT_BUCKETS=16;CREATE INDEX ids4 ON 
> $TABLE (CORE,DB,ACTIVE_VISITOR) SALT_BUCKETS=16;
> Also see perf. run at: 
> http://phoenix-bin.github.io/client/performance/phoenix-20150206042353.htm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1645) Wrong execution plan generated for indexed query which leads to slow performance

2015-02-09 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan resolved PHOENIX-1645.
-
   Resolution: Invalid
Fix Version/s: 4.3

[~jamestaylor] Using a hint for just idx4 (/*+ INDEX(INDEXED_TABLE idx4) */) uses idx4 
correctly. So the takeaway is that, since a join back to the base table is now 
possible, the query plan correctly uses idx1, as it is the first index listed in the 
original query's hint; and since the hint is a user override, this is as expected. 
Closing this JIRA as the behavior in 4.3 is as expected.

> Wrong execution plan generated for indexed query which leads to slow 
> performance
> 
>
> Key: PHOENIX-1645
> URL: https://issues.apache.org/jira/browse/PHOENIX-1645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3
>Reporter: Mujtaba Chohan
>Assignee: James Taylor
> Fix For: 4.3
>
>
> Query: select /*+ INDEX(INDEXED_TABLE idx1 idx2 idx3 idx4) */ count(core) 
> from INDEXED_TABLE where core < 10 and db < 200
> Optimal explain plan generated in Phoenix v4.2: 1-CHUNK PARALLEL 1-WAY RANGE 
> SCAN OVER IDX4 [*] - [10]
> SERVER FILTER BY TO_LONG(DB) < 200
> SERVER AGGREGATE INTO SINGLE ROW
> *Wrong plan generated in 4.3 that uses skip scan join to base table. 
> Performance of this plan compared to v4.2 is close to 20X slower with 2M rows 
> data*: CLIENT 28-CHUNK PARALLEL 1-WAY FULL SCAN OVER INDEXED_TABLE
> SERVER FILTER BY USAGE.DB < 200
> SERVER AGGREGATE INTO SINGLE ROW
> SKIP-SCAN-JOIN TABLE 0
> CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN OVER IDX1 [*] - [10]
> SERVER FILTER BY FIRST KEY ONLY
> DYNAMIC SERVER FILTER BY ("HOST", "DOMAIN", "FEATURE", "DATE") IN 
> (($22.$24, $22.$25, $22.$26, $22.$27))
>  
> DDL: CREATE TABLE $TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT 
> NULL,FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760;CREATE INDEX idx1 
> ON $TABLE (CORE);CREATE INDEX idx2 ON $TABLE (DB);CREATE INDEX idx3 ON $TABLE 
> (DB,ACTIVE_VISITOR);CREATE INDEX idx4 ON $TABLE 
> (CORE,DB,ACTIVE_VISITOR);CREATE INDEX ids1 ON $TABLE (CORE) 
> SALT_BUCKETS=16;CREATE INDEX ids2 ON $TABLE (DB) SALT_BUCKETS=16;CREATE INDEX 
> ids3 ON $TABLE (DB,ACTIVE_VISITOR) SALT_BUCKETS=16;CREATE INDEX ids4 ON 
> $TABLE (CORE,DB,ACTIVE_VISITOR) SALT_BUCKETS=16;
> Also see perf. run at: 
> http://phoenix-bin.github.io/client/performance/phoenix-20150206042353.htm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1646) Views and functional index expressions may lose information when stringified

2015-02-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1646:
--
Attachment: PHOENIX-1646-wip.patch

Parking this wip patch here.

> Views and functional index expressions may lose information when stringified
> 
>
> Key: PHOENIX-1646
> URL: https://issues.apache.org/jira/browse/PHOENIX-1646
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-1646-wip.patch
>
>
> We currently produce a string from an Expression to store in the system 
> catalog for views and functional indexes. However there are a number of 
> constructs that won't roundtrip correctly, mainly due to the way expression 
> trees get collapsed during compilation. The easiest way to fix this is to go 
> from the ParseNode to a string instead and fully resolve column names in the 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1647) Fully qualified tablename query support in Phoenix

2015-02-09 Thread suraj misra (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

suraj misra updated PHOENIX-1647:
-
Description: 
I am able to execute queries that use fully qualified table names. For 
example:

UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')

But when I look at the Phoenix driver implementation, I can see that the 
implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
always returns false.

As per the JDBC documentation, this method reports whether a schema name can be 
used in a data manipulation statement. But as you can see in the above example, I 
can execute DML statements with schema names, as well as other kinds of 
statements.

Could someone please let me know if there is any specific reason to keep it as 
false.

  was:
I am able to execute queries that use fully qualified table names. For 
example:

UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')

But when I look at the Phoenix driver implementation, I can see that the 
implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
always returns false.

As per the JDBC documentation, this method reports whether a schema name can be 
used in a data manipulation statement. But as you can see in the above example, I 
can execute DML statements with schema names, as well as other kinds of 
statements.


> Fully qualified tablename query support in Phoenix
> --
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>
> I am able to execute queries that use fully qualified table names. For 
> example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
> always returns false.
> As per the JDBC documentation, this method reports whether a schema name can be 
> used in a data manipulation statement. But as you can see in the above example, I 
> can execute DML statements with schema names, as well as other kinds of 
> statements.
> Could someone please let me know if there is any specific reason to keep it 
> as false.
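
To make the observation concrete, a small JDBC sketch of the mismatch being 
described; the connection URL is made up, and the UPSERT statement is the one from 
the report.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaSupportCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Reported to return false in the 4.1.1 driver...
            System.out.println("supportsSchemasInDataManipulation = "
                    + conn.getMetaData().supportsSchemasInDataManipulation());
            // ...yet schema-qualified DML works.
            stmt.executeUpdate(
                    "UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')");
            conn.commit();
        }
    }
}
{code}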



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)