[VOTE] Release of Apache Phoenix 4.7.0-HBase-1.1 RC0

2016-01-23 Thread James Taylor
Hello Everyone,

This is a call for a vote on Apache Phoenix 4.7.0-HBase-1.1 RC0. This is
the next minor release of Phoenix 4, compatible with Apache HBase 1.1+. The
release includes both a source-only release and a convenience binary
release.

This release has feature parity with our other pending 4.7.0 releases and
includes the following improvements:
- ACID transaction support (beta) [1]
- Statistics improvements [2]
- Performance improvements [3][4][5]
- 150+ bug fixes

The source tarball, including signatures, digests, etc. can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-1.1-rc0/src/

The binary artifacts can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-1.1-rc0/bin/

For a complete list of changes, see:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12333998

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/mujtaba.asc

KEYS file available here:
https://dist.apache.org/repos/dist/release/phoenix/KEYS

The hash and tag to be voted upon:
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=551cc7db93a8a2c3cc9ff15e7cf9425e311ab125
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.7.0-HBase-1.1-rc0

The vote will be open until at least Wed, Jan 27th @ 5pm PST. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team

[1] https://phoenix.apache.org/transactions.html
[2] https://issues.apache.org/jira/browse/PHOENIX-2430
[3] https://issues.apache.org/jira/browse/PHOENIX-1428
[4] https://issues.apache.org/jira/browse/PHOENIX-2377
[5] https://issues.apache.org/jira/browse/PHOENIX-2520


[VOTE] Release of Apache Phoenix 4.7.0-HBase-1.0 RC0

2016-01-23 Thread James Taylor
Hello Everyone,

This is a call for a vote on Apache Phoenix 4.7.0-HBase-1.0 RC0. This is
the next minor release of Phoenix 4, compatible with Apache HBase 1.0+. The
release includes both a source-only release and a convenience binary
release.

This release has feature parity with our other pending 4.7.0 releases and
includes the following improvements:
- ACID transaction support (beta) [1]
- Statistics improvements [2]
- Performance improvements [3][4][5]
- 150+ bug fixes

The source tarball, including signatures, digests, etc. can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-1.0-rc0/src/

The binary artifacts can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-1.0-rc0/bin/

For a complete list of changes, see:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12333998

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/mujtaba.asc

KEYS file available here:
https://dist.apache.org/repos/dist/release/phoenix/KEYS

The hash and tag to be voted upon:
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=76f4507d37bb24cb443b42c1fa2f3c97c1f085da
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.7.0-HBase-1.0-rc0

The vote will be open until at least Wed, Jan 27th @ 5pm PST. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team

[1] https://phoenix.apache.org/transactions.html
[2] https://issues.apache.org/jira/browse/PHOENIX-2430
[3] https://issues.apache.org/jira/browse/PHOENIX-1428
[4] https://issues.apache.org/jira/browse/PHOENIX-2377
[5] https://issues.apache.org/jira/browse/PHOENIX-2520


[VOTE] Release of Apache Phoenix 4.7.0-HBase-0.98 RC0

2016-01-23 Thread James Taylor
Hello Everyone,

This is a call for a vote on Apache Phoenix 4.7.0-HBase-0.98 RC0. This is
the next minor release of Phoenix 4, compatible with Apache HBase 0.98+.
The release includes both a source-only release and a convenience binary
release.

This release has feature parity with our other pending 4.7.0 releases and
includes the following improvements:
- ACID transaction support (beta) [1]
- Statistics improvements [2]
- Performance improvements [3][4][5]
- 150+ bug fixes

The source tarball, including signatures, digests, etc. can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-0.98-rc0/src/

The binary artifacts can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-0.98-rc0/bin/

For a complete list of changes, see:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12333998

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/mujtaba.asc

KEYS file available here:
https://dist.apache.org/repos/dist/release/phoenix/KEYS

The hash and tag to be voted upon:
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=324ec71fa947ef7840b10ca7fde1352ecf91c824
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.7.0-HBase-0.98-rc0

The vote will be open until at least Wed, Jan 27th @ 5pm PST. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team

[1] https://phoenix.apache.org/transactions.html
[2] https://issues.apache.org/jira/browse/PHOENIX-2430
[3] https://issues.apache.org/jira/browse/PHOENIX-1428
[4] https://issues.apache.org/jira/browse/PHOENIX-2377
[5] https://issues.apache.org/jira/browse/PHOENIX-2520


[jira] [Updated] (PHOENIX-2440) Document transactional behavior

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2440:
--
Description: 
We need to add a new top-level page to explain transactions and their behavior 
from the user's point of view:
- How to configure a new/existing table to be transactional (CREATE TABLE and 
ALTER TABLE ... TRANSACTIONAL=true; see the DDL sketch after this description). 
Also that it's currently a one-way street: you cannot make a transactional 
table non-transactional (PHOENIX-2439).
- A transaction is implicitly started when the first statement is run against a 
transactional table.
- Indexes on a transactional table are updated transactionally with the data 
table.
- An exception is raised if a transaction being committed conflicts (at the row 
level) with another committed transaction.
- Queries do not see commits of other transactions until the current 
transaction ends (is rolled back or committed).
- Transactions see their own updates (read-your-own-writes behavior).
- Requires that the transaction manager is started (and how to start it).
- Various configuration options specific to transactions.
- Invalid list: how it can potentially grow and how to manually clear it if 
necessary.
- Unsupported behavior: for example, certain TTL behavior, no SCN.

We also need to adjust existing web pages that currently say we don't support 
full ACID, and promote this new capability.
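
A minimal sketch of the DDL named in the first bullet; the table and column 
names here are illustrative, not from the issue:

{code}
-- Create a new table as transactional
CREATE TABLE my_table (k VARCHAR PRIMARY KEY, v VARCHAR) TRANSACTIONAL=true;

-- Make an existing table transactional; per PHOENIX-2439 this currently
-- cannot be reverted
ALTER TABLE my_existing_table SET TRANSACTIONAL=true;
{code}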



  was:
We need to add a new top-level page to explain transactions and their behavior 
from the user's point of view:
- How to configure a new/existing table to be transactional (CREATE TABLE and 
ALTER TABLE ... TRANSACTIONAL=true). Also that it's currently a one-way street: 
you cannot make a transactional table non-transactional (PHOENIX-2439).
- A transaction is implicitly started when the first statement is run against a 
transactional table.
- Indexes on a transactional table are updated transactionally with the data 
table.
- An exception is raised if a transaction being committed conflicts (at the row 
level) with another committed transaction.
- REPEATABLE_READ: Queries do not see commits of other transactions until the 
current transaction ends (is rolled back or committed).
- Transactions see their own updates (read-your-own-writes behavior).
- Requires that the transaction manager is started (and how to start it).
- Various configuration options specific to transactions.
- Invalid list: how it can potentially grow and how to manually clear it if 
necessary.
- Unsupported behavior: for example, certain TTL behavior, no SCN.

We also need to adjust existing web pages that currently say we don't support 
full ACID, and promote this new capability.




> Document transactional behavior
> ---
>
> Key: PHOENIX-2440
> URL: https://issues.apache.org/jira/browse/PHOENIX-2440
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>
> We need to add a new top-level page to explain transactions and their 
> behavior from the user's point of view:
> - How to configure a new/existing table to be transactional (CREATE TABLE and 
> ALTER TABLE ... TRANSACTIONAL=true). Also that it's currently a one-way 
> street: you cannot make a transactional table non-transactional 
> (PHOENIX-2439).
> - A transaction is implicitly started when the first statement is run against 
> a transactional table.
> - Indexes on a transactional table are updated transactionally with the data 
> table.
> - An exception is raised if a transaction being committed conflicts (at the 
> row level) with another committed transaction.
> - Queries do not see commits of other transactions until the current 
> transaction ends (is rolled back or committed).
> - Transactions see their own updates (read-your-own-writes behavior).
> - Requires that the transaction manager is started (and how to start it).
> - Various configuration options specific to transactions.
> - Invalid list: how it can potentially grow and how to manually clear it if 
> necessary.
> - Unsupported behavior: for example, certain TTL behavior, no SCN.
> We also need to adjust existing web pages that currently say we don't 
> support full ACID, and promote this new capability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2440) Document transactional behavior

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113869#comment-15113869
 ] 

James Taylor commented on PHOENIX-2440:
---

[~tdsilva] - I started the documentation off here: 
https://phoenix.apache.org/transactions.html with the minimum so that users can 
at least try transactions with our RC. It needs to be expanded to include all 
of the above (e.g. I didn't mention conflict detection and other details). 
More examples would probably be good. Maybe a link to what ACID means (on 
Wikipedia)?

> Document transactional behavior
> ---
>
> Key: PHOENIX-2440
> URL: https://issues.apache.org/jira/browse/PHOENIX-2440
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>
> We need to add a new top-level page to explain transactions and their 
> behavior from the user's point of view:
> - How to configure a new/existing table to be transactional (CREATE TABLE and 
> ALTER TABLE ... TRANSACTIONAL=true). Also that it's currently a one-way 
> street: you cannot make a transactional table non-transactional 
> (PHOENIX-2439).
> - A transaction is implicitly started when the first statement is run against 
> a transactional table.
> - Indexes on a transactional table are updated transactionally with the data 
> table.
> - An exception is raised if a transaction being committed conflicts (at the 
> row level) with another committed transaction.
> - Queries do not see commits of other transactions until the current 
> transaction ends (is rolled back or committed).
> - Transactions see their own updates (read-your-own-writes behavior).
> - Requires that the transaction manager is started (and how to start it).
> - Various configuration options specific to transactions.
> - Invalid list: how it can potentially grow and how to manually clear it if 
> necessary.
> - Unsupported behavior: for example, certain TTL behavior, no SCN.
> We also need to adjust existing web pages that currently say we don't 
> support full ACID, and promote this new capability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2156) Support drop of column from table with views

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2156:
--
Fix Version/s: 4.8.0

> Support drop of column from table with views
> 
>
> Key: PHOENIX-2156
> URL: https://issues.apache.org/jira/browse/PHOENIX-2156
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>
> Much like PHOENIX-1504 allows a column to be added to a base view, we should 
> support dropping a column from a table that has views as well. This seems 
> like a simpler problem: you just need to query for all views with a 
> BASE_COLUMN_COUNT < dropped_column_ordinal_pos, decrement the ordinal 
> positions of columns after that, and drop indexes that reference the column 
> being dropped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2156) Support drop of column from table with views

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2156:
--
Assignee: Thomas D'Silva

> Support drop of column from table with views
> 
>
> Key: PHOENIX-2156
> URL: https://issues.apache.org/jira/browse/PHOENIX-2156
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.8.0
>
>
> Much like PHOENIX-1504 allows a column to be added to a base view, we should 
> support dropping a column from a table that has views as well. This seems 
> like a simpler problem: you just need to query for all views with a 
> BASE_COLUMN_COUNT < dropped_column_ordinal_pos, decrement the ordinal 
> positions of columns after that, and drop indexes that reference the column 
> being dropped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2156) Support drop of column from table with views

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113873#comment-15113873
 ] 

James Taylor commented on PHOENIX-2156:
---

I think supporting this would be pretty trivial and it's strange that you can 
add columns to a base table but not remove them. IMHO, we should target this 
for the next release to round out this functionality.
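
For reference, a sketch of the user-facing operation this issue would enable 
(table and column names are hypothetical):

{code}
-- Currently fails when views exist on base_table; this change would allow it
ALTER TABLE base_table DROP COLUMN legacy_col;
{code}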

> Support drop of column from table with views
> 
>
> Key: PHOENIX-2156
> URL: https://issues.apache.org/jira/browse/PHOENIX-2156
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.8.0
>
>
> Much like PHOENIX-1504 allows a column to be added to a base view, we should 
> support dropping a column from a table that has views as well. This seems 
> like a simpler problem: you just need to query for all views with a 
> BASE_COLUMN_COUNT < dropped_column_ordinal_pos, decrement the ordinal 
> positions of columns after that, and drop indexes that reference the column 
> being dropped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2565) Store data for immutable tables in single KeyValue

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2565:
--
Fix Version/s: 4.8.0

> Store data for immutable tables in single KeyValue
> --
>
> Key: PHOENIX-2565
> URL: https://issues.apache.org/jira/browse/PHOENIX-2565
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.8.0
>
>
> Since an immutable table (i.e. declared with IMMUTABLE_ROWS=true) will never 
> update a column value, it'd be more efficient to store all column values for 
> a row in a single KeyValue. We could use the existing format we have for 
> variable length arrays.
> For backward compatibility, we'd need to support the current mechanism. Also, 
> you'd no longer be allowed to transition an existing table to/from being 
> immutable. I think the best approach would be to introduce a new IMMUTABLE 
> keyword and use it like this:
> {code}
> CREATE IMMUTABLE TABLE ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1598) encode column names to save space

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1598:
--
Fix Version/s: 4.8.0

> encode column names to save space 
> --
>
> Key: PHOENIX-1598
> URL: https://issues.apache.org/jira/browse/PHOENIX-1598
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: noam bulvik
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
>
> When creating a table using Phoenix DDL, replace the column names that the 
> user gives with shorter names to save space. The user will still use the full 
> names in their SELECT statements and will get them in the result set, but 
> under the hood the infrastructure will translate the names to their shorter 
> versions.
> Example:
> when creating a table with my_column_1, my_column_2, ... the table will be 
> created with a as the first column, b as the second one, etc.
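
An illustrative sketch of the proposed behavior, with hypothetical names; the 
shortened storage names would be an internal detail, invisible to SQL:

{code}
-- User-facing DDL and query; under the proposal, the columns would be stored
-- under short encoded names (a, b, ...) that never appear in SQL
CREATE TABLE t (id VARCHAR PRIMARY KEY, my_column_1 VARCHAR, my_column_2 VARCHAR);
SELECT my_column_1, my_column_2 FROM t WHERE id = 'x';
{code}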



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1311) HBase namespaces surfaced in phoenix

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1311:
--
Fix Version/s: 4.8.0

> HBase namespaces surfaced in phoenix
> 
>
> Key: PHOENIX-1311
> URL: https://issues.apache.org/jira/browse/PHOENIX-1311
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: nicolas maillard
>Assignee: Ankit Singhal
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1311_wip.patch
>
>
> HBase (HBASE-8015) has the concept of namespaces, in the form 
> myNamespace:MyTable. It would be great if Phoenix leveraged this feature to 
> provide a database-like layer on top of tables.
> Maybe, to stay close to HBase, it could be a CREATE DB:Table...
> or DB.Table, which is more standard notation?
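
A hedged sketch of what surfacing namespaces might look like in SQL; the syntax 
is hypothetical, since settling on it is exactly what this issue is about:

{code}
-- A Phoenix schema mapping onto an HBase namespace (illustrative only)
CREATE SCHEMA my_ns;
CREATE TABLE my_ns.my_table (id VARCHAR PRIMARY KEY, val VARCHAR);
-- would correspond to the HBase table my_ns:my_table
{code}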



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2571) Support specifying a default schema

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2571:
--
Fix Version/s: 4.8.0

> Support specifying a default schema
> ---
>
> Key: PHOENIX-2571
> URL: https://issues.apache.org/jira/browse/PHOENIX-2571
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>
> Based on an idea from a comment in PHOENIX-1966. If we can provide a means of 
> setting the default schema (either in an hbase-site.xml config file or through 
> a new SQL statement), then we can provide a simple means of having different 
> HBase namespaces. We want to be careful that this functionality won't 
> conflict with PHOENIX-1311, which is a bigger effort than this one.
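
One possible shape for the "new SQL statement" option mentioned above; this 
syntax is hypothetical, not an existing Phoenix statement at the time of this 
issue:

{code}
-- Set the default schema for the current connection (illustrative)
USE my_schema;
{code}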



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2571) Support specifying a default schema

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2571:
--
Assignee: Ankit Singhal

> Support specifying a default schema
> ---
>
> Key: PHOENIX-2571
> URL: https://issues.apache.org/jira/browse/PHOENIX-2571
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
>
> Based on an idea from a comment in PHOENIX-1966. If we can provide a means of 
> setting the default schema (either in an hbase-site.xml config file or through 
> a new SQL statement), then we can provide a simple means of having different 
> HBase namespaces. We want to be careful that this functionality won't 
> conflict with PHOENIX-1311, which is a bigger effort than this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2620) NPE in org.apache.phoenix.expression.util.regex.JavaPattern.matches

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113915#comment-15113915
 ] 

James Taylor commented on PHOENIX-2620:
---

I believe this is fixed already. Would you mind confirming, [~nkeywal]?

> NPE in org.apache.phoenix.expression.util.regex.JavaPattern.matches
> ---
>
> Key: PHOENIX-2620
> URL: https://issues.apache.org/jira/browse/PHOENIX-2620
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Nicolas Liochon
> Fix For: 4.8.0
>
> Attachments: phoenix.patch
>
>
> The error shows up w/ likes & dynamic columns for example:
> {code}
> ht.put(new Put("1".getBytes()).addImmutable(CF, "C2".getBytes(), 
> "".getBytes()));
> stmt.executeQuery("SELECT count(1) from PB(F.C2 VARCHAR) where C2 like 
> 'a%'")) {
> {code}
> gives a stack w/
> {code}
> Caused by: java.lang.NullPointerException
> at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
> at java.util.regex.Matcher.reset(Matcher.java:309)
> at java.util.regex.Matcher.&lt;init&gt;(Matcher.java:229)
> at java.util.regex.Pattern.matcher(Pattern.java:1093)
> at 
> org.apache.phoenix.expression.util.regex.JavaPattern.matches(JavaPattern.java:51)
> at 
> org.apache.phoenix.expression.LikeExpression.evaluate(LikeExpression.java:297)
> at 
> org.apache.phoenix.filter.BooleanExpressionFilter.evaluate(BooleanExpressionFilter.java:93)
> at 
> org.apache.phoenix.filter.SingleKeyValueComparisonFilter.filterKeyValue(SingleKeyValueComparisonFilter.java:93)
> at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2520) Create DDL property for metadata update frequency

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2520:
--
Description: 
On the client-side, Phoenix pings the server when a query is compiled to 
confirm that the client has the most up-to-date metadata for the table being 
queried. For some tables that are known to not change, this RPC is wasteful. 

We can allow a property such as {{UPDATE_CACHE_FREQUENCY}} to be specified as a 
time to wait before checking with the server to see if the metadata has 
changed (see the DDL sketch after the proposed implementation below). This 
could be specified in the CREATE TABLE call and stored in the SYSTEM.CATALOG 
table header row. By default the value could be 0, which would keep the current 
behavior. Tables that never change could use Long.MAX_VALUE. Potentially we 
could allow 'ALWAYS' and 'NEVER' values for convenience.

Proposed implementation:
- add a {{public long getAge()}} method to {{PTableRef}}.
- when setting lastAccessTime, also store System.currentTimeMillis() in a new 
{{setAccessTime}} private member variable
- getAge() would return {{System.currentTimeMillis() - setAccessTime}}
- code in MetaDataClient would prevent the call to the server if age < 
{{UPDATE_CACHE_FREQUENCY}}
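
A minimal sketch of the DDL this property would enable, assuming the syntax 
lands as described above (the table name and millisecond value are 
illustrative):

{code}
-- Check server metadata at most once per hour for this table
CREATE TABLE t (k VARCHAR PRIMARY KEY, v VARCHAR) UPDATE_CACHE_FREQUENCY=3600000;

-- For a table whose schema never changes
ALTER TABLE t SET UPDATE_CACHE_FREQUENCY=NEVER;
{code}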

  was:
On the client-side, Phoenix pings the server when a query is compiled to 
confirm that the client has the most up-to-date metadata for the table being 
queried. For some tables that are known to not change, this RPC is wasteful. 

We can allow a property such as {{UPDATE_METADATA_CACHE_FREQUENCY_MS}} to be 
specified as a time to wait before checking with the server to see if the 
metadata has changed. This could be specified in the CREATE TABLE call and 
stored in the SYSTEM.CATALOG table header row. By default the value could be 0, 
which would keep the current behavior. Tables that never change could use 
Long.MAX_VALUE. Potentially we could allow 'ALWAYS' and 'NEVER' values for 
convenience.

Proposed implementation:
- add a {{public long getAge()}} method to {{PTableRef}}.
- when setting lastAccessTime, also store System.currentTimeMillis() in a new 
{{setAccessTime}} private member variable
- getAge() would return {{System.currentTimeMillis() - setAccessTime}}
- code in MetaDataClient would prevent the call to the server if age < 
{{UPDATE_METADATA_CACHE_FREQUENCY_MS}}


> Create DDL property for metadata update frequency
> -
>
> Key: PHOENIX-2520
> URL: https://issues.apache.org/jira/browse/PHOENIX-2520
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2520.patch, PHOENIX-2520_v2.patch, 
> PHOENIX-2520_wip.patch, preferMetaCache.patch
>
>
> On the client-side, Phoenix pings the server when a query is compiled to 
> confirm that the client has the most up-to-date metadata for the table being 
> queried. For some tables that are known to not change, this RPC is wasteful. 
> We can allow a property such as {{UPDATE_CACHE_FREQUENCY}} to be specified as 
> a time to wait before checking with the server to see if the metadata has 
> changed. This could be specified in the CREATE TABLE call and stored in the 
> SYSTEM.CATALOG table header row. By default the value could be 0, which would 
> keep the current behavior. Tables that never change could use Long.MAX_VALUE. 
> Potentially we could allow 'ALWAYS' and 'NEVER' values for convenience.
> Proposed implementation:
> - add a {{public long getAge()}} method to {{PTableRef}}.
> - when setting lastAccessTime, also store System.currentTimeMillis() in a new 
> {{setAccessTime}} private member variable
> - getAge() would return {{System.currentTimeMillis() - setAccessTime}}
> - code in MetaDataClient would prevent the call to the server if age < 
> {{UPDATE_CACHE_FREQUENCY}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2607) PhoenixMapReduceUtil Upserts with earlier ts (relative to latest data ts) slower by 25x after stats collection

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113919#comment-15113919
 ] 

James Taylor commented on PHOENIX-2607:
---

@Arun Thangamani - It might be caused by our metadata cache not being used (as 
we only cache the latest version of a table, and when you go back in time we 
don't know if the cache contains the schema associated with the earlier 
timestamp). If the schema for your table is not changing, try taking advantage 
of PHOENIX-2520 (available in the RC we just put out for 4.7.0) by altering 
your table like this:
{code}
ALTER TABLE your_table_name SET UPDATE_CACHE_FREQUENCY=NEVER;
{code}

> PhoenixMapReduceUtil Upserts with earlier ts (relative to latest data ts) 
> slower by 25x after stats collection
> --
>
> Key: PHOENIX-2607
> URL: https://issues.apache.org/jira/browse/PHOENIX-2607
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Arun Thangamani
> Attachments: hbase-master-fast-upload.log, 
> hbase-master-slow-upload.log, hbase-rs01-fast-upload.log, 
> hbase-rs01-slow-upload.log, hbase-rs02-fast-upload.log, 
> hbase-rs02-slow-upload.log, hbase-rs03-fast-upload.log, 
> hbase-rs03-slow-upload.log, hbase-rs04-fast-upload.log, 
> hbase-rs04-slow-upload.log, phoenix_slow_map_process_jstack.txt, 
> region_server_2_jstack.txt
>
>
> Description of the problem:
> 1) We face a 25x slowdown when we go back in time to load data into a table 
> (when specific timestamps are set on connections during upserts).
> 2) We set phoenix.stats.useCurrentTime=false (and 
> phoenix.stats.guidepost.per.region=1), which at least makes the 
> forward-timestamp upserts perform correctly.
> 3) From what I can tell from the Phoenix source code, the logs attached, and 
> jstacks from the region servers -- we continuously try to look up the uncached 
> definition of the table when the client timestamp is earlier than the last 
> modified timestamp of the table in stats.
> 4) To reproduce, create a table with timestamp=100, and load 10M rows with 
> PhoenixMapReduceUtil and timestamps=144757440,144809280, then wait for 20 
> mins (15+ min, phoenix.stats.updateFrequency is 15 mins).
> After 20 mins, load 10M rows with an earlier timestamp compared to the latest 
> data (timestamp=144766080) and observe the 25x slowness; after this, once 
> again load a forward timestamp (144817920) and observe that it is fast again.
> 5) I was not able to reproduce this issue with simple multi-threaded upserts 
> from a JDBC connection; with simple multi-threaded upserts the stats table 
> never gets populated, unlike with PhoenixMapReduceUtil.
> We are trying to use Phoenix as a cache store to do analytics on the last 
> 60 days of data, a total of about 1.5 billion rows.
> The table has a composite key and the data arrives at different times from 
> different sources, so it is easier to maintain the timestamps of the data and 
> expire the data automatically. This performance difference is the difference 
> between inserting the data in 10 mins versus 2 hours, with the 2-hour inserts 
> blocking up the cluster that we have.
> We are even talking about our use cases at the upcoming Strata conference in 
> March... (Thanks to an excellent community.)
> Steps to reproduce:
> Source code is available at 
> https://github.com/athangamani/phoenix_mapreduce_timestamp_upsert and the 
> jar it produces is attached, which is readily runnable.
> 1) We use the following params to keep the stats collection happy, to isolate 
> the specific issue:
>  phoenix.stats.useCurrentTime false
>  phoenix.stats.guidepost.per.region 1
> 2) Create a table in Phoenix. Run the following main class from the project 
> (StatPhoenixTableCreationTest); it will create a table with timestamp=100:
>   CREATE TABLE stat_table ( 
>   pk1 VARCHAR NOT NULL, 
>   pk2 VARCHAR NOT NULL, 
>   pk3 UNSIGNED_LONG NOT NULL, 
>   stat1 UNSIGNED_LONG, 
>   stat2 UNSIGNED_LONG, 
>   stat3 UNSIGNED_LONG, 
>   CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3) 
>   ) SALT_BUCKETS=32, COMPRESSION='LZ4'
> 3) Open the code base to look at the sample for PhoenixMapReduceUtil with 
> DBWritable.
> 4) Within the codebase, we get the Phoenix connection for the mappers using 
> the following setting in order to have a fixed client timestamp:
>  conf.set(PhoenixRuntime.CURRENT_SCN_ATTRIB, ""+(timestamp));
> 5) Fix the hbase-site.xml in the codebase for the ZooKeeper quorum and HBase 
> parent znode info.
> 6) Simply run the StatDataCreatorTest to create data for the run and load it 
> into HDFS (10M records).

[jira] [Updated] (PHOENIX-2602) Parser does not handle escaped LPAREN

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2602:
--
Fix Version/s: 4.8.0

> Parser does not handle escaped LPAREN
> -
>
> Key: PHOENIX-2602
> URL: https://issues.apache.org/jira/browse/PHOENIX-2602
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Nick Dimiduk
> Fix For: 4.8.0
>
>
> Seems parsing string literals isn't working quite right.
> {noformat}
> 0: jdbc:phoenix:localhost> explain select foo from bar where foo not like 
> '%\(%' ;
> line 1:50 no viable alternative at character '('
> Error: ERROR 602 (42P00): Syntax error. Missing "LPAREN" at line 1, column 
> 35. (state=42P00,code=602)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): 
> Syntax error. Missing "LPAREN" at line 1, column 35.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1285)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1366)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1429)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: MissingTokenException(inserted [@-1,0:0='&lt;missing LPAREN&gt;',&lt;90&gt;,1:34] at foo)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:350)
> at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.not_expression(PhoenixSQLParser.java:6463)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.and_expression(PhoenixSQLParser.java:6283)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.or_expression(PhoenixSQLParser.java:6220)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.expression(PhoenixSQLParser.java:6185)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.single_select(PhoenixSQLParser.java:4388)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.unioned_selects(PhoenixSQLParser.java:4470)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.select_node(PhoenixSQLParser.java:4535)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:766)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.explain_node(PhoenixSQLParser.java:987)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:946)
> at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
> at 
> org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
> ... 9 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2582) Prevent need of catch up query when creating non transactional index

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2582:
--
Summary: Prevent need of catch up query when creating non transactional 
index  (was: Prevent need of catch up query when creating an index)

> Prevent need of catch up query when creating non transactional index
> 
>
> Key: PHOENIX-2582
> URL: https://issues.apache.org/jira/browse/PHOENIX-2582
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>
> If we create an index while we are upserting rows to the table, it's possible 
> we can miss writing corresponding rows to the index table. 
> If a region server is writing a batch of rows and we create an index just 
> before the batch is written, we will miss writing that batch to the index 
> table. This is because we run the initial UPSERT SELECT to populate the index 
> with an SCN that we get from the server, which will be before the timestamp 
> the batch of rows is written. 
> We need to figure out if there is a way to determine that all pending batches 
> have been written before running the UPSERT SELECT to do the initial index 
> population.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2581) Phoenix doesn't support NameNode HA connection to HBase

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113924#comment-15113924
 ] 

James Taylor commented on PHOENIX-2581:
---

To my knowledge, Phoenix should work fine with NameNode HA. Can you confirm 
[~enis], [~rajeshbabu], or [~ndimiduk]? My guess is that it's a 
configuration/environmental issue on your end.

> Phoenix doesn't support NameNode HA connection to HBase 
> ---
>
> Key: PHOENIX-2581
> URL: https://issues.apache.org/jira/browse/PHOENIX-2581
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0, 4.6.0
> Environment: Apache Hadoop2.7.1   and NameNode HA
> Hbase0.98.12
> Centos6.5
>Reporter: kim
>  Labels: patch
>
> My Hadoop cluster has NameNode HA (high availability); it is a good thing! 
> But when I use Phoenix 4.6.0 or Phoenix 4.5.0 to connect to HBase, it always 
> throws an exception. Please note that ns1 is my nameservice, but Phoenix 
> seems to treat it as a URL. Why? When I use Phoenix 4.2.2 to connect to 
> HBase, it works normally. This may be a serious bug; if not, please tell me 
> the reason. Thanks, everyone!
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> ns1
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:231)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
> at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:510)
> at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:453)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
> at 
> org.apache.hadoop.hbase.util.DynamicClassLoader.&lt;init&gt;(DynamicClassLoader.java:104)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.&lt;clinit&gt;(ProtobufUtil.java:204)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2566:
--
Assignee: Thomas D'Silva

> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non PK columns.
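
A minimal sketch of what this improvement would permit; names are illustrative, 
and today the NOT NULL constraint on the non-PK column below is rejected:

{code}
-- Allowed because IMMUTABLE_ROWS=true tables never partially update rows
CREATE TABLE events (id VARCHAR PRIMARY KEY, payload VARCHAR NOT NULL)
    IMMUTABLE_ROWS=true;
{code}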



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2548) Local Indexing Does Not Seem To Be Working As Expected

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2548:
--
Assignee: Rajeshbabu Chintaguntla

> Local Indexing Does Not Seem To Be Working As Expected
> -
>
> Key: PHOENIX-2548
> URL: https://issues.apache.org/jira/browse/PHOENIX-2548
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
>Reporter: Gokhan Cagrici
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>
> Hi,
> We are accessing this table using a variety of different WHERE clauses and 
> for each of them, we are trying to create an appropriate index to avoid a 
> full table scan. Since there are going to be almost 20 indexes, we opted for 
> local indexing, since there will be a lot of writes.
> Here is the table definition:
> CREATE TABLE DEVICEDIM_TYPE1 (
>   TENANT_ID VARCHAR NOT NULL,
>   DEVICE_TYPE1_KEY BIGINT NOT NULL,
>   CLASSNAME VARCHAR(64),
>   DAY_IN_MONTH SMALLINT,
>   MONTH_NUMBER SMALLINT,
>   QUARTER_NUMBER SMALLINT,
>   YEAR SMALLINT,
>   WEEK_NUMBER SMALLINT,
>   YEAR_FOR_WEEK SMALLINT,
>   HOUR SMALLINT,
>   MINUTE SMALLINT,
>   IPADDRESS VARCHAR(50),
>   DEVICENAME VARCHAR(255),
>   MACADDRESS VARCHAR(30),
>   CONSTRAINT PK PRIMARY KEY (TENANT_ID, DEVICE_TYPE1_KEY)
> ) SALT_BUCKETS=4, COMPRESSION='GZ', VERSIONS=1, MULTI_TENANT=TRUE;
> And here is the index:
> create local index gokhan_ix2 on devicedim_type1 (devicename, macaddress)
> Now if I execute this:
> explain select devicename from devicedim_type1 where tenant_id = 'ccd' and 
> devicename = 'abc' and macaddress = 'afg'
> Here is the output:
> CLIENT 4-CHUNK PARALLEL 4-WAY RANGE SCAN OVER DEVICEDIM_TYPE1 [0,'ccd']
> SERVER FILTER BY (DEVICENAME = 'abc' AND MACADDRESS = 'afg')
> SERVER 100 ROW LIMIT
> CLIENT 100 ROW LIMIT
> I was expecting the index to be used. Am I wrong?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2547) Spark Data Source API: Filter operation doesn't work for column names containing a white space

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2547:
--
Assignee: Josh Mahonin

> Spark Data Source API: Filter operation doesn't work for column names 
> containing a white space
> --
>
> Key: PHOENIX-2547
> URL: https://issues.apache.org/jira/browse/PHOENIX-2547
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Suhas Nalapure
>Assignee: Josh Mahonin
>Priority: Critical
>
> DataFrame.filter() results in 
> "org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): 
> Syntax error. Mismatched input. Expecting "LPAREN", got "first" at line 1, 
> column 52."  when a column name has a white space in it.
> Steps to Reproduce
> --
> 1. Create a test table & insert a row as below
>create table "space" ("key" varchar primary key, "first name" varchar);
>upsert into "space" values ('key1', 'xyz');
> 2. Java code that leads to the error:
>  //omitting the DataFrame creation part
>df = df.filter(df.col("first name").equalTo("xyz"));
>   System.out.println(df.collectAsList());
> 3. I could see the following statements in the Phoenix logs which may have 
> led to the exception (stack trace given below)
> 2015-12-28 17:52:24,327 INFO  [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> UseSelectColumns=true, selectColumnList.size()=2, selectColumnList=key,first 
> name 
> 2015-12-28 17:52:24,328 INFO  [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> Select Statement: SELECT "key","0"."first name" FROM "space" WHERE ( first 
> name = 'xyz')
> 2015-12-28 17:52:24,333 ERROR [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> Failed to get the query plan with error [ERROR 604 (42P00): Syntax error. 
> Mismatched input. Expecting "LPAREN", got "first" at line 1, column 52.]
> Exception Stack Trace:
> --
> java.lang.RuntimeException: 
> org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): 
> Syntax error. Mismatched input. Expecting "LPAREN", got "first" at line 1, 
> column 52.
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:125)
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:80)
>   at 
> org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.phoenix.spark.PhoenixRDD.getPartitions(PhoenixRDD.scala:48)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$

[jira] [Commented] (PHOENIX-2522) Transaction tests fail if local hbase is running

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113928#comment-15113928
 ] 

James Taylor commented on PHOENIX-2522:
---

[~tdsilva] - not sure if this is still the case. Would you mind confirming one 
way or the other, please?

> Transaction tests fail if local hbase is running
> 
>
> Key: PHOENIX-2522
> URL: https://issues.apache.org/jira/browse/PHOENIX-2522
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>
> I noticed that if HBase is running locally, the transaction tests fail. We 
> should make sure we're using different or random ports to prevent these 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2521) Support duplicate rows in CSV Bulk Loader

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2521:
--
Summary: Support duplicate rows in CSV Bulk Loader  (was: Index rows are 
not updated when the index key updated using bulk loader )

> Support duplicate rows in CSV Bulk Loader
> -
>
> Key: PHOENIX-2521
> URL: https://issues.apache.org/jira/browse/PHOENIX-2521
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
>Reporter: Afshin Moazami
>
> I found out the MapReduce CSV bulk load tool doesn't behave the same as 
> UPSERTs. Is it by design or a bug?
> Here are the queries for creating the table and index:
> {code} CREATE TABLE mySchema.mainTable (
> id varchar NOT NULL,
> name varchar,
> address varchar
> CONSTRAINT pk PRIMARY KEY (id)); {code}
> {code} CREATE INDEX myIndex 
> ON mySchema.mainTable  (name, id) 
> INCLUDE (address); {code}
> if I execute two upserts where the second one updates the name (which is the 
> key for the index), everything works fine (the record is updated in both the 
> table and the index table)
> {code} UPSERT INTO mySchema.mainTable (id, name, address) values ('1', 
> 'john', 'Montreal');{code}
> {code}UPSERT INTO mySchema.mainTable (id, name, address) values ('1', 'jack', 
> 'Montreal');{code}
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'jack'; {code}  ==> one record
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'john';  {code}  ==> zero records
> But if I load the data using org.apache.phoenix.mapreduce.CsvBulkLoadTool 
> into the main table, it behaves differently. The main table is updated, but 
> the new record is appended to the index table:
> HADOOP_CLASSPATH=/usr/lib/hbase/lib/hbase-protocol-1.1.2.jar:/etc/hbase/conf 
> hadoop jar  
> /usr/lib/hbase/phoenix-4.5.2-HBase-1.1-bin/phoenix-4.5.2-HBase-1.1-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool -d',' -s mySchema -t mainTable 
> -i /tmp/input.txt 
> input.txt:
> 2,tomas,montreal
> 2,george,montreal
> (I have tried it both with/without -it and got the same result)
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'tomas' {code} ==> one record;
> {code} SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'george' {code} ==> one record;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2521) Support duplicate rows in CSV Bulk Loader

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2521:
--
Issue Type: Improvement  (was: Bug)

> Support duplicate rows in CSV Bulk Loader
> -
>
> Key: PHOENIX-2521
> URL: https://issues.apache.org/jira/browse/PHOENIX-2521
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.5.2
>Reporter: Afshin Moazami
>
> I found out the MapReduce CSV bulk load tool doesn't behave the same as 
> UPSERTs. Is it by design or a bug?
> Here are the queries for creating the table and index:
> {code} CREATE TABLE mySchema.mainTable (
> id varchar NOT NULL,
> name varchar,
> address varchar
> CONSTRAINT pk PRIMARY KEY (id)); {code}
> {code} CREATE INDEX myIndex 
> ON mySchema.mainTable  (name, id) 
> INCLUDE (address); {code}
> if I execute two upserts where the second one updates the name (which is the 
> key for the index), everything works fine (the record is updated in both the 
> table and the index table)
> {code} UPSERT INTO mySchema.mainTable (id, name, address) values ('1', 
> 'john', 'Montreal');{code}
> {code}UPSERT INTO mySchema.mainTable (id, name, address) values ('1', 'jack', 
> 'Montreal');{code}
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'jack'; {code}  ==> one record
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'john';  {code}  ==> zero records
> But if I load the data using org.apache.phoenix.mapreduce.CsvBulkLoadTool 
> into the main table, it behaves differently. The main table is updated, but 
> the new record is appended to the index table:
> HADOOP_CLASSPATH=/usr/lib/hbase/lib/hbase-protocol-1.1.2.jar:/etc/hbase/conf 
> hadoop jar  
> /usr/lib/hbase/phoenix-4.5.2-HBase-1.1-bin/phoenix-4.5.2-HBase-1.1-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool -d',' -s mySchema -t mainTable 
> -i /tmp/input.txt 
> input.txt:
> 2,tomas,montreal
> 2,george,montreal
> (I have tried it both with/without -it and got the same result)
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'tomas' {code} ==> one record;
> {code} SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'george' {code} ==> one record;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2521) Support duplicate rows in CSV Bulk Loader

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113929#comment-15113929
 ] 

James Taylor commented on PHOENIX-2521:
---

That's correct, [~moazami.afs...@gmail.com] - the CSV Bulk Loader does not 
handle the case when there are duplicate rows.

> Support duplicate rows in CSV Bulk Loader
> -
>
> Key: PHOENIX-2521
> URL: https://issues.apache.org/jira/browse/PHOENIX-2521
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.5.2
>Reporter: Afshin Moazami
>
> I found out the MapReduce CSV bulk load tool doesn't behave the same as 
> UPSERTs. Is it by design or a bug?
> Here are the queries for creating the table and index:
> {code} CREATE TABLE mySchema.mainTable (
> id varchar NOT NULL,
> name varchar,
> address varchar
> CONSTRAINT pk PRIMARY KEY (id)); {code}
> {code} CREATE INDEX myIndex 
> ON mySchema.mainTable  (name, id) 
> INCLUDE (address); {code}
> if I execute two upserts where the second one updates the name (which is the 
> key for the index), everything works fine (the record is updated in both the 
> table and the index table)
> {code} UPSERT INTO mySchema.mainTable (id, name, address) values ('1', 
> 'john', 'Montreal');{code}
> {code}UPSERT INTO mySchema.mainTable (id, name, address) values ('1', 'jack', 
> 'Montreal');{code}
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'jack'; {code}  ==> one record
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'john';  {code}  ==> zero records
> But if I load the data using org.apache.phoenix.mapreduce.CsvBulkLoadTool 
> into the main table, it behaves differently. The main table is updated, but 
> the new record is appended to the index table:
> HADOOP_CLASSPATH=/usr/lib/hbase/lib/hbase-protocol-1.1.2.jar:/etc/hbase/conf 
> hadoop jar  
> /usr/lib/hbase/phoenix-4.5.2-HBase-1.1-bin/phoenix-4.5.2-HBase-1.1-client.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool -d',' -s mySchema -t mainTable 
> -i /tmp/input.txt 
> input.txt:
> 2,tomas,montreal
> 2,george,montreal
> (I have tried it both with/without -it and got the same result)
> {code}SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'tomas' {code} ==> one record;
> {code} SELECT /*+ INDEX(mySchema.mainTable myIndex) */ * from 
> mySchema.mainTable where name = 'george' {code} ==> one record;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2514) Even with ORDER BY clause the LIMIT does not work correctly with salted tables containing many records.

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113930#comment-15113930
 ] 

James Taylor commented on PHOENIX-2514:
---

[~sumit.nigam] - any luck reproducing this? 

> Even with ORDER BY clause the LIMIT does not work correctly with salted 
> tables containing many records.
> ---
>
> Key: PHOENIX-2514
> URL: https://issues.apache.org/jira/browse/PHOENIX-2514
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.1
> Environment: HBase-0.98.14
>Reporter: Sumit Nigam
>Priority: Critical
>  Labels: LIMIT, hbase, phoenix, salted
> Attachments: data.zip
>
>
> A query such as SELECT CURRENT_TIMESTAMP FROM TBL ORDER BY CURRENT_TIMESTAMP 
> DESC LIMIT 1 does not really return the MAX(CURRENT_TIMESTAMP). The table is 
> salted and has 200272 records.
> select current_timestamp from TBL order by current_timestamp desc limit 1;
> +--------------------+
> | CURRENT_TIMESTAMP  |
> +--------------------+
> | 1448815328556      |
> +--------------------+
> select max(current_timestamp) from TBL;
> +--------------------------+
> | MAX("CURRENT_TIMESTAMP") |
> +--------------------------+
> | 1449732792090            |
> +--------------------------+
> The results are different. MAX is, of course, returning the right record.
> The above query is one example. There are other queries which also seem to be 
> returning incorrect records with ORDER BY and LIMIT.
> Is it also correct that when there is a WHERE clause limiting the number of 
> projected records, LIMIT seems to work fine? I seem to be noticing that as 
> well.
> The table DDL is:
> CREATE TABLE IF NOT EXISTS TBL 
> (CURRENT_TIMESTAMP BIGINT NOT NULL, ID VARCHAR(96), CURR_EXDOC VARCHAR,  
> CURR_CHECKSUM VARCHAR(32), SUMMARY VARCHAR, 
> CONSTRAINT PK PRIMARY KEY(CURRENT_TIMESTAMP, ID)) 
> BLOCKCACHE=FALSE, COMPRESSION=SNAPPY, SALT_BUCKETS=8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2513) Pherf - IllegalArgumentException during data load

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113932#comment-15113932
 ] 

James Taylor commented on PHOENIX-2513:
---

Probably needs a repro if it happens enough to worry about.

> Pherf - IllegalArgumentException during data load
> -
>
> Key: PHOENIX-2513
> URL: https://issues.apache.org/jira/browse/PHOENIX-2513
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Priority: Minor
>
> {code}
> Caused by: java.lang.IllegalArgumentException: Requested random string length 
> -1 is less than 0.
>   at 
> org.apache.commons.lang.RandomStringUtils.random(RandomStringUtils.java:231)
>   at 
> org.apache.commons.lang.RandomStringUtils.random(RandomStringUtils.java:166)
>   at 
> org.apache.commons.lang.RandomStringUtils.random(RandomStringUtils.java:146)
>   at 
> org.apache.commons.lang.RandomStringUtils.randomAlphanumeric(RandomStringUtils.java:114)
>   at 
> org.apache.phoenix.pherf.rules.RulesApplier.getSequentialDataValue(RulesApplier.java:373)
>   at 
> org.apache.phoenix.pherf.rules.RulesApplier.getDataValue(RulesApplier.java:155)
>   at 
> org.apache.phoenix.pherf.rules.RulesApplier.getDataForRule(RulesApplier.java:99)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload.buildStatement(WriteWorkload.java:317)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$700(WriteWorkload.java:53)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:268)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:249)
> {code}
> [~cody.mar...@gmail.com] Any idea for this exception that happened after 
> 100M+ rows during data load?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2509) Check-in for PHOENIX-2413 consisted of

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2509:
--
Assignee: (was: James Taylor)

> Check-in for PHOENIX-2413 consisted of 
> 
>
> Key: PHOENIX-2509
> URL: https://issues.apache.org/jira/browse/PHOENIX-2509
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Maryann Xue
>Priority: Trivial
>
> https://git1-us-west.apache.org/repos/asf?p=phoenix.git;a=commit;h=461aaa239479abb8bb35df79324d2d2c3627e0d5
> Already found in UnionPlan and ListJarsQueryPlan, not sure if there are any 
> other places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2509) Correct formatting throughout source code and enable check style

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2509:
--
Summary: Correct formatting throughout source code and enable check style  
(was: Check-in for PHOENIX-2413 consisted of )

> Correct formatting throughout source code and enable check style
> 
>
> Key: PHOENIX-2509
> URL: https://issues.apache.org/jira/browse/PHOENIX-2509
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Maryann Xue
>Priority: Trivial
>
> https://git1-us-west.apache.org/repos/asf?p=phoenix.git;a=commit;h=461aaa239479abb8bb35df79324d2d2c3627e0d5
> Already found in UnionPlan and ListJarsQueryPlan, not sure if there are any 
> other places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2501) BatchUpdateExecution typo in name, should extend java.sql.BatchUpdateException

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2501:
--
Fix Version/s: 4.8.0

> BatchUpdateExecution typo in name, should extend java.sql.BatchUpdateException
> --
>
> Key: PHOENIX-2501
> URL: https://issues.apache.org/jira/browse/PHOENIX-2501
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
> Fix For: 4.8.0
>
>
> Noticed this when my autocomplete went crazy. I think "BatchUpdateExecution" 
> was intended to be "BatchUpdateException". Further, Java provides a 
> {{java.sql.BatchUpdateException}}, so it seems like we should just use that. 
> Thoughts?
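
For reference, a sketch of what using the standard type looks like for a caller (the connection URL, table, and UPSERTs are illustrative only):

{code:java}
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BatchExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.addBatch("UPSERT INTO T VALUES (1)");
            stmt.addBatch("UPSERT INTO T VALUES (2)");
            stmt.executeBatch();
        } catch (BatchUpdateException e) {
            // The standard type carries per-statement results, which a
            // Phoenix-specific subclass would inherit for free.
            int[] updateCounts = e.getUpdateCounts();
            System.err.println("Statements attempted: " + updateCounts.length);
        }
    }
}
{code}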



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2498) Secondary index table is not updated in bulk load

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2498.
---
Resolution: Duplicate

Duplicate of PHOENIX-2521

> Secondary index table is not updated in bulk load
> -
>
> Key: PHOENIX-2498
> URL: https://issues.apache.org/jira/browse/PHOENIX-2498
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
> Environment: CentOs
>Reporter: Afshin Moazami
>  Labels: bulkloader, secondary_index
>
> When using Phoenix map reduce bulk loader to load data from a csv file to a 
> table (myTable) with a secondary index (myIndex) in schema (mySchema), if I 
> use 
> {code} -table mySchema.myTable {code}
> data will load only to the myTable, not myIndex.
> But, both will be loaded if I use:
> {code} -schema mySchema -table myTable {code}
> I am not sure if it is a bug or a feature, but it is not documented anywhere 
> (or at least I couldn't find it).
> As a result of the first usage (where the index is not loaded), we can have 
> weird scenarios like
> {code:sql} select /*+ INDEX(mySchema.myTable myIndex) */ * from myTable where 
> myColumn = 'myValue'; {code}
> which returns a row where myColumn is not equal to myValue, because the WHERE 
> clause is validated against the index while the data is returned from the 
> main table (I guess).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2493) ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG column type

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2493:
--
Assignee: Samarth Jain

> ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG column type
> ---
>
> Key: PHOENIX-2493
> URL: https://issues.apache.org/jira/browse/PHOENIX-2493
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: java version "1.7.0_79"
> OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
>Reporter: Pierre Lacave
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
>
> Hi,
> Using the ROW_TIMESTAMP feature on an UNSIGNED_LONG column doesn't work on 
> 4.6.
> It does work as expected if the column type is BIGINT however.
> Thanks
> {noformat}
> 0: jdbc:phoenix:hadoop1-dc:2181:/hbase> CREATE TABLE TEST (t UNSIGNED_LONG 
> NOT NULL CONSTRAINT pk PRIMARY KEY (t ROW_TIMESTAMP) );
> No rows affected (1.654 seconds)
> 0: jdbc:phoenix:hadoop1-dc:2181:/hbase> UPSERT INTO TEST (t) VALUES 
> (14491610811);
> Error: ERROR 201 (22000): Illegal data. Value of a column designated as 
> ROW_TIMESTAMP cannot be less than zero (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Value of a column 
> designated as ROW_TIMESTAMP cannot be less than zero
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
> at 
> org.apache.phoenix.schema.IllegalDataException.(IllegalDataException.java:38)
> at 
> org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:135)
> at 
> org.apache.phoenix.compile.UpsertCompiler.access$400(UpsertCompiler.java:114)
> at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:882)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:322)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:312)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
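
The BIGINT behavior the reporter mentions, as a minimal sketch (table name illustrative):

{code:sql}
-- Works as reported when the column is BIGINT rather than UNSIGNED_LONG:
CREATE TABLE TEST_BIGINT (t BIGINT NOT NULL CONSTRAINT pk PRIMARY KEY (t ROW_TIMESTAMP));
UPSERT INTO TEST_BIGINT (t) VALUES (1449161081100);
{code}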



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2493) ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG column type

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2493:
--
Fix Version/s: 4.8.0

> ROW_TIMESTAMP mapping not functional with UNSIGNED_LONG column type
> ---
>
> Key: PHOENIX-2493
> URL: https://issues.apache.org/jira/browse/PHOENIX-2493
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: java version "1.7.0_79"
> OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
>Reporter: Pierre Lacave
> Fix For: 4.8.0
>
>
> Hi,
> Using the ROW_TIMESTAMP feature on an UNSIGNED_LONG column doesn't work on 
> 4.6.
> It does work as expected if the column type is BIGINT however.
> Thanks
> {noformat}
> 0: jdbc:phoenix:hadoop1-dc:2181:/hbase> CREATE TABLE TEST (t UNSIGNED_LONG 
> NOT NULL CONSTRAINT pk PRIMARY KEY (t ROW_TIMESTAMP) );
> No rows affected (1.654 seconds)
> 0: jdbc:phoenix:hadoop1-dc:2181:/hbase> UPSERT INTO TEST (t) VALUES 
> (14491610811);
> Error: ERROR 201 (22000): Illegal data. Value of a column designated as 
> ROW_TIMESTAMP cannot be less than zero (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Value of a column 
> designated as ROW_TIMESTAMP cannot be less than zero
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:396)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
> at 
> org.apache.phoenix.schema.IllegalDataException.(IllegalDataException.java:38)
> at 
> org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:135)
> at 
> org.apache.phoenix.compile.UpsertCompiler.access$400(UpsertCompiler.java:114)
> at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:882)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:322)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:312)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2474) Cannot round to a negative precision (to the left of the decimal)

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2474:
--
Labels: function newbie phoenix  (was: function phoenix)

> Cannot round to a negative precision (to the left of the decimal)
> -
>
> Key: PHOENIX-2474
> URL: https://issues.apache.org/jira/browse/PHOENIX-2474
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Kevin Liew
>  Labels: function, newbie, phoenix
>
> Query:
> {noformat}select ROUND(444.44, -2){noformat}
> Expected result:
> {noformat}400{noformat}
> Actual result:
> {noformat}444.44{noformat}
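
For reference, the expected semantics match java.math.BigDecimal's handling of a negative scale (a sketch of the semantics, not Phoenix internals):

{code:java}
import java.math.BigDecimal;
import java.math.RoundingMode;

public class NegativePrecisionRound {
    public static void main(String[] args) {
        BigDecimal v = new BigDecimal("444.44");
        // A scale of -2 rounds two places to the left of the decimal point,
        // i.e. to the nearest hundred.
        BigDecimal r = v.setScale(-2, RoundingMode.HALF_UP);
        System.out.println(r.toPlainString()); // 400
    }
}
{code}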



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2463) Use PhoenixTestDriver for all tests

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2463.
---
Resolution: Won't Fix

There are a few tests where we can't do this, but most of them use the test 
driver now.

> Use PhoenixTestDriver for all tests
> ---
>
> Key: PHOENIX-2463
> URL: https://issues.apache.org/jira/browse/PHOENIX-2463
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Attachments: PHOENIX-2463.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2456) StaleRegionBoundaryCacheException on query with stats

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2456:
--
Assignee: Mujtaba Chohan

> StaleRegionBoundaryCacheException on query with stats
> -
>
> Key: PHOENIX-2456
> URL: https://issues.apache.org/jira/browse/PHOENIX-2456
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Minor
>
> {code}org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.{code}
> Got this exception after a data load; it persists even after a client 
> restart, with no split activity on the server. However, the query works fine 
> after the stats table is truncated.
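
The workaround described, as a sketch (assuming the default SYSTEM.STATS stats table; guideposts are rebuilt by the next stats collection):

{code:sql}
-- Clears all cached guidepost/region-boundary stats:
DELETE FROM SYSTEM.STATS;
{code}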



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2404) Create builder to construct PTableImpl

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2404:
--
Assignee: (was: Thomas D'Silva)

> Create builder to construct PTableImpl 
> ---
>
> Key: PHOENIX-2404
> URL: https://issues.apache.org/jira/browse/PHOENIX-2404
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Priority: Minor
>  Labels: newbie
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2404) Create builder to construct PTableImpl

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2404:
--
Labels: newbie  (was: )

> Create builder to construct PTableImpl 
> ---
>
> Key: PHOENIX-2404
> URL: https://issues.apache.org/jira/browse/PHOENIX-2404
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Priority: Minor
>  Labels: newbie
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2394) Push TTL to index tables when updated on data tables

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2394.
---
   Resolution: Fixed
 Assignee: James Taylor
Fix Version/s: 4.7.0

> Push TTL to index tables when updated on data tables
> 
>
> Key: PHOENIX-2394
> URL: https://issues.apache.org/jira/browse/PHOENIX-2394
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.7.0
>
>
> We should push TTL changes to index tables when updated on data tables, as I 
> don't think it makes sense for them to diverge.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2349) SortOrderExpressionTest.toChar() is failing in different locale

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2349:
--
Fix Version/s: 4.8.0

> SortOrderExpressionTest.toChar() is failing in different locale
> ---
>
> Key: PHOENIX-2349
> URL: https://issues.apache.org/jira/browse/PHOENIX-2349
> Project: Phoenix
>  Issue Type: Test
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 4.8.0
>
>
> For example in ko.kr,
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.025 sec 
> <<< FAILURE! - in org.apache.phoenix.expression.SortOrderExpressionTest
> toChar(org.apache.phoenix.expression.SortOrderExpressionTest)  Time elapsed: 
> 0.007 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<12/11/01 12:00 [AM]> but was:<12/11/01 
> 12:00 [오전]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.phoenix.expression.SortOrderExpressionTest.evaluateAndAssertResult(SortOrderExpressionTest.java:322)
>   at 
> org.apache.phoenix.expression.SortOrderExpressionTest.evaluateAndAssertResult(SortOrderExpressionTest.java:312)
>   at 
> org.apache.phoenix.expression.SortOrderExpressionTest.toChar(SortOrderExpressionTest.java:149)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2350) Support query larger than 8192 via query server

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2350:
--
Assignee: Josh Elser

> Support query larger than 8192 via query server
> ---
>
> Key: PHOENIX-2350
> URL: https://issues.apache.org/jira/browse/PHOENIX-2350
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Roman Rogozhnikov
>Assignee: Josh Elser
>Priority: Critical
>  Labels: queryserver
>
> I wish to configure the requestHeaderSize property for the query server to 
> send larger queries.
> Currently the query server logs a warning and doesn't execute the query: 
> 2015-10-22 12:59:49,027 WARN org.eclipse.jetty.http.HttpParser: Header is 
> too large >8192
> I found the solution for a normal Jetty server 
> (http://stackoverflow.com/a/25758901/5406473), but I don't understand how 
> I can configure it in the query server...
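
For comparison, this is how the knob from the linked answer is set in embedded Jetty; whether and how the Phoenix Query Server should expose it is exactly what this issue asks (a sketch under that assumption, with an illustrative port):

{code:java}
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class HeaderSizeExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server();
        HttpConfiguration httpConfig = new HttpConfiguration();
        httpConfig.setRequestHeaderSize(65536); // raise the 8192-byte default
        ServerConnector connector =
            new ServerConnector(server, new HttpConnectionFactory(httpConfig));
        connector.setPort(8765); // illustrative port
        server.addConnector(connector);
        server.start();
    }
}
{code}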



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2338) Couple of little tweaks in "Phoenix in 15 minutes or less"

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2338:
--
Fix Version/s: 4.8.0

> Couple of little tweaks in "Phoenix in 15 minutes or less"
> --
>
> Key: PHOENIX-2338
> URL: https://issues.apache.org/jira/browse/PHOENIX-2338
> Project: Phoenix
>  Issue Type: Bug
> Environment: On the website.
>Reporter: James Stanier
>Assignee: James Stanier
>Priority: Trivial
>  Labels: documentation
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2338.patch
>
>
> There are a couple of little things I'd like to fix in the "Phoenix in 15 
> minutes or less" page, based on my experience of running through the 
> instructions myself. Just wanted to register them before I put in a patch...
> 1. When copying and pasting the us_population.sql queries, the 
> Microsoft-style smart quotes lead to parsing errors when running with 
> psql.py (see the sketch after this list): 
> {code}
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Unexpected char: '“'
>   at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.nextStatement(SQLParser.java:98)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.nextStatement(PhoenixStatement.java:1278)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.(PhoenixPreparedStatement.java:84)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:312)
>   at 
> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:277)
>   at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:222)
> Caused by: java.lang.RuntimeException: Unexpected char: '“'
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mOTHER(PhoenixSQLLexer.java:4169)
>   at 
> org.apache.phoenix.parse.PhoenixSQLLexer.mTokens(PhoenixSQLLexer.java:5226)
>   at org.antlr.runtime.Lexer.nextToken(Lexer.java:85)
>   at 
> org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:143)
>   at 
> org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:137)
>   at 
> org.antlr.runtime.CommonTokenStream.consume(CommonTokenStream.java:71)
>   at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:106)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.parseAlias(PhoenixSQLParser.java:6106)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.selectable(PhoenixSQLParser.java:5223)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.select_list(PhoenixSQLParser.java:5050)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.single_select(PhoenixSQLParser.java:4315)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.unioned_selects(PhoenixSQLParser.java:4432)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.select_node(PhoenixSQLParser.java:4497)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:765)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.nextStatement(PhoenixSQLParser.java:450)
>   at org.apache.phoenix.parse.SQLParser.nextStatement(SQLParser.java:88)
>   ... 5 more
> {code}
> 2. Similarly, the CSV data provided does not have line breaks after each 
> line, which, when copied and pasted, gives an error:  
> {code}
> 15/10/20 10:50:45 ERROR util.CSVCommonsLoader: Error upserting record [NY, 
> New York, 8143197 CA, Los Angeles, 3844829 IL, Chicago, 2842518 TX, Houston, 
> 2016582 PA, Philadelphia, 1463281 AZ, Phoenix, 1461575 TX, San Antonio, 
> 1256509 CA, San Diego, 1255540 TX, Dallas, 1213825 CA, San Jose, 912332 ]: 
> java.sql.SQLException: ERROR 201 (22000): Illegal data.
> {code}
> 3. Just for clarity, I'd like to change the bullet-point "copy the phoenix 
> jar into the HBase lib directory of every region server" to "copy the phoenix 
> /server/ jar into the HBase lib directory of every region server"
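
The smart-quote failure from item 1, in isolation (queries are illustrative; us_population is the tutorial's table):

{code:sql}
-- Fails to parse: curly quotes pasted from the web page
SELECT city AS “name” FROM us_population;
-- Parses: plain ASCII double quotes
SELECT city AS "name" FROM us_population;
{code}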



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2308) Improve secondary index resiliency

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2308:
--
Fix Version/s: 4.8.0

> Improve secondary index resiliency
> --
>
> Key: PHOENIX-2308
> URL: https://issues.apache.org/jira/browse/PHOENIX-2308
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2290) Spark Phoenix cannot recognize Phoenix view fields

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2290:
--
Assignee: Josh Mahonin

> Spark Phoenix cannot recognize Phoenix view fields
> --
>
> Key: PHOENIX-2290
> URL: https://issues.apache.org/jira/browse/PHOENIX-2290
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.1
>Reporter: Fengdong Yu
>Assignee: Josh Mahonin
>  Labels: spark
>
> I created a base table in the HBase shell:
> {code}
> create 'test_table',  {NAME => 'cf1', VERSIONS => 1}
> put 'test_table', 'row_key_1', 'cf1:col_1', '200'
> {code}
> This is a very simple table. Then I created a Phoenix view in the Phoenix shell:
> {code}
> create view "test_table" (pk varchar primary key, "cf1"."col_1" varchar)
> {code}
> then do following in Spark shell:
> {code}
> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> 
> "\"test_table\"",  "zkUrl" -> "localhost:2181"))
> df.registerTempTable("temp")
> {code}
> {code}
> scala> df.printSchema
> root
>  |-- PK: string (nullable = true)
>  |-- col_1: string (nullable = true)
> {code}
> sqlContext.sql("select * from temp")  --> {color:red} This does 
> work{color}
> then:
> {code}
> sqlContext.sql("select * from temp where col_1='200' ")
> {code}
> {code}
> java.lang.RuntimeException: 
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=col_1
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:125)
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:80)
>   at 
> org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
>   at 
> org.apache.phoenix.spark.PhoenixRDD.getPartitions(PhoenixRDD.scala:47)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>   at scala.Option.getOrElse(Option.scala:120)
> {code}
> {color:red}
> I also tried:
> {code}
> sqlContext.sql("select * from temp where \"col_1\"='200' ")  --> EMPTY 
> result, no exception
> {code}
> {code}
> sqlContext.sql("select * from temp where \"cf1\".\"col_1\"='200' ")  --> 
> exception, cannot recognize SQL
> {code}
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2275) Add runtime support for MERGE statement in Phoenix/Calcite integration

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2275:
--
Description: Calcite supports the MERGE statement, so we should add runtime 
support for it in our calcite branch. This will likely depend on getting basic 
DML support in first (see PHOENIX-2197). See discussion here too: PHOENIX-2271  
(was: Calcite supports the MERGE statement, so we should add runtime support 
for it in our calcite branch. This will likely depend on getting basic DML 
support in first (see PHOENIX-2197).)

> Add runtime support for MERGE statement in Phoenix/Calcite integration
> --
>
> Key: PHOENIX-2275
> URL: https://issues.apache.org/jira/browse/PHOENIX-2275
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>  Labels: calcite
>
> Calcite supports the MERGE statement, so we should add runtime support for it 
> in our calcite branch. This will likely depend on getting basic DML support 
> in first (see PHOENIX-2197). See discussion here too: PHOENIX-2271



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2271) Upsert - CheckAndPut like functionality

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2271.
---
Resolution: Duplicate

Discussion concluded that PHOENIX-2275 is the answer

> Upsert - CheckAndPut like functionality
> ---
>
> Key: PHOENIX-2271
> URL: https://issues.apache.org/jira/browse/PHOENIX-2271
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Babar Tareen
> Attachments: patch.diff
>
>
> The Upsert statement does not support HBase's checkAndPut api, thus making it 
> difficult to conditionally update a row. Based on the comments from 
> PHOENIX-6, I have implemented such functionality. The Upsert statement is 
> modified to support compare clause, which allows us to pass in an expression. 
> The expression is evaluated against the current record and Upsert is only 
> performed when the expression evaluates to true. More details 
> [here|https://github.com/babartareen/phoenix].
> h4. Examples
> Given that the FirstName is always set for the users, create a user record if 
> one doesn't already exist.
> {code:sql}
> UPSERT INTO User (UserId, FirstName, LastName, Phone, Address, PIN) VALUES 
> (1, 'Alice', 'A', '123 456 7890', 'Some St. in a city', 1122) COMPARE 
> FirstName IS NULL;
> {code}
> Update the phone number for UserId '1' if the FirstName is set. Given that 
> the FirstName is always set for the users, this will only update the record 
> if it already exists.
> {code:sql}
> UPSERT INTO User (UserId, Phone) VALUES (1, '987 654 3210') COMPARE FirstName 
> IS NOT NULL;
> {code}
> Update the phone number if the first name for UserId '1' starts with 'Al' and 
> last name is 'A'
> {code:sql}
> UPSERT INTO User (UserId, Phone) VALUES (1, '987 654 3210') COMPARE FirstName 
> LIKE 'Al%' AND LastName = 'A';  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2268) Pentaho BA query fails

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2268:
--
Fix Version/s: 4.8.0

> Pentaho BA query fails
> --
>
> Key: PHOENIX-2268
> URL: https://issues.apache.org/jira/browse/PHOENIX-2268
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: alex kamil
>Priority: Minor
> Fix For: 4.8.0
>
>
> This query was generated by the Pentaho analyzer ("generic database" 
> datasource); it looks like Phoenix syntax needs to be enhanced to work with 
> Pentaho.
> select "FACT"."MAIL_FROM" as "c0" from (select * from EMAIL_ENRON) as "FACT" 
> group by "FACT"."MAIL_FROM" order by CASE WHEN "FACT"."MAIL_FROM" IS NULL 
> THEN 1 ELSE 0 END, "FACT"."MAIL_FROM" ASC;
> Error: ERROR 1001 (42I01): Undefined column family. familyName=FACT.null 
> (state=42I01,code=1001)
> org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): 
> Undefined column family. familyName=FACT.null
> at org.apache.phoenix.schema.PTableImpl.getColumnFamily(PTableImpl.java:787)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:361)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:369)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:401)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:141)
> at org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
> at 
> org.apache.phoenix.compile.GroupByCompiler.compile(GroupByCompiler.java:167)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:525)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:489)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:201)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:158)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:380)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:354)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:260)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:255)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:254)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
> Related: 
> * PHOENIX-643
> * http://jira.pentaho.com/browse/MONDRIAN-2254
> * https://blogs.apache.org/phoenix/entry/olap_with_apache_phoenix_and



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2217) Error Nested aggregate functions are not supported in the following query

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2217:
--
Summary: Error Nested aggregate functions are not supported in the 
following query  (was: Error Nested aggregate functions are not supported in 
the following quiry)

> Error Nested aggregate functions are not supported in the following query
> -
>
> Key: PHOENIX-2217
> URL: https://issues.apache.org/jira/browse/PHOENIX-2217
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Boris Furchin
>
> {code:sql}
> SELECT MAX(''), MAX(i) FROM myjunk
> {code}
> To reproduce:
> {code:sql}
> create table myjunk(i integer primary key)
> upsert into myjunk values (1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2215) Updated Phoenix Causes Schema/Data Migration that Creates Invalid Table Metadata

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113945#comment-15113945
 ] 

James Taylor commented on PHOENIX-2215:
---

If you can repro this, please let us know [~chill]. As a last resort, you can 
delete rows from the system catalog table either directly with a DELETE 
statement in Phoenix or through the HBase shell.
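
A sketch of that last resort (schema and table names are illustrative; back up SYSTEM.CATALOG before touching it):

{code:sql}
DELETE FROM SYSTEM.CATALOG
WHERE TABLE_SCHEM = 'MY_SCHEMA' AND TABLE_NAME = 'ORPHANED_TABLE';
{code}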

> Updated Phoenix Causes Schema/Data Migration that Creates Invalid Table 
> Metadata
> 
>
> Key: PHOENIX-2215
> URL: https://issues.apache.org/jira/browse/PHOENIX-2215
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Chris Hill
>
> Using Build #819 on Phoenix 4.x-HBase-0.98
> When I updated the Phoenix server and client in our Hadoop/HBase cluster, some 
> invalid table metadata was populated in SYSTEM.CATALOG.  These rows seem 
> to make it impossible to update or delete the tables, as an error about 
> invalid metadata occurs.
> Here are the details.  After the updating the Phoenix jars (client and 
> server) I see the following.
> Run: !tables
> I get two rows for tables, that previously weren't there, that have no 
> TABLE_TYPE specified.  (they are the only rows like that)
> Run: SELECT * FROM SYSTEM.CATALOG;
> I get two rows only for the tables; again the TABLE_TYPE is not specified, 
> the PK_NAME is blank, and there are no rows with COLUMN_NAMEs or COLUMN_FAMILYs.  
> These seem to be the only rows in the table with these characteristics.
> As mentioned, the real problem that led me to file the issue was that the 
> table cannot be changed. (I was trying to update it.)  If you try to delete it 
> you get the following error:
> 15/08/27 12:58:05 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: TABLE_NAME: Didn't find 
> expected key values for table row in metadata row
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1422)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11629)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3420)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3402)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29998)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: Didn't find expected key values 
> for table row in metadata row
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:732)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:468)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1442)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1396)
> ... 10 more
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1614)
> at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:93)
> at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:115)
> at 
>

[jira] [Updated] (PHOENIX-2183) Fix debug log line when doing secondary index WAL replay

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2183:
--
Fix Version/s: 4.8.0

> Fix debug log line when doing secondary index WAL replay
> 
>
> Key: PHOENIX-2183
> URL: https://issues.apache.org/jira/browse/PHOENIX-2183
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Yuhao Bi
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2183-v2.patch, PHOENIX-2183.patch
>
>
> In Indexer#postOpen(...) we write the log unconditionally.
> {code}
> LOG.info("Found some outstanding index updates that didn't succeed during"
> + " WAL replay - attempting to replay now.");
> //if we have no pending edits to complete, then we are done
> if (updates == null || updates.size() == 0) {
>   return;
> }
> {code}
> We should only write the log when we are actually doing a replay.
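
The fix the report implies is simply to reorder, so the message is logged only when there are edits to replay (a sketch of the intended shape, not the committed patch):

{code:java}
// If we have no pending edits to complete, then we are done.
if (updates == null || updates.size() == 0) {
  return;
}
LOG.info("Found some outstanding index updates that didn't succeed during"
    + " WAL replay - attempting to replay now.");
{code}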



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2188) Overriding hbase.client.scanner.caching doesn't work

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2188.
---
Resolution: Not A Problem

> Overriding hbase.client.scanner.caching doesn't work
> 
>
> Key: PHOENIX-2188
> URL: https://issues.apache.org/jira/browse/PHOENIX-2188
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> Below is the test I wrote which demonstrates that Phoenix's override of 
> 1000 for the scanner cache size is not being used:
> {code} 
> @Test
> public void testScannerCacheSize() throws Exception {
> Connection connection = 
> DriverManager.getConnection("jdbc:phoenix:localhost:2181");
> PhoenixConnection phxConn = 
> connection.unwrap(PhoenixConnection.class);
> // check config value in query services
> 
> System.out.println(PhoenixDriver.INSTANCE.getQueryServices().getProps().get(QueryServices.SCAN_CACHE_SIZE_ATTRIB));
>
> Statement stmt = phxConn.createStatement();
> PhoenixStatement phxStmt = stmt.unwrap(PhoenixStatement.class);
> // double check the config size by looking at statement fetch size
> System.out.println(phxStmt.getFetchSize());
> 
> }
> {code} 
> The offending code snippet is:
> {code}
>  QueryServices.withDefaults() {
> Configuration config = 
> HBaseFactoryProvider.getConfigurationFactory().getConfiguration();
> QueryServicesOptions options = new QueryServicesOptions(config)
> .setIfUnset(STATS_USE_CURRENT_TIME_ATTRIB, 
> DEFAULT_STATS_USE_CURRENT_TIME)
> ..
> .setIfUnset(SCAN_CACHE_SIZE_ATTRIB, DEFAULT_SCAN_CACHE_SIZE)
> {code} 
> The configuration returned by 
> HBaseFactoryProvider.getConfigurationFactory().getConfiguration() has the 
> hbase.client.scanner.caching set to 100. So the override doesn't take place 
> because we are using setIfUnset.
>  
> Another override that potentially won't work in the future, if HBase provides 
> its own default, is the RpcControllerFactory 
> (hbase.rpc.controllerfactory.class), because of
> {code}
> setIfUnset(RpcControllerFactory.CUSTOM_CONTROLLER_CONF_KEY, 
> DEFAULT_CLIENT_RPC_CONTROLLER_FACTORY)
> {code}
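
The trap in isolation, as a standalone Hadoop Configuration sketch (not the Phoenix code itself):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class SetIfUnsetTrap {
    public static void main(String[] args) {
        Configuration config = new Configuration();
        // hbase-default.xml has already supplied a value, so the key is "set"...
        config.set("hbase.client.scanner.caching", "100");
        // ...which makes setIfUnset a no-op: the 1000 default never lands.
        config.setIfUnset("hbase.client.scanner.caching", "1000");
        System.out.println(config.get("hbase.client.scanner.caching")); // 100
    }
}
{code}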



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2180) Measure performance of querying via the query server

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113949#comment-15113949
 ] 

James Taylor commented on PHOENIX-2180:
---

[~elserj] - I think you did this, no? Maybe link from here and close this?

> Measure performance of querying via the query server
> 
>
> Key: PHOENIX-2180
> URL: https://issues.apache.org/jira/browse/PHOENIX-2180
> Project: Phoenix
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Josh Elser
>Priority: Minor
>
> Would be useful to get a sense of when/where/if dropping the query server 
> into a Phoenix based architecture is viable from a performance perspective, 
> and what the gaps look like. 
> Might be a good time to do this after PHOENIX-2175



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2180) Measure performance of querying via the query server

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2180:
--
Assignee: Josh Elser

> Measure performance of querying via the query server
> 
>
> Key: PHOENIX-2180
> URL: https://issues.apache.org/jira/browse/PHOENIX-2180
> Project: Phoenix
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Josh Elser
>Priority: Minor
>
> Would be useful to get a sense of when/where/if dropping the query server 
> into a Phoenix based architecture is viable from a performance perspective, 
> and what the gaps look like. 
> Might be a good time to do this after PHOENIX-2175



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2170) WHERE condition with string expression returns incorrect results

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2170:
--
Fix Version/s: 4.8.0

> WHERE condition with string expression returns incorrect results
> 
>
> Key: PHOENIX-2170
> URL: https://issues.apache.org/jira/browse/PHOENIX-2170
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
>Reporter: Dan Meany
> Fix For: 4.8.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> Results incorrect for string expression in where clause.
> create table TEST (ID BIGINT PRIMARY KEY, FIRST_NAME VARCHAR,LAST_NAME 
> VARCHAR);
> upsert into TEST (ID, FIRST_NAME, LAST_NAME) values (1,'Joe','Smith');
> /* incorrectly returns 0 */
> select count(*) from TEST where lower(FIRST_NAME || ' ' || LAST_NAME) = 'joe 
> smith';
> /* incorrectly returns no rows */
> select * from TEST where lower(FIRST_NAME || ' ' || LAST_NAME) = 'joe smith';
> /* correctly returns 1 */
> select count(*) from TEST where lower(FIRST_NAME || ' ' ) = 'joe ';
> select count(*) from TEST where lower(' ' || LAST_NAME) = ' smith';
> select count(*) from TEST where lower(FIRST_NAME || ' ' || ' ') = 'joe  ';
> /* correctly returns 'joe smith' */
> select LOWER(FIRST_NAME||' '||LAST_NAME) from TEST;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2169) Illegal data error on UPSERT SELECT and JOIN with salted tables

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2169:
--
Fix Version/s: 4.8.0

> Illegal data error on UPSERT SELECT and JOIN with salted tables
> ---
>
> Key: PHOENIX-2169
> URL: https://issues.apache.org/jira/browse/PHOENIX-2169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
>Reporter: Josh Mahonin
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2169-bug.patch
>
>
> I have an issue where I get periodic failures (~50%) for an UPSERT SELECT 
> query involving a JOIN on salted tables. Unfortunately I haven't been able to 
> create a reproducible test case yet, though I'll keep trying. I believe this 
> same behaviour existed in 4.3.1 as well, so I don't think it's a regression.
> The upsert query itself looks something like this:
> {code}
> UPSERT INTO a(tid, ds, etp, eid, ts, atp, rel, tp, tpid, dt, pro) 
> SELECT c.tid, 
>c.ds, 
>c.etp, 
>c.eid, 
>c.dh, 
>0, 
>c.rel, 
>c.tp, 
>c.tpid, 
>current_time(), 
>1.0 / s.th 
> FROM   e_c c 
> join   e_s s 
> ON s.tid = c.tid 
> ANDs.ds = c.ds 
> ANDs.etp = c.etp 
> ANDs.eid = c.eid 
> WHERE  c.tid = 'FOO';
> {code}
> Without the upsert, the query always returns the right data, but with the 
> upsert, it ends up with failures like:
> Error: ERROR 201 (22000): Illegal data. ERROR 201 (22000): Illegal data. 
> Expected length of at least 109 bytes, but had 19 (state=22000,code=201)
> The explain plan looks like:
> {code}
> UPSERT SELECT
> CLIENT 16-CHUNK PARALLEL 16-WAY RANGE SCAN OVER E_C [0,'FOO']
>   SERVER FILTER BY FIRST KEY ONLY
>   PARALLEL INNER-JOIN TABLE 0
>   CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER E_S
>   DYNAMIC SERVER FILTER BY (C.TID, C.DS, C.ETP, C.EID) IN ((S.TID, S.DS, 
> S.ETP, S.EID))
> {code}
> I'm using SALT_BUCKETS=16 for both tables in the join, and this is a dev 
> environment, so only 1 region server. Note that without salted tables, I have 
> no issue with this query.
> The number of rows in E_C is around 23K, and the number of rows in E_S is 62.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2163) Measure performance of Phoenix/Calcite querying

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2163:
--
Assignee: Maryann Xue  (was: Shuxiong Ye)

> Measure performance of Phoenix/Calcite querying
> ---
>
> Key: PHOENIX-2163
> URL: https://issues.apache.org/jira/browse/PHOENIX-2163
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Maryann Xue
>  Labels: calcite
> Attachments: PHOENIX-2163.patch, PhoenixRegressor.log, 
> calcite-test-mac.tar.gz, hbase-logs.7167262.tar.gz, publish.7167262.tar.gz
>
>
> The work to integrate Phoenix with Calcite has come along far enough that 
> queries both against the data table and through a secondary index are 
> functional. As a checkpoint, we should compare performance of as many queries 
> as possible in our regression suite for the calcite branch against the latest 
> Phoenix release (4.5.0). The runtime of these two systems should be the same, 
> so this will give us an idea of the overhead of query parsing and compilation 
> for Calcite. This is super important, as it'll identify outstanding work 
> that'll be necessary to do prior to any releases on top of this new stack.
> Source code of regression suite is at 
> https://github.com/mujtabachohan/PhoenixRegressor
> Connection string location: 
> https://github.com/mujtabachohan/PhoenixRegressor/blob/master/src/main/resources/settings.json
> Instructions on how to compile and run: 
> https://github.com/mujtabachohan/PhoenixRegressor/blob/master/README.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2169) Illegal data error on UPSERT SELECT and JOIN with salted tables

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2169:
--
Assignee: Josh Mahonin

> Illegal data error on UPSERT SELECT and JOIN with salted tables
> ---
>
> Key: PHOENIX-2169
> URL: https://issues.apache.org/jira/browse/PHOENIX-2169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
>Reporter: Josh Mahonin
>Assignee: Josh Mahonin
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2169-bug.patch
>
>
> I have an issue where I get periodic failures (~50%) for an UPSERT SELECT 
> query involving a JOIN on salted tables. Unfortunately I haven't been able to 
> create a reproducible test case yet, though I'll keep trying. I believe this 
> same behaviour existed in 4.3.1 as well, so I don't think it's a regression.
> The upsert query itself looks something like this:
> {code}
> UPSERT INTO a(tid, ds, etp, eid, ts, atp, rel, tp, tpid, dt, pro) 
> SELECT c.tid, 
>c.ds, 
>c.etp, 
>c.eid, 
>c.dh, 
>0, 
>c.rel, 
>c.tp, 
>c.tpid, 
>current_time(), 
>1.0 / s.th 
> FROM   e_c c 
> join   e_s s 
> ON s.tid = c.tid 
> ANDs.ds = c.ds 
> ANDs.etp = c.etp 
> ANDs.eid = c.eid 
> WHERE  c.tid = 'FOO';
> {code}
> Without the upsert, the query always returns the right data, but with the 
> upsert, it ends up with failures like:
> Error: ERROR 201 (22000): Illegal data. ERROR 201 (22000): Illegal data. 
> Expected length of at least 109 bytes, but had 19 (state=22000,code=201)
> The explain plan looks like:
> {code}
> UPSERT SELECT
> CLIENT 16-CHUNK PARALLEL 16-WAY RANGE SCAN OVER E_C [0,'FOO']
>   SERVER FILTER BY FIRST KEY ONLY
>   PARALLEL INNER-JOIN TABLE 0
>   CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER E_S
>   DYNAMIC SERVER FILTER BY (C.TID, C.DS, C.ETP, C.EID) IN ((S.TID, S.DS, 
> S.ETP, S.EID))
> {code}
> I'm using SALT_BUCKETS=16 for both tables in the join, and this is a dev 
> environment, so only 1 region server. Note that without salted tables, I have 
> no issue with this query.
> The number of rows in E_C is around 23K, and the number of rows in E_S is 62.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2163) Measure performance of Phoenix/Calcite querying

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113950#comment-15113950
 ] 

James Taylor commented on PHOENIX-2163:
---

Can this be closed now, [~maryannxue]?

> Measure performance of Phoenix/Calcite querying
> ---
>
> Key: PHOENIX-2163
> URL: https://issues.apache.org/jira/browse/PHOENIX-2163
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Maryann Xue
>  Labels: calcite
> Attachments: PHOENIX-2163.patch, PhoenixRegressor.log, 
> calcite-test-mac.tar.gz, hbase-logs.7167262.tar.gz, publish.7167262.tar.gz
>
>
> The work to integrate Phoenix with Calcite has come along far enough that 
> queries both against the data table and through a secondary index are 
> functional. As a checkpoint, we should compare performance of as many queries 
> as possible in our regression suite for the calcite branch against the latest 
> Phoenix release (4.5.0). The runtime of these two systems should be the same, 
> so this will give us an idea of the overhead of query parsing and compilation 
> for Calcite. This is super important, as it'll identify outstanding work 
> that'll be necessary to do prior to any releases on top of this new stack.
> Source code of regression suite is at 
> https://github.com/mujtabachohan/PhoenixRegressor
> Connection string location: 
> https://github.com/mujtabachohan/PhoenixRegressor/blob/master/src/main/resources/settings.json
> Instructions on how to compile and run: 
> https://github.com/mujtabachohan/PhoenixRegressor/blob/master/README.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2162) Exception trying to write an ARRAY of UNSIGNED_SMALLINT

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2162:
--
Assignee: Josh Mahonin

> Exception trying to write an ARRAY of UNSIGNED_SMALLINT
> ---
>
> Key: PHOENIX-2162
> URL: https://issues.apache.org/jira/browse/PHOENIX-2162
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: - Windows 7
> - Spark 1.3.1
> - Scala 2.10.5
> - HBase 1.0.1.1
>Reporter: Riccardo Cardin
>Assignee: Josh Mahonin
>
> I am using Phoenix version 4.5.0 and the phoenix-spark plugin to write into 
> HBase an ARRAY of UNSIGNED_SMALLINT. As stated in the documentation, this 
> type is mapped to the java type java.lang.Short.
> Using the saveToPhoenix method on an RDD and passing a Scala Array of Short, I 
> obtain the following stacktrace:
> {noformat}
> Caused by: java.lang.ClassCastException: [S cannot be cast to 
> [Ljava.lang.Object;
>   at 
> org.apache.phoenix.schema.types.PUnsignedSmallintArray.isCoercibleTo(PUnsignedSmallintArray.java:81)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:174)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:157)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:144)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:872)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:856)
>   at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:745)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:550)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:538)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:318)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:311)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:309)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:239)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeBatch(PhoenixStatement.java:1315)
> {noformat}
> Changing the type of the column to CHAR(1) ARRAY and using an Array of String, 
> the write operation succeeds.
> I've tried forcing an Array[java.lang.Short] to avoid a mismatch 
> between Scala and Java types, but I obtained the same error.
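
The ClassCastException in isolation (a standalone sketch; the inference that the runtime array stays primitive even when declared Array[java.lang.Short] in Scala is not confirmed):

{code:java}
public class PrimitiveArrayCast {
    public static void main(String[] args) {
        short[] primitive = new short[] { 1, 2, 3 };
        Object o = primitive;
        // Throws java.lang.ClassCastException:
        //   [S cannot be cast to [Ljava.lang.Object;
        // because a primitive short[] (JVM descriptor "[S") is not an Object[].
        Object[] boxed = (Object[]) o;
    }
}
{code}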



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2162) Exception trying to write an ARRAY of UNSIGNED_SMALLINT

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2162:
--
Fix Version/s: 4.8.0

> Exception trying to write an ARRAY of UNSIGNED_SMALLINT
> ---
>
> Key: PHOENIX-2162
> URL: https://issues.apache.org/jira/browse/PHOENIX-2162
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: - Windows 7
> - Spark 1.3.1
> - Scala 2.10.5
> - HBase 1.0.1.1
>Reporter: Riccardo Cardin
>Assignee: Josh Mahonin
> Fix For: 4.8.0
>
>
> I am using Phoenix version 4.5.0 and the phoenix-spark plugin to write into 
> HBase an ARRAY of UNSIGNED_SMALLINT. As stated in the documentation, this 
> type is mapped to the java type java.lang.Short.
> Using the saveToPhoenix method on an RDD and passing a Scala Array of Short, I 
> obtain the following stacktrace:
> {noformat}
> Caused by: java.lang.ClassCastException: [S cannot be cast to 
> [Ljava.lang.Object;
>   at 
> org.apache.phoenix.schema.types.PUnsignedSmallintArray.isCoercibleTo(PUnsignedSmallintArray.java:81)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:174)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:157)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:144)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:872)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:856)
>   at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:745)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:550)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:538)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:318)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:311)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:309)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:239)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeBatch(PhoenixStatement.java:1315)
> {noformat}
> Changing the type of the column to CHAR(1) ARRAY and using an Array of 
> String, the write operation succeeds.
> I've tried to force an Array[java.lang.Short], to avoid a mismatch between 
> Scala and Java types, but I obtained the same error.
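
The [S cannot be cast to [Ljava.lang.Object; above is the JVM refusing to treat a primitive short[] as an object array. A standalone Java sketch of just that cast (illustrative only, not Phoenix or phoenix-spark code):

{code}
public class ShortArrayCast {
    public static void main(String[] args) {
        Object primitive = new short[] {1, 2, 3}; // JVM type [S
        Object boxed = new Short[] {1, 2, 3};     // JVM type [Ljava.lang.Short;

        System.out.println(primitive instanceof Object[]); // false
        System.out.println(boxed instanceof Object[]);     // true

        Object[] ok = (Object[]) boxed;       // fine: Short[] is an Object[]
        Object[] boom = (Object[]) primitive; // throws ClassCastException:
                                              // [S cannot be cast to [Ljava.lang.Object;
    }
}
{code}

Why the boxed Array[java.lang.Short] attempt still failed would depend on what the plugin does to the array before Phoenix's isCoercibleTo check; the sketch only shows why a primitive array trips the cast.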



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2158) Implement position/substring/trim built-in function for BINARY VARBINARY

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2158:
--
Assignee: (was: Shuxiong Ye)

> Implement position/substring/trim built-in function for BINARY VARBINARY
> 
>
> Key: PHOENIX-2158
> URL: https://issues.apache.org/jira/browse/PHOENIX-2158
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Shuxiong Ye
>
> See PHOENIX-1664.
> We will have these functions, each with its own arguments; an explanation of 
> each function follows (see the sketch after this list).
> position(string, substring): location of the specified substring. Note that 
> the result starts from 1, e.g. position('1', '123') will be 1, and 0 
> indicates the substring was not found in the string.
> trimb(string, bytes): remove the longest string containing only the bytes in 
> bytes from the start and end of string.
> substr(string, startInt[, lengthInt]): substr for BINARY.
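
To make the proposed 1-based semantics concrete, here is a hedged Java sketch of position() over byte arrays, following the example above (position('1', '123') = 1); the argument order and names are illustrative, not the final Phoenix signature:

{code}
// Returns the 1-based location of substring within string, or 0 if absent.
public static int position(byte[] substring, byte[] string) {
    outer:
    for (int i = 0; i + substring.length <= string.length; i++) {
        for (int j = 0; j < substring.length; j++) {
            if (string[i + j] != substring[j]) {
                continue outer;
            }
        }
        return i + 1; // result starts from 1
    }
    return 0;         // 0 indicates the substring was not found
}
{code}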



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2153) Fix a couple of Null pointer dereferences

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2153:
--
Fix Version/s: 4.8.0

> Fix a couple of Null pointer dereferences
> -
>
> Key: PHOENIX-2153
> URL: https://issues.apache.org/jira/browse/PHOENIX-2153
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2153.patch
>
>
> New Defects reported by Coverity Scan for Apache Phoenix
> CID 98770: Null pointer dereferences (FORWARD_NULL)
> /phoenix-core/src/main/java/org/apache/phoenix/expression/InListExpression.java: 
> 90 in org.apache.phoenix.expression.InListExpression.create(java.util.List, 
> boolean, org.apache.hadoop.hbase.io.ImmutableBytesWritable, boolean)()
> CID 98771: Null pointer dereferences (FORWARD_NULL)
> /phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java: 112 
> in
> org.apache.phoenix.pherf.util.PhoenixUtil.executeStatementThrowException(java.lang.String,
>  java.sql.Connection)()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2153) Fix a couple of Null pointer dereferences

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2153:
--
Assignee: Rajeshbabu Chintaguntla  (was: Alicia Ying Shu)

> Fix a couple of Null pointer dereferences
> -
>
> Key: PHOENIX-2153
> URL: https://issues.apache.org/jira/browse/PHOENIX-2153
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2153.patch
>
>
> New Defects reported by Coverity Scan for Apache Phoenix
> CID 98770: Null pointer dereferences (FORWARD_NULL)
> /phoenix-core/src/main/java/org/apache/phoenix/expression/InListExpression.java: 
> 90 in org.apache.phoenix.expression.InListExpression.create(java.util.List, 
> boolean, org.apache.hadoop.hbase.io.ImmutableBytesWritable, boolean)()
> CID 98771: Null pointer dereferences (FORWARD_NULL)
> /phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java: 112 
> in
> org.apache.phoenix.pherf.util.PhoenixUtil.executeStatementThrowException(java.lang.String,
>  java.sql.Connection)()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2153) Fix a couple of Null pointer dereferences

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113952#comment-15113952
 ] 

James Taylor commented on PHOENIX-2153:
---

Is this still relevant, [~rajeshbabu] & [~ayingshu]? If not, please close. If 
yes, please review and commit.

> Fix a couple of Null pointer dereferences
> -
>
> Key: PHOENIX-2153
> URL: https://issues.apache.org/jira/browse/PHOENIX-2153
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2153.patch
>
>
> New Defects reported by Coverity Scan for Apache Phoenix
> CID 98770: Null pointer dereferences (FORWARD_NULL)
> /phoenix-core/src/main/java/org/apache/phoenix/expression/InListExpression.java: 
> 90 in org.apache.phoenix.expression.InListExpression.create(java.util.List, 
> boolean, org.apache.hadoop.hbase.io.ImmutableBytesWritable, boolean)()
> CID 98771: Null pointer dereferences (FORWARD_NULL)
> /phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java: 112 
> in
> org.apache.phoenix.pherf.util.PhoenixUtil.executeStatementThrowException(java.lang.String,
>  java.sql.Connection)()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2136) Unable to configure thread pool size used in Embedded driver.

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2136.
---
Resolution: Invalid

Please ask questions on the dev or user mailing lists.

> Unable to configure thread pool size used in Embedded driver.
> 
>
> Key: PHOENIX-2136
> URL: https://issues.apache.org/jira/browse/PHOENIX-2136
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
>Reporter: Dhirendra Kumar Singh
>  Labels: jdbc
>
> We're using the Phoenix JDBC driver for propagating our changes to HBase. 
> Changes going in are on the order of millions per second. Phoenix being an 
> embedded driver, using a pooled connection is out of the question. While 
> going through the Phoenix driver code we saw that the pool size used by the 
> executor in QueryServicesImpl defaults to 128. There isn't a way to override 
> this value, since QueryServicesImpl calls super(defaultProps, 
> QueryServicesOptions.withDefaults()), which always picks 128 as the default 
> value.
> Before we go ahead and fork to make changes for our use cases, I wanted to 
> understand the motivation behind this limit.
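
For reference, the documented client-side knob for this pool is phoenix.query.threadPoolSize. A hedged sketch of setting it (ZooKeeper quorum hypothetical); note that, consistent with the limitation this issue describes, some versions only honor the value from the client-side hbase-site.xml because the executor is created once when the driver initializes:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PoolSizeSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Documented client-side property; whether it is honored here or must
        // live in the client's hbase-site.xml depends on the Phoenix version.
        props.setProperty("phoenix.query.threadPoolSize", "256");
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zkhost:2181", props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
{code}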



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2130) Can't connect to hbase cluster

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113955#comment-15113955
 ] 

James Taylor commented on PHOENIX-2130:
---

Is this still an issue, [~Beryl]? If so, we'll need more info and a clear way 
to repro.

> Can't connect to hbase cluster
> -
>
> Key: PHOENIX-2130
> URL: https://issues.apache.org/jira/browse/PHOENIX-2130
> Project: Phoenix
>  Issue Type: Bug
> Environment: ubuntu 14.0
>Reporter: BerylLin
>
> I have a Hadoop cluster with 6 nodes; the Hadoop version is 2.2.0.
> A ZooKeeper cluster is installed on 
> datanode1, datanode2, datanode3, datanode4, and datanode5.
> An HBase cluster, version 0.98.13, is installed in the environment above.
> HBase can be started and used successfully.
> The Phoenix version is 4.3.0 (4.4.0 has also been tried).
> When I use "sqlline.py datanode1:2181", I get the error below:
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:datanode1:2181 none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:datanode1:2181
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> 15/07/18 20:55:39 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class 
> org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set 
> hbase.table.sanity.checks to false at conf or table descriptor if you want to 
> bypass sanity checks
>   at 
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
>   at 
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Class 
> org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set 
> hbase.table.sanity.checks to false at conf or table descriptor if you want to 
> bypass sanity checks
>   at 
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
>   at 
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:870)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1194)
>   at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:111)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1682)
>

[jira] [Updated] (PHOENIX-2100) Indicate in explain plan when round robin iterator is being used

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2100:
--
Fix Version/s: 4.8.0

> Indicate in explain plan when round robin iterator is being used
> 
>
> Key: PHOENIX-2100
> URL: https://issues.apache.org/jira/browse/PHOENIX-2100
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
>
> I don't think a user has any idea when the round robin iterator will be in 
> use for a query. We should have an indication in the explain plan for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2153) Fix a couple of Null pointer dereferences

2016-01-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113957#comment-15113957
 ] 

Hadoop QA commented on PHOENIX-2153:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747844/PHOENIX-2153.patch
  against master branch at commit 551cc7db93a8a2c3cc9ff15e7cf9425e311ab125.
  ATTACHMENT ID: 12747844

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/235//console

This message is automatically generated.

> Fix a couple of Null pointer dereferences
> -
>
> Key: PHOENIX-2153
> URL: https://issues.apache.org/jira/browse/PHOENIX-2153
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2153.patch
>
>
> New Defects reported by Coverity Scan for Apache Phoenix
> CID 98770: Null pointer dereferences (FORWARD_NULL)
> /phoenix-core/src/main/java/org/apache/phoenix/expression/InListExpression.java: 
> 90 in org.apache.phoenix.expression.InListExpression.create(java.util.List, 
> boolean, org.apache.hadoop.hbase.io.ImmutableBytesWritable, boolean)()
> CID 98771: Null pointer dereferences (FORWARD_NULL)
> /phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java: 112 
> in
> org.apache.phoenix.pherf.util.PhoenixUtil.executeStatementThrowException(java.lang.String,
>  java.sql.Connection)()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2094) Query hint ignored for functional index

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2094:
--
Assignee: Thomas D'Silva

> Query hint ignored for functional index
> ---
>
> Key: PHOENIX-2094
> URL: https://issues.apache.org/jira/browse/PHOENIX-2094
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bryan Gerber
>Assignee: Thomas D'Silva
>
> Hints are not being used for functional index.
>  
> EXPLAIN SELECT /*+ INDEX(LOG LOG_LOWER_REQUEST_IDX) */ * FROM LOG WHERE 
> LOWER(RQ) LIKE '/jquery%';
> +----------------------------------------------------+
> | CLIENT 40-CHUNK PARALLEL 40-WAY FULL SCAN OVER LOG |
> | SERVER FILTER BY LOWER(RQ) LIKE '/jquery%'         |
> +----------------------------------------------------+
>  
> Test table has 2.9 million records; production table is many orders of 
> magnitude larger. 
> Here’s a simplified schema for the test table:
> CREATE TABLE IF NOT EXISTS LOG
> (
> TS VARCHAR NOT NULL,
> f VARCHAR NOT NULL,
> r INTEGER NOT NULL,
> sa VARCHAR,
> da VARCHAR,
> rq VARCHAR,
> CONSTRAINT pkey PRIMARY KEY (TS, f, r)
> ) 
> TTL='5616000',KEEP_DELETED_CELLS='false',IMMUTABLE_ROWS=true,COMPRESSION='SNAPPY',SALT_BUCKETS=40,MAX_FILESIZE='100',SPLIT_POLICY='org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy';
>  
> CREATE INDEX IF NOT EXISTS LOG_LOWER_REQUEST_IDX  ON LOG(LOWER(rq)) 
> TTL='5616000',KEEP_DELETED_CELLS='false',COMPRESSION='SNAPPY',MAX_FILESIZE='100',SPLIT_POLICY='org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy';
> CREATE INDEX IF NOT EXISTS LOG_REQUEST_IDX  ON LOG(rq) 
> TTL='5616000',KEEP_DELETED_CELLS='false',COMPRESSION='SNAPPY',MAX_FILESIZE='100',SPLIT_POLICY='org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy';



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2093) Support dropping columns from a table that has views

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2093.
---
Resolution: Duplicate

Duplicate of PHOENIX-2156

> Support dropping columns from a table that has views
> 
>
> Key: PHOENIX-2093
> URL: https://issues.apache.org/jira/browse/PHOENIX-2093
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> With the implementation of PHOENIX-1504, columns may be added to a table that 
> has views. However, we still don't support dropping columns from a table that 
> has views.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2092) Support read-your-own-writes semantics without sending updates to server

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2092:
--
Summary: Support read-your-own-writes semantics without sending updates to 
server  (was: [BRAINSTORMING] Support read-your-own-writes semantics without 
sending updates to server)

> Support read-your-own-writes semantics without sending updates to server
> 
>
> Key: PHOENIX-2092
> URL: https://issues.apache.org/jira/browse/PHOENIX-2092
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Our current transaction integration sends uncommitted data to the HBase 
> server when a client attempts to read on a connection with uncommitted data. 
> Instead, we could (in theory) keep the data on the client and treat these 
> local edits as a kind of separate region which would get merged into the 
> results of any queries. Unclear how many cases would be handled, though:
> - partially aggregated results would need to be adjusted (i.e. subtract the 
> sum from an overridden value and add the sum from the new value)
> - secondary index usage - the client would need to know the rows to delete 
> and the rows to add to the index table
> - deleted rows
> Parking this here for now as food for thought.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2183) Fix debug log line when doing secondary index WAL replay

2016-01-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113970#comment-15113970
 ] 

Hadoop QA commented on PHOENIX-2183:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12750941/PHOENIX-2183-v2.patch
  against master branch at commit 551cc7db93a8a2c3cc9ff15e7cf9425e311ab125.
  ATTACHMENT ID: 12750941

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
33 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/234//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/234//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/234//console

This message is automatically generated.

> Fix debug log line when doing secondary index WAL replay
> 
>
> Key: PHOENIX-2183
> URL: https://issues.apache.org/jira/browse/PHOENIX-2183
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Yuhao Bi
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2183-v2.patch, PHOENIX-2183.patch
>
>
> In Indexer#postOpen(...) we write the log line unconditionally.
> {code}
> LOG.info("Found some outstanding index updates that didn't succeed during"
> + " WAL replay - attempting to replay now.");
> //if we have no pending edits to complete, then we are done
> if (updates == null || updates.size() == 0) {
>   return;
> }
> {code}
> We should only write the log when we are actually doing a replay.
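
A sketch of the reordering the description asks for, mirroring the quoted snippet (not the committed patch):

{code}
// If we have no pending edits to complete, then we are done; returning before
// logging means the message only appears when a replay will actually happen.
if (updates == null || updates.size() == 0) {
  return;
}
LOG.info("Found some outstanding index updates that didn't succeed during"
    + " WAL replay - attempting to replay now.");
{code}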



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2090) Refine PhoenixTableScan.computeSelfCost() when scanRanges is available

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2090:
--
Assignee: Maryann Xue

> Refine PhoenixTableScan.computeSelfCost() when scanRanges is available
> --
>
> Key: PHOENIX-2090
> URL: https://issues.apache.org/jira/browse/PHOENIX-2090
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: calcite
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> We should compute a more accurate cost based on the "scanRanges" so that we 
> can better choose among different indices.
> For example, we may have more than one index on different index keys, say 
> IDX1 indexed on column a and IDX2 indexed on column b, and a query like 
> "select x, y, z where a between 'A1' and 'A2' and b between 'B3' and 'B4'".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2083) Pig map splits are very uneven

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113977#comment-15113977
 ] 

James Taylor commented on PHOENIX-2083:
---

[~br...@brianjohnson.cc] - would you mind trying this with our 4.7.0 RC? Make 
sure you run UPDATE STATISTICS on your table (or force a major compaction) 
before running.

> Pig map splits are very uneven
> 
>
> Key: PHOENIX-2083
> URL: https://issues.apache.org/jira/browse/PHOENIX-2083
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
>Reporter: Brian Johnson
> Fix For: 4.8.0
>
>
> When running a Pig job on MR with the Phoenix loader we got about 75 map 
> tasks, but there was a huge amount of skew in how the records were 
> allocated: the vast majority went to about 20 mappers, and 5 got nothing at 
> all. 
> Task
> Value
> task_1433431098673_66646_m_42 0
> task_1433431098673_66646_m_57 0
> task_1433431098673_66646_m_61 0
> task_1433431098673_66646_m_67 0
> task_1433431098673_66646_r_00 0
> task_1433431098673_66646_m_31 127242
> task_1433431098673_66646_m_26 130669
> task_1433431098673_66646_m_17 179685
> task_1433431098673_66646_m_68 190741
> task_1433431098673_66646_m_40 191062
> task_1433431098673_66646_m_56 191509
> task_1433431098673_66646_m_53 191518
> task_1433431098673_66646_m_60 191560
> task_1433431098673_66646_m_48 191579
> task_1433431098673_66646_m_41 191623
> task_1433431098673_66646_m_47 191686
> task_1433431098673_66646_m_65 191720
> task_1433431098673_66646_m_64 191726
> task_1433431098673_66646_m_54 191763
> task_1433431098673_66646_m_66 191871
> task_1433431098673_66646_m_52 191875
> task_1433431098673_66646_m_45 191908
> task_1433431098673_66646_m_49 191914
> task_1433431098673_66646_m_63 192124
> task_1433431098673_66646_m_58 192352
> task_1433431098673_66646_m_69 192352
> task_1433431098673_66646_m_44 192519
> task_1433431098673_66646_m_07 529769
> task_1433431098673_66646_m_18 584940
> task_1433431098673_66646_m_05 585864
> task_1433431098673_66646_m_03 697683
> task_1433431098673_66646_m_16 709321
> task_1433431098673_66646_m_08 710190
> task_1433431098673_66646_m_04 710774
> task_1433431098673_66646_m_11 711818
> task_1433431098673_66646_m_38 713862
> task_1433431098673_66646_m_37 714577
> task_1433431098673_66646_m_22 716796
> task_1433431098673_66646_m_14 717478
> task_1433431098673_66646_m_25 722809
> task_1433431098673_66646_m_30 723182
> task_1433431098673_66646_m_24 723378
> task_1433431098673_66646_m_13 731836
> task_1433431098673_66646_m_10 732525
> task_1433431098673_66646_m_01 734611
> task_1433431098673_66646_m_36 739874
> task_1433431098673_66646_m_72 1810925
> task_1433431098673_66646_m_39 1923212
> task_1433431098673_66646_m_59 2014210
> task_1433431098673_66646_m_55 2287499
> task_1433431098673_66646_m_74 2887750
> task_1433431098673_66646_m_73 3049942
> task_1433431098673_66646_m_29 3156535
> task_1433431098673_66646_m_71 3841375
> task_1433431098673_66646_m_27 4001882
> task_1433431098673_66646_m_51 4343619
> task_1433431098673_66646_m_34 5363718
> task_1433431098673_66646_m_50 7734798
> task_1433431098673_66646_m_20 9543930
> task_1433431098673_66646_m_70 10058382
> task_1433431098673_66646_m_46 10143291
> task_1433431098673_66646_m_62 10263757
> task_1433431098673_66646_m_32 10908072
> task_1433431098673_66646_m_15 11182800
> task_1433431098673_66646_m_00 11300385
> task_1433431098673_66646_m_43 11359327
> task_1433431098673_66646_m_21 12632598
> task_1433431098673_66646_m_09 14598258
> task_1433431098673_66646_m_28 14698359
> task_1433431098673_66646_m_33 16407474
> task_1433431098673_66646_m_12 17944269
> task_1433431098673_66646_m_23 20568188
> task_1433431098673_66646_m_35 21656353
> task_1433431098673_66646_m_02 27413291
> task_1433431098673_66646_m_06 35573698
> task_1433431098673_66646_m_19 35717128



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2083) Pig map splits are very uneven

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2083:
--
Fix Version/s: 4.8.0

> Pig map splits are very uneven
> 
>
> Key: PHOENIX-2083
> URL: https://issues.apache.org/jira/browse/PHOENIX-2083
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
>Reporter: Brian Johnson
> Fix For: 4.8.0
>
>
> When running a Pig job on MR with the Phoenix loader we got about 75 map 
> tasks, but there was a huge amount of skew in how the records were 
> allocated: the vast majority went to about 20 mappers, and 5 got nothing at 
> all. 
> Task
> Value
> task_1433431098673_66646_m_42 0
> task_1433431098673_66646_m_57 0
> task_1433431098673_66646_m_61 0
> task_1433431098673_66646_m_67 0
> task_1433431098673_66646_r_00 0
> task_1433431098673_66646_m_31 127242
> task_1433431098673_66646_m_26 130669
> task_1433431098673_66646_m_17 179685
> task_1433431098673_66646_m_68 190741
> task_1433431098673_66646_m_40 191062
> task_1433431098673_66646_m_56 191509
> task_1433431098673_66646_m_53 191518
> task_1433431098673_66646_m_60 191560
> task_1433431098673_66646_m_48 191579
> task_1433431098673_66646_m_41 191623
> task_1433431098673_66646_m_47 191686
> task_1433431098673_66646_m_65 191720
> task_1433431098673_66646_m_64 191726
> task_1433431098673_66646_m_54 191763
> task_1433431098673_66646_m_66 191871
> task_1433431098673_66646_m_52 191875
> task_1433431098673_66646_m_45 191908
> task_1433431098673_66646_m_49 191914
> task_1433431098673_66646_m_63 192124
> task_1433431098673_66646_m_58 192352
> task_1433431098673_66646_m_69 192352
> task_1433431098673_66646_m_44 192519
> task_1433431098673_66646_m_07 529769
> task_1433431098673_66646_m_18 584940
> task_1433431098673_66646_m_05 585864
> task_1433431098673_66646_m_03 697683
> task_1433431098673_66646_m_16 709321
> task_1433431098673_66646_m_08 710190
> task_1433431098673_66646_m_04 710774
> task_1433431098673_66646_m_11 711818
> task_1433431098673_66646_m_38 713862
> task_1433431098673_66646_m_37 714577
> task_1433431098673_66646_m_22 716796
> task_1433431098673_66646_m_14 717478
> task_1433431098673_66646_m_25 722809
> task_1433431098673_66646_m_30 723182
> task_1433431098673_66646_m_24 723378
> task_1433431098673_66646_m_13 731836
> task_1433431098673_66646_m_10 732525
> task_1433431098673_66646_m_01 734611
> task_1433431098673_66646_m_36 739874
> task_1433431098673_66646_m_72 1810925
> task_1433431098673_66646_m_39 1923212
> task_1433431098673_66646_m_59 2014210
> task_1433431098673_66646_m_55 2287499
> task_1433431098673_66646_m_74 2887750
> task_1433431098673_66646_m_73 3049942
> task_1433431098673_66646_m_29 3156535
> task_1433431098673_66646_m_71 3841375
> task_1433431098673_66646_m_27 4001882
> task_1433431098673_66646_m_51 4343619
> task_1433431098673_66646_m_34 5363718
> task_1433431098673_66646_m_50 7734798
> task_1433431098673_66646_m_20 9543930
> task_1433431098673_66646_m_70 10058382
> task_1433431098673_66646_m_46 10143291
> task_1433431098673_66646_m_62 10263757
> task_1433431098673_66646_m_32 10908072
> task_1433431098673_66646_m_15 11182800
> task_1433431098673_66646_m_00 11300385
> task_1433431098673_66646_m_43 11359327
> task_1433431098673_66646_m_21 12632598
> task_1433431098673_66646_m_09 14598258
> task_1433431098673_66646_m_28 14698359
> task_1433431098673_66646_m_33 16407474
> task_1433431098673_66646_m_12 17944269
> task_1433431098673_66646_m_23 20568188
> task_1433431098673_66646_m_35 21656353
> task_1433431098673_66646_m_02 27413291
> task_1433431098673_66646_m_06 35573698
> task_1433431098673_66646_m_19 35717128



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2082) Can't send UNSIGNED_TINYINT as a query parameter using query server

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2082:
--
Assignee: Josh Elser

> Can't send UNSIGNED_TINYINT as a query parameter using query server
> ---
>
> Key: PHOENIX-2082
> URL: https://issues.apache.org/jira/browse/PHOENIX-2082
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lukas Lalinsky
>Assignee: Josh Elser
>
> I'm not 100% sure this isn't a problem in my code, but it seems to me that 
> the adapter between Avatica and Phoenix doesn't know how to translate an int 
> from the JSON request to UNSIGNED_TINYINT (note that TINYINT works fine).
> I have this table:
> {code}
> CREATE TABLE phoenixdb_test_tbl1 (id integer primary key, val 
> unsigned_tinyint)
> {code}
> Here are example requests I'm sending to the query server:
> {code}
> {'connectionId': '78db88ad-a3fe-467b-81b4-671acb3e02e7',
>  'maxRowCount': -1,
>  'request': 'prepare',
>  'sql': 'UPSERT INTO phoenixdb_test_tbl1 VALUES (6, ?)'}
> {code}
> followed by:
> {code}
> {'connectionId': '78db88ad-a3fe-467b-81b4-671acb3e02e7',
>  'fetchMaxRowCount': -1,
>  'offset': 0,
>  'parameterValues': [127],
>  'request': 'fetch',
>  'statementId': 1762997996}
> {code}
> The result is this exception:
> {noformat}
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: 
> java.lang.RuntimeException: org.apache.phoenix.schema.TypeMismatchException: 
> ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and INTEGER for 127
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:737)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.fetch(JdbcMeta.java:821)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:162)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.calcite.avatica.remote.Service$FetchRequest.accept(Service.java:314)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.calcite.avatica.remote.Service$FetchRequest.accept(Service.java:288)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.calcite.avatica.remote.JsonHandler.apply(JsonHandler.java:43)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.calcite.avatica.server.AvaticaHandler.handle(AvaticaHandler.java:55)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.server.Server.handle(Server.java:497)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:245)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> java.lang.Thread.run(Thread.java:745)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: Caused by: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. UNSIGNED_TINYINT and INTEGER for 127
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:53)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:171)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:143)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:858)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:842)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
> Jun 28 23:07:12 vagrant-ubuntu-vivid-64 hbase[674]: at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.jav

[jira] [Updated] (PHOENIX-2081) Typo in the BIGINT range on the datatypes documentation page

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2081:
--
Fix Version/s: 4.8.0

> Typo in the BIGINT range on the datatypes documentation page
> 
>
> Key: PHOENIX-2081
> URL: https://issues.apache.org/jira/browse/PHOENIX-2081
> Project: Phoenix
>  Issue Type: Task
>Reporter: Lukas Lalinsky
>Priority: Trivial
> Fix For: 4.8.0
>
>
> Currently it says the range for BIGINT is from -9223372036854775807 to 
> 9223372036854775807, but the range actually starts at -9223372036854775808.
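
Since Phoenix maps BIGINT to java.lang.Long, the correct bounds can be checked directly:

{code}
public class BigintBounds {
    public static void main(String[] args) {
        System.out.println(Long.MIN_VALUE); // -9223372036854775808
        System.out.println(Long.MAX_VALUE); //  9223372036854775807
    }
}
{code}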



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2080) Phoenix driver does not throw an exception and hangs if HBase is stopped.

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2080.
---
Resolution: Cannot Reproduce

This has been addressed in a previous release. Please reopen if you see this. 
It also depends on your HBase settings.

> Phoenix driver does not throw an exception and hangs if HBase is stopped.  
> --
>
> Key: PHOENIX-2080
> URL: https://issues.apache.org/jira/browse/PHOENIX-2080
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
> Environment: Hbase 98.9
> Hadoop2
>Reporter: suraj misra
>
> DriverManager.getConnection(HbaseconnectionURL) hangs if HBase is not 
> available. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2079) databaseMetaData.getIndexInfo is not giving proper index names

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2079:
--
Fix Version/s: 4.8.0

> databaseMetaData.getIndexInfo is not giving proper index names
> --
>
> Key: PHOENIX-2079
> URL: https://issues.apache.org/jira/browse/PHOENIX-2079
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
> Environment: Hbase 98.9
> Hadoop 2 
>Reporter: suraj misra
> Fix For: 4.8.0
>
>
> Hi,
> The databaseMetaData.getIndexInfo method is giving wrong index names for 
> primary keys.
> For example, suppose we have a table with one primary key column and another 
> column on which an index is created; the above API then gives the wrong 
> index name for the primary key column.
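
For context, a sketch of reading index metadata through standard JDBC (connection URL and table name T1 are hypothetical):

{code}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class IndexInfoSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zkhost:2181")) {
            DatabaseMetaData md = conn.getMetaData();
            try (ResultSet rs = md.getIndexInfo(null, null, "T1", false, false)) {
                while (rs.next()) {
                    // The report: the row for the primary key column comes
                    // back with the wrong INDEX_NAME.
                    System.out.println(rs.getString("INDEX_NAME") + " -> "
                        + rs.getString("COLUMN_NAME"));
                }
            }
        }
    }
}
{code}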



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2077) Assert fail on TestFamilyOnlyFilter

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2077.
---
Resolution: Not A Problem

Fixed already

> Assert fail on TestFamilyOnlyFilter
> ---
>
> Key: PHOENIX-2077
> URL: https://issues.apache.org/jira/browse/PHOENIX-2077
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.4.0
>Reporter: Jun Ng
>Priority: Minor
>
> Assertions failed in TestFamilyOnlyFilter while I was trying to build 
> Phoenix and run the unit tests.
> It threw errors like: 
> Tests run: 3, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec <<< 
> FAILURE! - in 
> org.apache.phoenix.hbase.index.covered.filter.TestFamilyOnlyFilter
> testResetFilter(org.apache.phoenix.hbase.index.covered.filter.TestFamilyOnlyFilter)
>   Time elapsed: 0.004 sec  <<< FAILURE!
> java.lang.AssertionError: Didn't filter out non-matching family! 
> expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at 
> org.apache.phoenix.hbase.index.covered.filter.TestFamilyOnlyFilter.testResetFilter(TestFamilyOnlyFilter.java:86)
> testPassesFirstFamily(org.apache.phoenix.hbase.index.covered.filter.TestFamilyOnlyFilter)
>   Time elapsed: 0 sec  <<< FAILURE!
> java.lang.AssertionError: Didn't filter out non-matching family! 
> expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at 
> org.apache.phoenix.hbase.index.covered.filter.TestFamilyOnlyFilter.testPassesFirstFamily(TestFamilyOnlyFilter.java:50)
> testPassesTargetFamilyAsNonFirstFamily(org.apache.phoenix.hbase.index.covered.filter.TestFamilyOnlyFilter)
>   Time elapsed: 0 sec  <<< FAILURE!
> java.lang.AssertionError: Didn't filter out non-matching family! 
> expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at 
> org.apache.phoenix.hbase.index.covered.filter.TestFamilyOnlyFilter.testPassesTargetFamilyAsNonFirstFamily(TestFamilyOnlyFilter.java:64)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2053) The Expression IN has Bug when the params are less than zero

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2053.
---
Resolution: Not A Problem

Fixed in prior release.

> The Expression  IN  has Bug when the params are less than zero
> --
>
> Key: PHOENIX-2053
> URL: https://issues.apache.org/jira/browse/PHOENIX-2053
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: PeiLiping
>
> drop table test;
> create table test(a integer not null,b integer not null, c integer not null , 
> d integer constraint pk primary key (a,b,c) ) compression='SNAPPY' , 
> TTL=86400  split on (2,3,4,5,6,7,8,9,10);
> upsert into test values (1,-1,1,1) ;
> upsert into test values (2,-2,2,1) ;
> upsert into test values (3,-3,3,1) ;
> upsert into test values (4,-4,4,1) ;
> upsert into test values (5,-5,5,1) ;
> upsert into test values (6,-6,6,1) ;
> select c,sum(d) from test where a between 0 and 10 and b in (-2,-4,-5,-1) 
> group by c ;
> +------+---------+
> |  C   | SUM(D)  |
> +------+---------+
> |  5   |    1    |
> +------+---------+
> select c,sum(d) from test where a between 0 and 10 and (0-b) in (2,4,5,1) 
> group by c ;
> +------+---------+
> |  C   | SUM(D)  |
> +------+---------+
> |  1   |    1    |
> |  2   |    1    |
> |  4   |    1    |
> |  5   |    1    |
> +------+---------+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2052) Division Bug In Group By SQL The result turn to NULL

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2052:
--
Fix Version/s: 4.8.0

> Division Bug In Group By SQL   The result turn to NULL 
> ---
>
> Key: PHOENIX-2052
> URL: https://issues.apache.org/jira/browse/PHOENIX-2052
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
> Environment: hbase 0.98 phoenix 4.3 
>Reporter: PeiLiping
>Priority: Critical
> Fix For: 4.8.0
>
>
> When using division in a GROUP BY SQL statement, the result turns to null:
> select  a , floor( b/4 ) as t  from test group by a, t
> 123  null
> 123   null
> select  a , floor( b * 0.25 ) as t  from test group by a, t
> 123  1
> 123  2 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2048:
--
Fix Version/s: 4.8.0

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jonathan Leech
>Priority: Minor
> Fix For: 4.8.0
>
>
> The to_char() function uses the default rounding mode of Java's 
> DecimalFormat, which is a strange one called HALF_EVEN: it rounds a '5' in 
> the last position either up or down depending on the preceding digit.
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior, e.g. globally, as a client 
> config, or as an argument to the to_char() function.
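
The difference is easy to demonstrate with DecimalFormat alone (a standalone sketch, not Phoenix's to_char() code):

{code}
import java.math.RoundingMode;
import java.text.DecimalFormat;

public class RoundingDemo {
    public static void main(String[] args) {
        DecimalFormat halfEven = new DecimalFormat("#"); // default: HALF_EVEN
        DecimalFormat halfUp = new DecimalFormat("#");
        halfUp.setRoundingMode(RoundingMode.HALF_UP);

        System.out.println(halfEven.format(2.5)); // "2" - rounds to the even neighbor
        System.out.println(halfEven.format(3.5)); // "4" - rounds to the even neighbor
        System.out.println(halfUp.format(2.5));   // "3" - halves always round up
        System.out.println(halfUp.format(3.5));   // "4"
    }
}
{code}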



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2046) Metadata for queries returns wrong precision and scale

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2046:
--
Fix Version/s: 4.8.0

> Metadata for queries returns wrong precision and scale
> --
>
> Key: PHOENIX-2046
> URL: https://issues.apache.org/jira/browse/PHOENIX-2046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix 4.4 and Hbase 1.1
>Reporter: Pasupathy M
> Fix For: 4.8.0
>
>
> We have a query (typical query to a sample foodmart in mondrian)
> select "time_by_day"."the_year" as "c0", "product_class"."product_family" as 
> "c1", sum("sales_fact_1997"."store_cost") as "m0" from "sales_fact_1997" as 
> "sales_fact_1997", "time_by_day" as "time_by_day", "product" as "product", 
> "product_class" as "product_class" where "time_by_day"."the_year" = 1997 and 
> "product_class"."product_family" in ('Drink', 'Food', 'Non-Consumable') and 
> "sales_fact_1997"."time_id" = "time_by_day"."time_id" and 
> "sales_fact_1997"."product_id" = "product"."product_id" and 
> "product"."product_class_id" = "product_class"."product_class_id" group by 
> "time_by_day"."the_year", "product_class"."product_family"
> While getting the metadata for this query in Phoenix, 
> sum("sales_fact_1997"."store_cost") is reported as a decimal with 0 
> precision and 0 scale. The original store_cost column is decimal(10,4).
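
A sketch of how the reported metadata would be observed through standard JDBC (connection URL hypothetical; the query is reduced to the aggregate in question):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class PrecisionCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zkhost:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT SUM(\"store_cost\") FROM \"sales_fact_1997\"")) {
            ResultSetMetaData rsmd = rs.getMetaData();
            // Should reflect the DECIMAL(10,4) source column; the reported
            // bug is that both come back as 0.
            System.out.println("precision=" + rsmd.getPrecision(1)
                + " scale=" + rsmd.getScale(1));
        }
    }
}
{code}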



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2045) Metadata for tables that has index has null columns

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2045:
--
Fix Version/s: 4.8.0

> Metadata for tables that has index has null columns
> ---
>
> Key: PHOENIX-2045
> URL: https://issues.apache.org/jira/browse/PHOENIX-2045
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix 4.4/ Hbase 1.1
>Reporter: Pasupathy M
> Fix For: 4.8.0
>
>
> We have a table
> create table t1(x varchar primary key, y integer)
> and an index is created
> create index y_index on t1(y)
> While querying Phoenix metadata, it returns a null column as well.
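
A sketch of reproducing the report with standard JDBC metadata calls (connection URL hypothetical; table from the description):

{code}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class NullColumnCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zkhost:2181")) {
            DatabaseMetaData md = conn.getMetaData();
            try (ResultSet rs = md.getColumns(null, null, "T1", null)) {
                while (rs.next()) {
                    // The report: a table that has an index returns an extra
                    // row whose COLUMN_NAME is null.
                    System.out.println(rs.getString("COLUMN_NAME"));
                }
            }
        }
    }
}
{code}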



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2037) get java.lang.ArrayIndexOutOfBoundsException on a particular query

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2037:
--
Fix Version/s: 4.8.0

> get java.lang.ArrayIndexOutOfBoundsException on a particular query 
> ---
>
> Key: PHOENIX-2037
> URL: https://issues.apache.org/jira/browse/PHOENIX-2037
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
> Fix For: 4.8.0
>
>
> 1. PROBLEM DESCRIPTION:
> -
> get java.lang.ArrayIndexOutOfBoundsException on a particular query when 
> invoking ResultSet.next().
> The query is:
> SELECT MAX( CAST( RTRIM(T1."F02CHAR_10") AS CHAR(10) ) ) FROM SDLJUNK T1
> 2. THE EXCEPTION CALL STACK TRACE:
> --- 
> Exception: java.lang.ArrayIndexOutOfBoundsException
> java.lang.ArrayIndexOutOfBoundsException at java.lang.System.arraycopy(Native 
> Method)
>   at 
> org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:120)
>   at 
> org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:91)
>   at 
> org.apache.phoenix.expression.aggregator.Aggregators.toBytes(Aggregators.java:109)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:82)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:756)
>   at repro1.main(repro1.java:90)
> 3. THIS IS THE TEST PROGRAM:
> ---
> /*
>  * It may be freely used, modified, and distributed with no restrictions.
>  */
> import java.sql.Connection;
> import java.sql.DatabaseMetaData;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
> import java.sql.PreparedStatement;
> import java.sql.ResultSetMetaData;
> import java.io.Reader;
> /**
>  */
> public class repro1
> {
>   /**
>* Main method.
>* 
>* @param args
>*no arguments required
>*/
>   public static void main(String [] args)
>   {
> Connection con = null;
> Statement stmt = null;
> ResultSet rst = null;
> String drptab = "DROP TABLE SDLJUNK";
> String crttab = "CREATE TABLE SDLJUNK(FA01INT INTEGER PRIMARY KEY, 
> F02CHAR_10 CHAR(10))";
> String instab = "UPSERT INTO SDLJUNK VALUES (1, 'ABC' )";
> String seltab = "SELECT MAX( CAST( RTRIM(T1.\"F02CHAR_10\") AS CHAR(10) ) 
> ) FROM SDLJUNK T1";
> try {
>   System.out.println("=");
>   System.out.println("Problem description:");
>   System.out.println("Getting java.lang.ArrayIndexOutOfBoundsException"); 
>   System.out.println("when doing a specific query."); 
>   System.out.println("Failure happens on ResultSet.next() method call."); 
>   System.out.println("=");
>   System.out.println("");
>   // Create new instance of JDBC Driver and make connection.
>   System.out.println("Registering Driver.");
>   Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
>   String url="jdbc:phoenix:cdh5:2181";
>   System.out.println("Making a connection to: "+url);
>   con = DriverManager.getConnection(url, null, null); 
>   System.out.println("Connection successful.\n");
>   try {
>  System.out.println("con.createStatement()");
>  stmt = con.createStatement();
>  System.out.println(drptab);
>  stmt.executeUpdate(drptab);
>  }
>   catch (Exception ex)
>   {   
> System.out.println("Exception: " + ex);
>   }
>   System.out.println(crttab);
>   stmt.executeUpdate(crttab);
>   System.out.println("preparing: "+instab);
>   PreparedStatement pstmt = con.prepareStatement(instab);
>   System.out.println("executing: "+instab);
>   pstmt.executeUpdate();
>   System.out.println("committing");
>   con.commit();
>   
>   System.out.println("preparing: "+seltab);
>   pstmt = con.prepareStatement(seltab);
>   System.out.println("executing: "+seltab);
>   pstmt.execute();
>   System.out.println("pstmt.getResultSet()");
>   ResultSet rs = pstmt.getResultSet();
>   if (rs != null)
>   {
> System.out.println("issuing rs.next()");
> if (rs.next())
> { 
>   System.out.println("rs.next() returned true");
> }
> else
> { 
>   System.out.println("rs.next() returned false");
> }
>   }
> }
> catch (Exception ex)
> {   
>   System.out.println("Exception: " + ex);
> }
>   }
> }

[jira] [Assigned] (PHOENIX-2028) Improve performance of write path

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-2028:
-

Assignee: James Taylor

> Improve performance of write path
> -
>
> Key: PHOENIX-2028
> URL: https://issues.apache.org/jira/browse/PHOENIX-2028
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: YARN-TLS
> Fix For: 4.8.0
>
>
> The following improvements can be made to bring the cost of UPSERT VALUES 
> more in line with direct HBase API usage (a client-side sketch follows this 
> list):
> - don't re-compile a prepared UPSERT VALUES statement that is re-executed 
> (see patch on PHOENIX-1711).
> - change MutationState to use a List instead of a Map at the top level. It's 
> ok to have duplicate rows here, as they'll get folded together when we 
> generate the List.
> - change each mutation in the list to be a simple List. We can keep a 
> pointer to the PTable and a List of positions into the PTable columns 
> instead of maintaining a Map for each row. Again, this will get folded 
> together when we generate the List.
> - we don't need to create Mutations for 
> PhoenixRuntime.getUncommittedDataIterator() and it appears we don't need to 
> sort (though we should verify that). Instead, we'll just generate a 
> List for each row in MutationState, allowing duplicate and 
> out-of-order row keys.
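
A client-side sketch of the pattern these bullets optimize: one prepared UPSERT VALUES statement re-executed many times, with mutations buffered in MutationState until commit (table and URL hypothetical):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpsertLoop {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zkhost:2181")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("UPSERT INTO T (K, V) VALUES (?, ?)")) {
                for (int i = 0; i < 10000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "v" + i);
                    ps.executeUpdate(); // re-executed; the first bullet avoids
                                        // re-compiling this statement each time
                }
            }
            conn.commit(); // buffered mutations are sent to HBase here
        }
    }
}
{code}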



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2028) Improve performance of write path

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2028:
--
Fix Version/s: 4.8.0

> Improve performance of write path
> -
>
> Key: PHOENIX-2028
> URL: https://issues.apache.org/jira/browse/PHOENIX-2028
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: YARN-TLS
> Fix For: 4.8.0
>
>
> The following improvements can be made to bring the cost of UPSERT VALUES 
> more in line with direct HBase API usage:
> - don't re-compile a prepared UPSERT VALUES statement that is re-executed 
> (see patch on PHOENIX-1711).
> - change MutationState to use a List instead of a Map at the top level. It's 
> ok to have duplicate rows here, as they'll get folded together when we 
> generate the List.
> - change each mutation in the list to be a simple List. We can keep a 
> pointer to the PTable and a List of positions into the PTable columns 
> instead of maintaining a Map for each row. Again, this will get folded 
> together when we generate the List.
> - we don't need to create Mutations for 
> PhoenixRuntime.getUncommittedDataIterator() and it appears we don't need to 
> sort (though we should verify that). Instead, we'll just generate a 
> List for each row in MutationState, allowing duplicate and 
> out-of-order row keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2006) queryserver.py support for printing its command

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2006:
--
Fix Version/s: 4.8.0

> queryserver.py support for printing its command
> ---
>
> Key: PHOENIX-2006
> URL: https://issues.apache.org/jira/browse/PHOENIX-2006
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2006.00.patch
>
>
> {{zkServer.sh}} accepts the command {{print-cmd}}, for printing out the java 
> command it would launch. This is pretty handy! Let's reproduce it in 
> {{queryserver.py}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2023) Build tgz only on release profile

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2023:
--
Fix Version/s: 4.8.0

> Build tgz only on release profile
> -
>
> Key: PHOENIX-2023
> URL: https://issues.apache.org/jira/browse/PHOENIX-2023
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Gabor Liptak
>  Labels: beginner
> Fix For: 4.8.0
>
>
> We should follow [~enis]'s lead on HBASE-13816 and save everyone some time on 
> the build cycle by moving some (all?) of the assembly bits to a release 
> profile that's only invoked at RC time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2004) Wrong answer: SELECT MAX((CASE WHEN TRUE THEN 10 ELSE -10 END))

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2004:
--
Fix Version/s: 4.8.0

> Wrong answer:  SELECT MAX((CASE WHEN TRUE THEN 10 ELSE -10 END))
> 
>
> Key: PHOENIX-2004
> URL: https://issues.apache.org/jira/browse/PHOENIX-2004
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
> Fix For: 4.8.0
>
>
> SELECT MAX((CASE WHEN TRUE THEN 10 ELSE -10 END)) returns incorrect value 
> (returns 1 instead of 10). Original request is:
> SELECT I, (CASE WHEN TRUE THEN 10 ELSE -10 END),
> MAX((CASE WHEN TRUE THEN 10 ELSE -10 END))
> FROM SERGIO GROUP BY I;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2000) DatabaseMetaData.getTables fails to return table list

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2000:
--
Fix Version/s: 4.8.0

> DatabaseMetaData.getTables fails to return table list
> -
>
> Key: PHOENIX-2000
> URL: https://issues.apache.org/jira/browse/PHOENIX-2000
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.1
> Environment: HDP 2.2 with Phoenix 4.3.1 server
>Reporter: Bilal Nemutlu
>Priority: Critical
> Fix For: 4.8.0
>
>
> DatabaseMetaData md = conn.getMetaData();
> ResultSet rst = md.getTables(null, null, null, null);
> while (rst.next()) {
>     System.out.println(rst.getString(1)); // column 1 is TABLE_CAT
> }
> This throws the following error:
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> SYSTEM.CATALOG,,1432187115973.c970b53a96db5a8c1d958ac920bc45d5.: Could not 
> initialize class org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:200)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1663)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3093)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28861)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.monitoring.PhoenixMetrics$SizeMetric.update(PhoenixMetrics.java:59)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:95)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:102)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:108)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(ScanRegionObserver.java:232)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:219)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:173)
>   ... 9 more
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> SYSTEM.CATALOG,,1432187115973.c970b53a96db5a8c1d958ac920bc45d5.: Could not 
> initialize class org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:200)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1663)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3093)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28861)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.monitoring.PhoenixMetrics$SizeMetric.update(PhoenixMetrics.java:59)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:95

[jira] [Resolved] (PHOENIX-2001) Join creates OOM with Java heap space on Phoenix client

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2001.
---
Resolution: Not A Problem

Phoenix has config parameters that you can use to limit the amount of memory 
used by hash joins.
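For example, the client-side cache built for the join's build side and Phoenix's global memory cap can both be lowered via connection properties. A minimal sketch, assuming the property names from the Phoenix tuning guide; the values shown are illustrative, not recommendations:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class HashJoinLimits {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Cap on the hash cache built for the smaller side of the join (bytes).
        props.setProperty("phoenix.query.maxServerCacheBytes", "104857600"); // ~100 MB
        // Percentage of heap that Phoenix's memory manager may hand out in total.
        props.setProperty("phoenix.query.maxGlobalMemoryPercentage", "15");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            // Run the join here; an over-limit join now fails fast with a
            // memory exception instead of exhausting the client heap.
        }
    }
}
{code}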

> Join creates OOM with Java heap space on Phoenix client
> --
>
> Key: PHOENIX-2001
> URL: https://issues.apache.org/jira/browse/PHOENIX-2001
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.1
>Reporter: Krunal 
>
> Hi
> I have two issues with the Phoenix client:
> 1. Heap memory is not cleaned up after each query finishes, so it keeps
> increasing every time we submit a new query.
> 2. I am trying to do a normal join operation on two tables but am getting an
> exception. Below are the details:
> These are some sample queries I tried:
> 1. select p1.host, count(1) from PERFORMANCE_500 p1, PERFORMANCE_2500 
> p2 where p1.host = p2.host group by p1.host; 
> 2. select p1.host from PERFORMANCE_500 p1, PERFORMANCE_2500 p2 where 
> p1.host = p2.host group by p1.host; 
> 3. select count(1) from PERFORMANCE_500 p1, PERFORMANCE_2500 p2 where 
> p1.host = p2.host group by p1.host; 
> Here is explain plan:
> explain  select count(1) from PERFORMANCE_500 p1, PERFORMANCE_2500 p2 
> where p1.host = p2.host group by p1.host;
> +------------------------------------------------------------------------+
> |                                  PLAN                                   |
> +------------------------------------------------------------------------+
> | CLIENT 9-CHUNK PARALLEL 1-WAY FULL SCAN OVER PERFORMANCE_500            |
> |     SERVER FILTER BY FIRST KEY ONLY                                     |
> |     SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY [HOST]               |
> | CLIENT MERGE SORT                                                       |
> |     PARALLEL INNER-JOIN TABLE 0 (SKIP MERGE)                            |
> |         CLIENT 18-CHUNK PARALLEL 1-WAY FULL SCAN OVER PERFORMANCE_2500  |
> |             SERVER FILTER BY FIRST KEY ONLY                             |
> |     DYNAMIC SERVER FILTER BY HOST IN (P2.HOST)                          |
> +------------------------------------------------------------------------+
> 8 rows selected (0.127 seconds)
> Phoenix client heap size is 16 GB. (I noticed that the above queries are
> dumping data into the local heap; I see millions of instances of
> org.apache.phoenix.expression.LiteralExpression.)
> phoenix version: 4.3.1
> hbase version: 0.98.1
> and my exceptions are:
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:156)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:235)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:226)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:225)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1066)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: java.sql.SQLException: java.util.concurrent.ExecutionException: 
> java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:247)
> at 
> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
> at 
> org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:338)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:135)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: 
> java.lang.OutOfMemoryError: Java heap space
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:239)
> ... 7 more
> Caused by: java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:212)
> at 
> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:182)
> ... 4 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
> May 20, 2015 4:58:01 PM ServerCommunicatorAdmin reqIncoming
> WARNING: The server has decided to close this client connection.
> 15/05/20 16:56:43 WARN client.HTable: Error calling coprocessor service 
> org.apache.phoenix.coprocessor.generated.ServerCachingProtos$Ser

[jira] [Resolved] (PHOENIX-1992) Apache Phoenix website has lots of details, but it is hard to find them all from the site; you need to search Google for every link

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1992.
---
Resolution: Not A Problem

Use the search box on our home page.

> Apache Phoenix website has lots of details, but it is hard to find them all
> from the site; you need to search Google for every link
> -
>
> Key: PHOENIX-1992
> URL: https://issues.apache.org/jira/browse/PHOENIX-1992
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Ni la
>Priority: Minor
>
> There is no "site map" to help navigate the large amount of information the
> site already has. Pages are difficult to find from the project site itself,
> but can be found through a Google search.





[jira] [Resolved] (PHOENIX-1993) NPE on sqlline.py start.

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1993.
---
Resolution: Cannot Reproduce

> NPE on sqlline.py start.
> 
>
> Key: PHOENIX-1993
> URL: https://issues.apache.org/jira/browse/PHOENIX-1993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Serhiy Bilousov
>Priority: Minor
>
> After updating to 4.4, sqlline throws an exception on start. It does not look
> like it affects use afterwards, though.
> {noformat}
> ./sqlline.py *:2181:/hbase
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:***:2181:/hbase none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:**:2181:/hbase
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> 15/05/19 13:41:46 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Connected to: Phoenix (version 4.4)
> Driver: PhoenixEmbeddedDriver (version 4.4)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 346/346 (100%) Done
> Done
> java.lang.NullPointerException
> at java.util.TreeMap.put(TreeMap.java:563)
> at java.util.TreeSet.add(TreeSet.java:255)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:344)
> at java.util.TreeSet.addAll(TreeSet.java:312)
> at sqlline.SqlCompleter.(SqlCompleter.java:81)
> at 
> sqlline.DatabaseConnection.setCompletions(DatabaseConnection.java:84)
> at sqlline.SqlLine.setCompletions(SqlLine.java:1730)
> at sqlline.Commands.connect(Commands.java:1066)
> at sqlline.Commands.connect(Commands.java:996)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:804)
> at sqlline.SqlLine.initArgs(SqlLine.java:588)
> at sqlline.SqlLine.begin(SqlLine.java:656)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> sqlline version 1.1.8
> {noformat}





[jira] [Updated] (PHOENIX-1989) Implement byte-based INSTR instead of serializing into String

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1989:
--
Fix Version/s: 4.8.0

> Implement byte-based INSTR instead of serializing into String
> -
>
> Key: PHOENIX-1989
> URL: https://issues.apache.org/jira/browse/PHOENIX-1989
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>
> The current implementation of INSTR serializes the arguments as Strings. It'd 
> be much more efficient to leave them as bytes and do the in-string search 
> based on bytes.
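A minimal sketch of the byte-level search this would entail; this is not Phoenix's actual implementation, and note that for variable-width encodings a raw byte match needs extra care:

{code:java}
public final class ByteInstr {
    /** Returns the 0-based offset of the first occurrence of pattern in value, or -1. */
    public static int indexOf(byte[] value, byte[] pattern) {
        if (pattern.length == 0) {
            return 0;
        }
        outer:
        for (int i = 0; i <= value.length - pattern.length; i++) {
            for (int j = 0; j < pattern.length; j++) {
                if (value[i + j] != pattern[j]) {
                    continue outer; // mismatch: slide the window forward
                }
            }
            return i; // full pattern matched at offset i
        }
        return -1;
    }
}
{code}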





[jira] [Commented] (PHOENIX-1989) Implement byte-based INSTR instead of serializing into String

2016-01-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113991#comment-15113991
 ] 

James Taylor commented on PHOENIX-1989:
---

I believe this is already fixed.

> Implement byte-based INSTR instead of serializing into String
> -
>
> Key: PHOENIX-1989
> URL: https://issues.apache.org/jira/browse/PHOENIX-1989
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>
> The current implementation of INSTR serializes the arguments as Strings. It'd 
> be much more efficient to leave them as bytes and do the in-string search 
> based on bytes.





[jira] [Updated] (PHOENIX-1983) Document how to turn trace on/off and set sampling rate through SQL query

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1983:
--
Fix Version/s: 4.8.0

> Document how to turn trace on/off and set sampling rate through SQL query
> -
>
> Key: PHOENIX-1983
> URL: https://issues.apache.org/jira/browse/PHOENIX-1983
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
>






[jira] [Resolved] (PHOENIX-1988) Document INSTR built-in function

2016-01-23 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1988.
---
Resolution: Fixed

Already fixed.

> Document INSTR built-in function
> 
>
> Key: PHOENIX-1988
> URL: https://issues.apache.org/jira/browse/PHOENIX-1988
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Naveen Madhire
>
> Last pending task for new built-in function is to add documentation for it so 
> that it appears in our Reference page here: 
> http://phoenix.apache.org/language/functions.html
> That documentation is generated from the following file (which lives in SVN):
> phoenix-docs/src/docsrc/help/phoenix.csv
> See http://phoenix.apache.org/building_website.html for more info.
> Just copy/paste the documentation from another built-in function. Make sure 
> it's classified as a String function (copy/paste SUBSTR function 
> documentation as a template). Note that due to a bug, you'll need to manually 
> remove the generated html files (rm site/publish/language/*.html) before 
> running build.sh in order for them to get regenerated.
> I'd wait to do this until we're closer to a 4.4.1 release, as I'd rather not 
> document this until PHOENIX-1984 is fixed and in a release.




