[jira] [Commented] (PHOENIX-3830) Conditional query with ORDER BY in Phoenix makes the query very slow. Version: 4.10

2017-05-10 Thread Sean (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005814#comment-16005814
 ] 

Sean commented on PHOENIX-3830:
---

I did not see this problem in version 4.9.0. I do not think this is an 
optimization issue.

> Conditional query with ORDER BY in Phoenix makes the query very slow. 
> Version: 4.10
> ---
>
> Key: PHOENIX-3830
> URL: https://issues.apache.org/jira/browse/PHOENIX-3830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Sean
>
> Combining a WHERE condition with ORDER BY in Phoenix makes the query very slow.
> Version: 4.10.0-HBase-1.2
> Dataset: Install the package sample data.
> Sql: SELECT HOST, DOMAIN, FEATURE, DATE, CORE, DB, ACTIVE_VISITOR FROM 
> WEB_STAT T WHERE CORE > 100 ORDER BY CORE



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3811) Do not disable index on write failure by default

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005749#comment-16005749
 ] 

Hadoop QA commented on PHOENIX-3811:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12867455/PHOENIX-3811_v3.patch
  against master branch at commit 37d0a4a038c1f843db2a1d68cfc3b3cfa8c8d537.
  ATTACHMENT ID: 12867455

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+setUpTestDriver(new 
ReadOnlyProps(serverProps.entrySet().iterator()), ReadOnlyProps.EMPTY_PROPS);
+public MutableIndexFailureIT(boolean transactional, boolean localIndex, 
boolean isNamespaceMapped, Boolean disableIndexOnWriteFailure, Boolean 
rebuildIndexOnWriteFailure) {
++ (disableIndexOnWriteFailure == null ? "" : (", " + 
PhoenixIndexFailurePolicy.DISABLE_INDEX_ON_WRITE_FAILURE + "=" + 
disableIndexOnWriteFailure))
++ (rebuildIndexOnWriteFailure == null ? "" : (", " + 
PhoenixIndexFailurePolicy.REBUILD_INDEX_ON_WRITE_FAILURE + "=" + 
rebuildIndexOnWriteFailure));
+this.leaveIndexActiveOnFailure = ! (disableIndexOnWriteFailure == null 
? QueryServicesOptions.DEFAULT_INDEX_FAILURE_DISABLE_INDEX : 
disableIndexOnWriteFailure);
+serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB, 
Boolean.TRUE.toString());
+Map clientProps = 
Collections.singletonMap(QueryServices.TRANSACTIONS_ENABLED, 
Boolean.TRUE.toString());
+@Parameters(name = 
"MutableIndexFailureIT_transactional={0},localIndex={1},isNamespaceMapped={2},disableIndexOnWriteFailure={3},rebuildIndexOnWriteFailure={4}")
 // name is used by failsafe as file name in reports
+"CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + 
indexName + " ON " + fullTableName + " (v1) INCLUDE (v2)");
+"CREATE "  + (!localIndex ? "LOCAL " : "") + " INDEX " + 
secondIndexName + " ON " + fullTableName + " (v2) INCLUDE (v1)");

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/858//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/858//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/858//console

This message is automatically generated.

> Do not disable index on write failure by default
> 
>
> Key: PHOENIX-3811
> URL: https://issues.apache.org/jira/browse/PHOENIX-3811
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3811_v1.patch, PHOENIX-3811_v2.patch, 
> PHOENIX-3811_v3.patch, PHOENIX-3811-wip1.patch, PHOENIX-3811-wip2.patch, 
> PHOENIX-3811-wip3.patch, PHOENIX-3811-wip4.patch, PHOENIX-3811-wip5.patch, 
> PHOENIX-3811-wip7.patch
>
>
> We should provide a way to configure the system so that the server takes no 
> specific action when an index write fails. Since we always throw the write 
> failure back to the client, the client can often deal with failures more 
> easily than the server, since it has the batch of mutations in memory. 
> Often, allowing access to an index that may be one batch behind the 
> data table is better than disabling it, given the negative performance impact 
> that occurs while the index cannot be written to.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3842) Turn off all BloomFilter for Phoenix tables

2017-05-10 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved PHOENIX-3842.
-
Resolution: Fixed

Committed

> Turn off all BloomFilter for Phoenix tables
> ---
>
> Key: PHOENIX-3842
> URL: https://issues.apache.org/jira/browse/PHOENIX-3842
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.11.0
>
> Attachments: 3842-4.x-0.98.txt
>
>
> Noticed in PHOENIX-3797: an exception involving bloom filters (BFs) when there 
> shouldn't have been BFs in the first place.
> It turns out BFs default to ROW in HBase.
> Phoenix should turn them off, since they are not used for scans.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3842) Turn off all BloomFilter for Phoenix tables

2017-05-10 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005729#comment-16005729
 ] 

Andrew Purtell commented on PHOENIX-3842:
-

Committing

> Turn off all BloomFilter for Phoenix tables
> ---
>
> Key: PHOENIX-3842
> URL: https://issues.apache.org/jira/browse/PHOENIX-3842
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.11.0
>
> Attachments: 3842-4.x-0.98.txt
>
>
> Noticed in PHOENIX-3797: an exception involving bloom filters (BFs) when there 
> shouldn't have been BFs in the first place.
> It turns out BFs default to ROW in HBase.
> Phoenix should turn them off, since they are not used for scans.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3842) Turn off all BloomFilter for Phoenix tables

2017-05-10 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005720#comment-16005720
 ] 

Andrew Purtell commented on PHOENIX-3842:
-

lgtm

> Turn off all BloomFilter for Phoenix tables
> ---
>
> Key: PHOENIX-3842
> URL: https://issues.apache.org/jira/browse/PHOENIX-3842
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.11.0
>
> Attachments: 3842-4.x-0.98.txt
>
>
> Noticed in PHOENIX-3797: an exception involving bloom filters (BFs) when there 
> shouldn't have been BFs in the first place.
> It turns out BFs default to ROW in HBase.
> Phoenix should turn them off, since they are not used for scans.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


HBase 2.0 future integration.

2017-05-10 Thread Sergey Soldatov
Hi,

Well, HBase 2.0 will be released in the near future, and we need to think
about adapting Phoenix to it. I tried to do that and already feel
uncomfortable with the number of changes related to existing and potential
problems. Here is the list of problems I'm aware of at the moment:
1. Deprecated API. No surprise that most of the deprecated 0.9x API was
removed, such as:
   a. The 'add' method on Put. We use it all across the code and tests.
   b. HBaseAdmin replaced with Admin.
   c. HTableInterface removed.
   d. Public APIs should use Cell instead of KeyValue.
   e. Delete.deleteColumn => Delete.addColumn.
   f. Some other small changes that require small modifications
(for example, .batch now requires an array for the result instead of
returning it).
2. Due to the shading changes, RPC callbacks need to use a new API
from CoprocessorRpcUtils.
3. No more "new HTable(...)". To get a Table we have to create an unmanaged
connection and use .getTable().
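To make the migration concrete, here is a hedged sketch of what the call-site changes above look like against the HBase 2.0 client API (a compile-level sketch only: it needs a running cluster plus the hbase-client dependency to actually execute, and the table, family, and qualifier names are made up):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the HBase 1.x -> 2.0 call-site changes listed above.
public class MigrationSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // (3) No more "new HTable(conf, name)": create an unmanaged
        // Connection and get the Table from it.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("T"));
             // (b) HBaseAdmin is replaced by the Admin interface.
             Admin admin = conn.getAdmin()) {

            // (a) Put.add(family, qualifier, value) is gone; use addColumn.
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
            table.put(put);

            // (e) Delete.deleteColumn is renamed to addColumn.
            Delete del = new Delete(Bytes.toBytes("row1"));
            del.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
            table.delete(del);

            // (f) batch() now fills a caller-supplied results array
            // instead of returning the results.
            Object[] results = new Object[1];
            table.batch(java.util.Collections.singletonList(put), results);
        }
    }
}
```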

As for potential problems:
1. The new AM makes me worry in terms of support for local indexes during
split/merge.
2. Tephra uses deprecated APIs as well, so it requires similar changes.


So, here are my ideas:
1. Start with something we can change right now (API changes that
would work with all supported versions of HBase) to minimize the work and
the amount of changes when 2.0 is released.
2. Decide what we are going to do with 0.98 support: whether we plan to
EOL it, or, as an alternative, create some kind of driver layer for HBase so
that we can keep all version-specific changes in a single place.


Thanks,
Sergey


[jira] [Updated] (PHOENIX-3811) Do not disable index on write failure by default

2017-05-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3811:
--
Attachment: PHOENIX-3811_v3.patch

Removed ReadOnlyIndexFailureIT as it is a duplicate of MutableIndexFailureIT. 
Can't reproduce the other test failures - let's see how this patch does with 
auto build.

> Do not disable index on write failure by default
> 
>
> Key: PHOENIX-3811
> URL: https://issues.apache.org/jira/browse/PHOENIX-3811
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3811_v1.patch, PHOENIX-3811_v2.patch, 
> PHOENIX-3811_v3.patch, PHOENIX-3811-wip1.patch, PHOENIX-3811-wip2.patch, 
> PHOENIX-3811-wip3.patch, PHOENIX-3811-wip4.patch, PHOENIX-3811-wip5.patch, 
> PHOENIX-3811-wip7.patch
>
>
> We should provide a way to configure the system so that the server takes no 
> specific action when an index write fails. Since we always throw the write 
> failure back to the client, the client can often deal with failures more 
> easily than the server, since it has the batch of mutations in memory. 
> Often, allowing access to an index that may be one batch behind the 
> data table is better than disabling it, given the negative performance impact 
> that occurs while the index cannot be written to.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3841) Phoenix View creation failure with Primary table not found error when we use update_cache_frequency for primary table

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005485#comment-16005485
 ] 

Hadoop QA commented on PHOENIX-3841:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12867398/PHOENIX-3841.v4.patch
  against master branch at commit 37d0a4a038c1f843db2a1d68cfc3b3cfa8c8d537.
  ATTACHMENT ID: 12867398

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+"CREATE TABLE "+TABLE_NAME+" (k VARCHAR PRIMARY KEY, v1 VARCHAR, 
v2 VARCHAR) UPDATE_CACHE_FREQUENCY=100");
+  conn1.createStatement().execute("upsert into "+TABLE_NAME+" values 
('row1', 'value1', 'key1')");
+"CREATE VIEW "+VIEW_NAME+" (v43 VARCHAR) AS SELECT * FROM 
"+TABLE_NAME+" WHERE v1 = 'value1'");
+// We need to always get the latest meta data for the parent table 
of a create view call to ensure that
+// that we're copying the current table meta data as of when the 
view is created. Once we no longer
+// copy the parent meta data, but store only the local diffs 
(PHOENIX-3534), we will no longer need
+SingleTableColumnResolver visitor = new 
SingleTableColumnResolver(connection, tableNode, true, true);
+  this(connection, tableNode, updateCacheImmediately, 0, new 
HashMap(1), alwaysHitServer);
+Map udfParseNodes, boolean 
alwaysHitServer) throws SQLException {
+TableRef tableRef = 
createTableRef(tableNode.getName().getSchemaName(), tableNode, 
updateCacheImmediately, alwaysHitServer);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TransactionalViewIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/856//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/856//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/856//console

This message is automatically generated.

> Phoenix View creation failure with Primary table not found error when we use 
> update_cache_frequency for primary table
> -
>
> Key: PHOENIX-3841
> URL: https://issues.apache.org/jira/browse/PHOENIX-3841
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 4.11
>
> Attachments: PHOENIX-3841.patch, PHOENIX-3841.v2.patch, 
> PHOENIX-3841.v3.patch, PHOENIX-3841.v4.patch
>
>
> The CREATE VIEW command fails with a table-not-found error for the actual table, 
> and the next retry fails with a VIEW-already-exists error. It keeps alternating 
> like that (first table not found, then view already exists).
> If I create the table without UPDATE_CACHE_FREQUENCY, it works fine.
> Create table command:
> create table UpdateCacheViewTestB (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 
> VARCHAR) UPDATE_CACHE_FREQUENCY=10;
> Create View command:
> CREATE VIEW my_view (v43 VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE 
> v1 = 'value1';
> sqlline Console output:
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> select * from 
> UPDATECACHEVIEWTESTB;
> --
> K V1  V2
> --
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1012 

[jira] [Commented] (PHOENIX-3811) Do not disable index on write failure by default

2017-05-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005482#comment-16005482
 ] 

James Taylor commented on PHOENIX-3811:
---

Thanks, [~tdsilva] - I'll fix the test failures and upload a new patch.

> Do not disable index on write failure by default
> 
>
> Key: PHOENIX-3811
> URL: https://issues.apache.org/jira/browse/PHOENIX-3811
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3811_v1.patch, PHOENIX-3811_v2.patch, 
> PHOENIX-3811-wip1.patch, PHOENIX-3811-wip2.patch, PHOENIX-3811-wip3.patch, 
> PHOENIX-3811-wip4.patch, PHOENIX-3811-wip5.patch, PHOENIX-3811-wip7.patch
>
>
> We should provide a way to configure the system so that the server takes no 
> specific action when an index write fails. Since we always throw the write 
> failure back to the client, the client can often deal with failures more 
> easily than the server, since it has the batch of mutations in memory. 
> Often, allowing access to an index that may be one batch behind the 
> data table is better than disabling it, given the negative performance impact 
> that occurs while the index cannot be written to.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3811) Do not disable index on write failure by default

2017-05-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005218#comment-16005218
 ] 

James Taylor edited comment on PHOENIX-3811 at 5/10/17 7:19 PM:


High level summary of changes: turns off automatic index rebuilding by default, 
leaves indexes active upon a write failure, and provides a means for users to 
replay a commit after a write failure to ensure the index is consistent with the 
data table. Would you have any spare cycles to review, [~tdsilva]? 

Here's some more detail on the changes:
* Turns off the background partial index rebuild/catchup task by default for a 
table. The reason is that users will typically have some kind of retry strategy 
themselves (for example, a message queue that retries). They need this as when 
a commit exception occurs, some of the data rows may have been written while 
others will not have been (regardless of what state the index is in wrt the 
data table). Whatever retry mechanism is in use, those retries will also get 
the index back in sync (see below for a new mechanism for mutable tables).
* Provides a means for the client to retry a commit at the timestamp at which 
it was originally submitted. This is important for mutable data as otherwise 
the retried commits may overwrite successful commits that occurred later. This 
is accomplished by a) including the server timestamp at which the data rows 
were (or would have been) committed in CommitException and b) Adds a new 
connection property, {{PhoenixRuntime.REPLAY_AT_ATTRIB}}, which specifies a 
timestamp at which the commit will occur and tells the system to ignore later 
data updates (to ensure your index remains in sync with your data table).
* Provides an option (the default) to keep an index active even after a write 
failure occurs. Many use cases are essentially down without the secondary index 
in place and would rather the index be behind by a few rows wrt the data table 
while the retries are occurring. This option is configurable globally with the 
{{QueryServices.INDEX_FAILURE_DISABLE_INDEX}} config property and on a table by 
table basis through the 
{{PhoenixIndexFailurePolicy.DISABLE_INDEX_ON_WRITE_FAILURE}} table descriptor 
property.
* Provides an option to turn on the partial rebuild index task on a 
table-by-table basis (false by default). This option is orthogonal now to 
whether an index remains active or is disabled (i.e. the index can remain 
active *and* be partially rebuilt/caught up in the background). Note that if 
the existing global 
{{PhoenixIndexFailurePolicy.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB}} config 
property is false, then the background thread won't run so the table property 
won't matter. By default, the global property is true while the table-by-table 
property is false to allow the user to turn the automatic rebuild on for a 
particular table.
* Lowers the default frequency at which we look for indexes which need to be 
partially rebuilt from every 10 seconds to once per minute.
* Fixes MutableIndexFailureIT test failures and adds more for the above new 
options.

FYI, [~lhofhansl], [~apurtell], [~mvanwely].
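The replay-at-original-timestamp point above is easy to get wrong, so here is a self-contained toy model (plain Java; this is not Phoenix or HBase code, just a stand-in for last-write-wins cell semantics, and all names in it are illustrative) showing why a retried commit must carry the original server timestamp rather than the current time:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of last-write-wins versioned cells: the value with the highest
// timestamp is the one that is visible, as in HBase. NOT Phoenix code.
class ReplaySketch {
    private final Map<String, Long> ts = new HashMap<>();
    private final Map<String, String> val = new HashMap<>();

    // A write becomes visible only if its timestamp is >= the stored one.
    void write(String col, long timestamp, String value) {
        Long cur = ts.get(col);
        if (cur == null || timestamp >= cur) {
            ts.put(col, timestamp);
            val.put(col, value);
        }
    }

    String read(String col) {
        return val.get(col);
    }

    public static void main(String[] args) {
        ReplaySketch cell = new ReplaySketch();
        cell.write("v1", 100L, "from-failed-batch"); // original (partially failed) commit
        cell.write("v1", 200L, "later-success");     // a later commit that succeeded
        // Replaying the failed batch at its ORIGINAL server timestamp (100),
        // as described above, cannot clobber the later successful value.
        cell.write("v1", 100L, "from-failed-batch");
        System.out.println(cell.read("v1"));  // prints later-success
    }
}
```

Had the replay instead been issued at a fresh, later timestamp, it would have overwritten the later successful commit, which is exactly the hazard the {{PhoenixRuntime.REPLAY_AT_ATTRIB}} mechanism described above is meant to avoid.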


was (Author: jamestaylor):
High level summary of changes: turns off automatic index rebuilding by default, 
leaves indexes active upon a write failure, and provides a means for users to 
replay a commit after a write failure to ensure the index is consistent with the 
data table. Would you have any spare cycles to review, [~tdsilva]? 

Here's some more detail on the changes:
* Turns off the background partial index rebuild/catchup task by default for a 
table. The reason is that users will typically have some kind of retry strategy 
themselves (for example, a message queue that retries). They need this as when 
a commit exception occurs, some of the data rows may have been written while 
others will not have been (regardless of what state the index is in wrt the 
data table). Whatever retry mechanism is in use, those retries will also get 
the index back in sync (see below for a new mechanism for mutable tables).
* Provides a means for the client to retry a commit at the timestamp at which 
it was originally submitted. This is important for mutable data as otherwise 
the retried commits may overwrite successful commits that occurred later. This 
is accomplished by a) including the server timestamp at which the data rows 
were (or would have been) committed in CommitException and b) Adds a new 
connection property, {{PhoenixRuntime.REPLAY_AT_ATTRIB}}, which specifies a 
timestamp at which the commit will occur and tells the system to ignore later 
data updates (to ensure your index remains in sync with your data table).
* Provides an option (the default) to keep an index active even after a write 
failure occurs. Many use cases are essentially down without the secondary index 
in place and would rather the index be behind by a few rows 

[jira] [Comment Edited] (PHOENIX-3811) Do not disable index on write failure by default

2017-05-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005218#comment-16005218
 ] 

James Taylor edited comment on PHOENIX-3811 at 5/10/17 7:00 PM:


High level summary of changes: turns off automatic index rebuilding by default, 
leaves indexes active upon a write failure, and provides a means for users to 
replay a commit after a write failure to ensure the index is consistent with the 
data table. Would you have any spare cycles to review, [~tdsilva]? 

Here's some more detail on the changes:
* Turns off the background partial index rebuild/catchup task by default for a 
table. The reason is that users will typically have some kind of retry strategy 
themselves (for example, a message queue that retries). They need this as when 
a commit exception occurs, some of the data rows may have been written while 
others will not have been (regardless of what state the index is in wrt the 
data table). Whatever retry mechanism is in use, those retries will also get 
the index back in sync (see below for a new mechanism for mutable tables).
* Provides a means for the client to retry a commit at the timestamp at which 
it was originally submitted. This is important for mutable data as otherwise 
the retried commits may overwrite successful commits that occurred later. This 
is accomplished by a) including the server timestamp at which the data rows 
were (or would have been) committed in CommitException and b) Adds a new 
connection property, {{PhoenixRuntime.REPLAY_AT_ATTRIB}}, which specifies a 
timestamp at which the commit will occur and tells the system to ignore later 
data updates (to ensure your index remains in sync with your data table).
* Provides an option (the default) to keep an index active even after a write 
failure occurs. Many use cases are essentially down without the secondary index 
in place and would rather the index be behind by a few rows wrt the data table 
while the retries are occurring. This option is configurable globally with the 
{{QueryServices.INDEX_FAILURE_DISABLE_INDEX}} config property and on a table by 
table basis through the 
{{PhoenixIndexFailurePolicy.DISABLE_INDEX_ON_WRITE_FAILURE}} table descriptor 
property.
* Provides an option to turn on the partial rebuild index task on a 
table-by-table basis (false by default). This option is orthogonal now to 
whether an index remains active or is disabled. Note that if the existing 
global {{PhoenixIndexFailurePolicy.INDEX_FAILURE_HANDLING_REBUILD_ATTRIB}} 
config property is false, then the background thread won't run so the table 
property won't matter. By default, the global property is true while the 
table-by-table property is false to allow the user to turn the automatic 
rebuild on for a particular table.
* Lowers the default frequency at which we look for indexes which need to be 
partially rebuilt from every 10 seconds to once per minute.
* Fixes MutableIndexFailureIT test failures and adds more for the above new 
options.

FYI, [~lhofhansl], [~apurtell], [~mvanwely].


was (Author: jamestaylor):
High level summary of changes: turns off automatic index rebuilding by default, 
leaves indexes active upon a write failure, and provides a means for users to 
replay a commit after a write failure to ensure the index is consistent with the 
data table. Would you have any spare cycles to review, [~tdsilva]? 

Here's some more detail on the changes:
- Turns off the background partial index rebuild/catchup task by default for a 
table. The reason is that users will typically have some kind of retry strategy 
themselves (for example, a message queue that retries). They need this as when 
a commit exception occurs, some of the data rows may have been written while 
others will not have been (regardless of what state the index is in wrt the 
data table). Whatever retry mechanism is in use, those retries will also get 
the index back in sync (see below for a new mechanism for mutable tables).
- Provides a means for the client to retry a commit at the timestamp at which 
it was originally submitted. This is important for mutable data as otherwise 
the retried commits may overwrite successful commits that occurred later. This 
is accomplished by a) including the server timestamp at which the data rows 
were (or would have been) committed in CommitException and b) Adds a new 
connection property, {{PhoenixRuntime.REPLAY_AT_ATTRIB}}, which specifies a 
timestamp at which the commit will occur and tells the system to ignore later 
data updates (to ensure your index remains in sync with your data table).
- Provides an option (the default) to keep an index active even after a write 
failure occurs. Many use cases are essentially down without the secondary index 
in place and would rather the index be behind by a few rows wrt the data table 
while the retries are occurring. This option is configurable globally 

[jira] [Commented] (PHOENIX-3841) Phoenix View creation failure with Primary table not found error when we use update_cache_frequency for primary table

2017-05-10 Thread Maddineni Sukumar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005170#comment-16005170
 ] 

Maddineni Sukumar commented on PHOENIX-3841:


Thanks, [~singamteja]. Attached v4 with the variable name change. 

> Phoenix View creation failure with Primary table not found error when we use 
> update_cache_frequency for primary table
> -
>
> Key: PHOENIX-3841
> URL: https://issues.apache.org/jira/browse/PHOENIX-3841
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 4.11
>
> Attachments: PHOENIX-3841.patch, PHOENIX-3841.v2.patch, 
> PHOENIX-3841.v3.patch, PHOENIX-3841.v4.patch
>
>
> The CREATE VIEW command fails with a table-not-found error for the actual table, 
> and the next retry fails with a VIEW-already-exists error. It keeps alternating 
> like that (first table not found, then view already exists).
> If I create the table without UPDATE_CACHE_FREQUENCY, it works fine.
> Create table command:
> create table UpdateCacheViewTestB (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 
> VARCHAR) UPDATE_CACHE_FREQUENCY=10;
> Create View command:
> CREATE VIEW my_view (v43 VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE 
> v1 = 'value1';
> sqlline Console output:
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> select * from 
> UPDATECACHEVIEWTESTB;
> --
> K V1  V2
> --
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB 
> (state=42M03,code=1012)
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1013 (42M04): Table already exists. tableName=MY_VIEW 
> (state=42M04,code=1013)
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB 
> (state=42M03,code=1012)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3841) Phoenix View creation failure with Primary table not found error when we use update_cache_frequency for primary table

2017-05-10 Thread Maddineni Sukumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maddineni Sukumar updated PHOENIX-3841:
---
Attachment: PHOENIX-3841.v4.patch

> Phoenix View creation failure with Primary table not found error when we use 
> update_cache_frequency for primary table
> -
>
> Key: PHOENIX-3841
> URL: https://issues.apache.org/jira/browse/PHOENIX-3841
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 4.11
>
> Attachments: PHOENIX-3841.patch, PHOENIX-3841.v2.patch, 
> PHOENIX-3841.v3.patch, PHOENIX-3841.v4.patch
>
>
> The CREATE VIEW command fails with a table-not-found error for the actual table, 
> and the next retry fails with a VIEW-already-exists error. It keeps alternating 
> like that (first table not found, then view already exists).
> If I create the table without UPDATE_CACHE_FREQUENCY, it works fine.
> Create table command:
> create table UpdateCacheViewTestB (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 
> VARCHAR) UPDATE_CACHE_FREQUENCY=10;
> Create View command:
> CREATE VIEW my_view (v43 VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE 
> v1 = 'value1';
> sqlline Console output:
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> select * from 
> UPDATECACHEVIEWTESTB;
> --
> K V1  V2
> --
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB 
> (state=42M03,code=1012)
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1013 (42M04): Table already exists. tableName=MY_VIEW 
> (state=42M04,code=1013)
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB 
> (state=42M03,code=1012)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3734) Refactor Phoenix to use TAL instead of direct calls to Tephra

2017-05-10 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-3734.
-
Resolution: Fixed

+1, I committed this patch to the omid branch.

> Refactor Phoenix to use TAL instead of direct calls to Tephra
> -
>
> Key: PHOENIX-3734
> URL: https://issues.apache.org/jira/browse/PHOENIX-3734
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>
> Refactor Phoenix to use the new transaction abstraction layer instead of 
> direct calls to Tephra. Once this task is committed, Phoenix will continue
> working with Tephra but will have the option of quickly integrating new
> transaction-processing engines.





[jira] [Created] (PHOENIX-3845) Data table and secondary index out of sync on partial data table write failure

2017-05-10 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3845:
-

 Summary: Data table and secondary index out of sync on partial 
data table write failure
 Key: PHOENIX-3845
 URL: https://issues.apache.org/jira/browse/PHOENIX-3845
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


The data table and secondary index are out of sync if the write to the data 
table is only partially successful. We should attempt to update the indexes 
with the rows that were successfully written before throwing the original 
exception.
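The proposed behavior can be sketched in Python (hypothetical stand-in APIs, not Phoenix's actual write path): write as many data rows as possible, sync the index for the rows that did succeed, then rethrow the original failure.

```python
def write_with_index_sync(data_table, index_table, rows):
    """Keep a secondary index in sync with a partially successful write.

    Assumes data_table.write(row) raises on failure and index_table.put(row)
    stores the derived index entry; both are hypothetical stand-ins.
    """
    written = []
    error = None
    for row in rows:
        try:
            data_table.write(row)
            written.append(row)
        except Exception as exc:      # partial failure: stop writing data rows
            error = exc
            break
    # Index whatever actually landed, so data table and index stay in sync.
    for row in written:
        index_table.put(row)
    if error is not None:
        raise error                   # surface the original data-table failure
    return written
```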





[jira] [Commented] (PHOENIX-3734) Refactor Phoenix to use TAL instead of direct calls to Tephra

2017-05-10 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004587#comment-16004587
 ] 

Ohad Shacham commented on PHOENIX-3734:
---

Hi [~giacomotay...@gmail.com] and [~tdsilva],

This patch is diffed against master and therefore contains the 
implementation of [PHOENIX-3656] and [PHOENIX-3671].
I merged all the commits that were made to master since I forked.

In my opinion, it is better to commit this patch to the mainline branch and 
continue working on master.
This pull request maintains the same semantics as before and therefore 
preserves backward compatibility. I only abstracted the transaction-processing 
part and implemented what is needed for Tephra. Everything should work as before.
I ran mvn verify and saw a few failures that also occur when running mvn 
verify on the master branch, so they are not related to this pull request's 
changes.

I fixed the naming convention to camelCase.

The following pull requests will be Omid-related and can also be committed to 
the master branch, since the Omid option will be disabled until it is complete.
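The idea behind the abstraction layer, with callers coding against one interface while each engine plugs in behind it, can be sketched as follows (a hypothetical Python sketch; the real TAL is a Java interface inside Phoenix):

```python
class TransactionContext:
    """What the TAL buys: Phoenix-side code calls only this interface,
    and each transaction engine supplies an implementation behind it.
    Hypothetical sketch, not the actual Java TAL."""

    def begin(self):
        raise NotImplementedError

    def commit(self):
        raise NotImplementedError

    def abort(self):
        raise NotImplementedError


class TephraContext(TransactionContext):
    """Stand-in for the Tephra-backed implementation."""

    def __init__(self):
        self.state = "new"

    def begin(self):
        self.state = "in-progress"

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


# Provider registry: an Omid-backed context can be registered later
# without touching any calling code, which is the point of the refactoring.
PROVIDERS = {"TEPHRA": TephraContext}


def open_transaction(provider="TEPHRA"):
    ctx = PROVIDERS[provider]()
    ctx.begin()
    return ctx
```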


> Refactor Phoenix to use TAL instead of direct calls to Tephra
> -
>
> Key: PHOENIX-3734
> URL: https://issues.apache.org/jira/browse/PHOENIX-3734
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>
> Refactor Phoenix to use the new transaction abstraction layer instead of 
> direct calls to Tephra. Once this task is committed, Phoenix will continue
> working with Tephra but will have the option of quickly integrating new
> transaction-processing engines.





[jira] [Resolved] (PHOENIX-3712) Column alias can't be resolved in GROUP BY clause

2017-05-10 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-3712.
--
Resolution: Fixed

> Column alias can't be resolved in GROUP BY clause
> -
>
> Key: PHOENIX-3712
> URL: https://issues.apache.org/jira/browse/PHOENIX-3712
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3712.patch
>
>
> There are quite a few test cases written like:
> {{select v1 - 1 as v, sum(v2) from t group by v}} in the Phoenix test suite,
> and Calcite does not allow aliases defined in the SELECT list to be referenced
> by the GROUP BY clause (it is fine for ORDER BY, though). Shall we change
> those test cases? FYI, [~julianhyde], [~jamestaylor], [~rajeshbabu], [~kliew]





[jira] [Commented] (PHOENIX-3843) Improve logging for UNION ALL errors

2017-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004259#comment-16004259
 ] 

Hadoop QA commented on PHOENIX-3843:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12867265/PHOENIX-3843.patch
  against master branch at commit 37d0a4a038c1f843db2a1d68cfc3b3cfa8c8d537.
  ATTACHMENT ID: 12867265

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
47 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
.SELECT_COLUMN_NUM_IN_UNIONALL_DIFFS).setMessage("1st query has " + columnCount 
+ " columns whereas 2nd " +
++ targetTypes.get(i).getType().getSqlTypeName() + " in 1st 
query where as it is "
+String create = "CREATE TABLE s.t1 (k integer not null primary 
key, f1.v1 varchar, f1.v2 varchar, " +
+create = "CREATE TABLE s.t2 (k integer not null primary key, f1.v1 
varchar, f1.v2 varchar, f2.v3 varchar)";
+assertEquals(e.getMessage(), "ERROR 525 (42902): SELECT column 
number differs in a Union All query " +
+String create = "CREATE TABLE s.t1 (k integer not null primary 
key, f1.v1 varchar, f1.v2 varchar, " +
+create = "CREATE TABLE s.t2 (k integer not null primary key, f1.v1 
varchar, f1.v2 integer, " +
+assertEquals(e.getMessage(), "ERROR 526 (42903): SELECT column 
types differ in a Union All query " +
+"is not allowed. Column # 2 is VARCHAR in 1st query where 
as it is INTEGER in 2nd query");

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/855//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/855//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/855//console

This message is automatically generated.

> Improve logging for UNION ALL errors
> 
>
> Key: PHOENIX-3843
> URL: https://issues.apache.org/jira/browse/PHOENIX-3843
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Minor
> Fix For: 4.11
>
> Attachments: PHOENIX-3843.patch
>
>
> At the moment, if there are hundreds of columns in a UNION ALL query, it is 
> quite hard to understand why the query fails. At a minimum, we could report 
> how the column counts differ and which column has an incompatible data type.
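A check of this shape can be sketched in Python (a hypothetical helper, not the patch itself; the real check lives in Phoenix's query compiler, and the messages below only mirror the wording visible in the patch excerpt above):

```python
def union_all_mismatch(first_cols, second_cols):
    """Explain why two SELECT lists are incompatible for UNION ALL.

    Each argument is a list of (name, sql_type) pairs. Returns None when
    the lists are compatible; otherwise returns a message naming either
    the differing column counts or the first column whose types differ.
    """
    if len(first_cols) != len(second_cols):
        return ("SELECT column number differs in a Union All query: "
                "1st query has %d columns whereas 2nd query has %d"
                % (len(first_cols), len(second_cols)))
    for i, ((_, t1), (_, t2)) in enumerate(zip(first_cols, second_cols), 1):
        if t1 != t2:
            return ("SELECT column types differ in a Union All query: "
                    "Column # %d is %s in 1st query whereas it is %s "
                    "in 2nd query" % (i, t1, t2))
    return None
```

Pinpointing the first offending column number and the two type names is what makes a hundreds-of-columns failure diagnosable at a glance.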





[jira] [Commented] (PHOENIX-3831) Support stored procedure

2017-05-10 Thread Anishek Kamal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004231#comment-16004231
 ] 

Anishek Kamal commented on PHOENIX-3831:


Waiting for your response [~jamestaylor].

> Support stored procedure
> 
>
> Key: PHOENIX-3831
> URL: https://issues.apache.org/jira/browse/PHOENIX-3831
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
> Environment: centos 7, Hbase 1.2 , hadoop 6 node cluster
>Reporter: Anishek Kamal
>  Labels: features
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There are mainly two major problems:
> 1) How to create a stored procedure in Phoenix
> 2) How to create a parameterized procedure
> I am using Apache Phoenix to connect to HBase so that I can work on HBase 
> using SQL commands.
> But I am stuck at this point, as I need stored procedures to work and also 
> need to pass input parameters.





[jira] [Updated] (PHOENIX-3843) Improve logging for UNION ALL errors

2017-05-10 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-3843:
-
Attachment: PHOENIX-3843.patch

A simple patch plus a couple of tests. Suggestions for better report messages 
are welcome.

> Improve logging for UNION ALL errors
> 
>
> Key: PHOENIX-3843
> URL: https://issues.apache.org/jira/browse/PHOENIX-3843
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Minor
> Fix For: 4.11
>
> Attachments: PHOENIX-3843.patch
>
>
> At the moment, if there are hundreds of columns in a UNION ALL query, it is 
> quite hard to understand why the query fails. At a minimum, we could report 
> how the column counts differ and which column has an incompatible data type.





[jira] [Commented] (PHOENIX-3841) Phoenix View creation failure with Primary table not found error when we use update_cache_frequency for primary table

2017-05-10 Thread Loknath Priyatham Teja Singamsetty (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004196#comment-16004196
 ] 

Loknath Priyatham Teja Singamsetty  commented on PHOENIX-3841:
--

Minor typo in the patch, [~sukuna...@gmail.com]: in a couple of places, 
correct alwaysHitSerer -> alwaysHisServer

> Phoenix View creation failure with Primary table not found error when we use 
> update_cache_frequency for primary table
> -
>
> Key: PHOENIX-3841
> URL: https://issues.apache.org/jira/browse/PHOENIX-3841
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 4.11
>
> Attachments: PHOENIX-3841.patch, PHOENIX-3841.v2.patch, 
> PHOENIX-3841.v3.patch
>
>
> The CREATE VIEW command fails with a "table not found" error for the actual
> table, and the next retry fails with a "VIEW already exists" error. It keeps
> alternating like that (first "table not found", then "view already exists").
> If I create the table without UPDATE_CACHE_FREQUENCY, it works fine.
> Create table command:
> create table UpdateCacheViewTestB (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 
> VARCHAR) UPDATE_CACHE_FREQUENCY=10;
> Create View command:
> CREATE VIEW my_view (v43 VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE 
> v1 = 'value1';
> sqlline Console output:
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> select * from 
> UPDATECACHEVIEWTESTB;
> --
> K V1  V2
> --
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB 
> (state=42M03,code=1012)
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1013 (42M04): Table already exists. tableName=MY_VIEW 
> (state=42M04,code=1013)
> 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 
> VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1';
> Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB 
> (state=42M03,code=1012)





[jira] [Assigned] (PHOENIX-3712) Column alias can't be resolved in GROUP BY clause

2017-05-10 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reassigned PHOENIX-3712:


Assignee: Rajeshbabu Chintaguntla  (was: Maryann Xue)

> Column alias can't be resolved in GROUP BY clause
> -
>
> Key: PHOENIX-3712
> URL: https://issues.apache.org/jira/browse/PHOENIX-3712
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3712.patch
>
>
> There are quite a few test cases written like:
> {{select v1 - 1 as v, sum(v2) from t group by v}} in the Phoenix test suite,
> and Calcite does not allow aliases defined in the SELECT list to be referenced
> by the GROUP BY clause (it is fine for ORDER BY, though). Shall we change
> those test cases? FYI, [~julianhyde], [~jamestaylor], [~rajeshbabu], [~kliew]





[jira] [Updated] (PHOENIX-3712) Column alias can't be resolved in GROUP BY clause

2017-05-10 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3712:
-
Attachment: PHOENIX-3712.patch

Now that CALCITE-1306 is committed in Calcite, I am making the related changes 
in Phoenix to support aliases. Going to commit it.

> Column alias can't be resolved in GROUP BY clause
> -
>
> Key: PHOENIX-3712
> URL: https://issues.apache.org/jira/browse/PHOENIX-3712
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>  Labels: calcite
> Attachments: PHOENIX-3712.patch
>
>
> There are quite a few test cases written like:
> {{select v1 - 1 as v, sum(v2) from t group by v}} in the Phoenix test suite,
> and Calcite does not allow aliases defined in the SELECT list to be referenced
> by the GROUP BY clause (it is fine for ORDER BY, though). Shall we change
> those test cases? FYI, [~julianhyde], [~jamestaylor], [~rajeshbabu], [~kliew]


