[jira] [Commented] (PHOENIX-4678) IndexScrutinyTool generates malformed query due to incorrect table name(s)

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462675#comment-16462675
 ] 

James Taylor commented on PHOENIX-4678:
---

Ping [~elserj] & [~sergey.soldatov]?

> IndexScrutinyTool generates malformed query due to incorrect table name(s)
> --
>
> Key: PHOENIX-4678
> URL: https://issues.apache.org/jira/browse/PHOENIX-4678
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4678.diff
>
>
> {noformat}
> HADOOP_CLASSPATH="/usr/local/lib/hbase/conf:$(hbase mapredcp)" hadoop jar 
> /usr/local/lib/phoenix-5.0.0-SNAPSHOT/phoenix-5.0.0-SNAPSHOT-client.jar 
> org.apache.phoenix.mapreduce.index.IndexScrutinyTool -dt J -it 
> INDEX1{noformat}
> This ends up running queries like {{SELECT ... FROM .J}} and {{SELECT ... 
> FROM .INDEX1}}.
> This is because SchemaUtil.getQualifiedTableName handles a null schema name 
> properly but not an empty one.
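A minimal sketch of the missing guard (illustrative only, not the attached patch; QueryConstants.NAME_SEPARATOR is assumed to be Phoenix's "." separator, and the real method also handles quoting):
{code}
// Sketch: treat an empty schema name like a null one, otherwise a leading
// "." is emitted and queries like SELECT ... FROM .J are generated.
public static String getQualifiedTableName(String schemaName, String tableName) {
    if (schemaName == null || schemaName.isEmpty()) {
        return tableName;
    }
    return schemaName + QueryConstants.NAME_SEPARATOR + tableName;
}
{code}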



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4671) Fix minor size accounting bug for MutationSize

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4671:
-

Assignee: Lars Hofhansl

> Fix minor size accounting bug for MutationSize
> --
>
> Key: PHOENIX-4671
> URL: https://issues.apache.org/jira/browse/PHOENIX-4671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4671-v2.txt, 4671.txt
>
>
> Just ran into a bug where UPSERT INTO table ... SELECT ... FROM table would 
> fail due to "Error: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes (state=LIM02,code=730)" even with auto commit 
> on.
> Ran it through a debugger, just a simple accounting bug.
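A hypothetical sqlline repro of the failure mode described above (table name is a placeholder):
{noformat}
!autocommit on
-- With auto commit on, mutations should be flushed in batches, so this
-- statement should never trip the client-side MutationState byte limit:
UPSERT INTO t SELECT * FROM t;
-- Before the fix: Error: ERROR 730 (LIM02): MutationState size is bigger
-- than maximum allowed number of bytes (state=LIM02,code=730)
{noformat}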



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4671) Fix minor size accounting bug for MutationSize

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4671.
---
Resolution: Fixed

> Fix minor size accounting bug for MutationSize
> --
>
> Key: PHOENIX-4671
> URL: https://issues.apache.org/jira/browse/PHOENIX-4671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4671-v2.txt, 4671.txt
>
>
> Just ran into a bug where UPSERT INTO table ... SELECT ... FROM table would 
> fail due to "Error: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes (state=LIM02,code=730)" even with auto commit 
> on.
> Ran it through a debugger, just a simple accounting bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4644) Array modification functions should require two arguments

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4644.
---
Resolution: Fixed

> Array modification functions should require two arguments
> -
>
> Key: PHOENIX-4644
> URL: https://issues.apache.org/jira/browse/PHOENIX-4644
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4644_v1.patch, PHOENIX-4644_v2.patch
>
>
> ARRAY_APPEND, ARRAY_PREPEND, and ARRAY_CAT should require two arguments. 
> Also, if the second argument is null, we must make sure the entire expression 
> does not return null; instead it should return the first argument. To 
> accomplish this, we need a ParseNode that overrides the method controlling 
> null propagation, and we must ensure it's used for these functions.
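To illustrate the intended semantics (a sketch only; the table t and its INTEGER ARRAY column arr are hypothetical):
{code}
SELECT ARRAY_APPEND(arr) FROM t;        -- should now fail to compile (one argument)
SELECT ARRAY_APPEND(arr, NULL) FROM t;  -- expected: the value of arr, not NULL
SELECT ARRAY_CAT(arr, NULL) FROM t;     -- expected: the value of arr, not NULL
{code}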



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4646) "The data exceeds the max capacity for the data type" error for valid scenarios.

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462687#comment-16462687
 ] 

James Taylor commented on PHOENIX-4646:
---

Ping [~sergey.soldatov]. Would be good to commit this.

> "The data exceeds the max capacity for the data type" error for valid scenarios.
> --
>
> Key: PHOENIX-4646
> URL: https://issues.apache.org/jira/browse/PHOENIX-4646
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4646.patch
>
>
> Here is an example:
> {noformat}
> create table test_trim_source(name varchar(160) primary key, id varchar(120), 
> address varchar(160)); 
> create table test_trim_target(name varchar(160) primary key, id varchar(10), 
> address 
>  varchar(10));
> upsert into test_trim_source values('test','test','test');
> upsert into test_trim_target select * from test_trim_source;
> {noformat}
> It fails with 
> {noformat}
> Error: ERROR 206 (22003): The data exceeds the max capacity for the data 
> type. value='test' columnName=ID (state=22003,code=206)
> java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity 
> for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
> capacity for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
>   at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat} 
> The problem is that PVarchar.isSizeCompatible ignores the actual length of 
> the value whenever the source column has a declared max size.
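The shape of the intended check, as a hedged sketch (a hypothetical helper, not the actual PVarchar code):
{code}
// Sketch: the actual value length must be compared against the target's max
// length even when the source column declares its own max size, e.g. copying
// the 4-character value 'test' from VARCHAR(120) into VARCHAR(10) must pass.
static boolean isSizeCompatible(byte[] value, Integer targetMaxLength) {
    return targetMaxLength == null || value == null || value.length <= targetMaxLength;
}
{code}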



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4623) Inconsistent physical view index name

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462690#comment-16462690
 ] 

James Taylor commented on PHOENIX-4623:
---

[~tdsilva] - is this a bug? [~akshita.malhotra] - how about a patch?

> Inconsistent physical view index name
> -
>
> Key: PHOENIX-4623
> URL: https://issues.apache.org/jira/browse/PHOENIX-4623
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Akshita Malhotra
>Priority: Major
>  Labels: easyfix
> Fix For: 4.14.0
>
>
> The physical view indexes are incorrectly named when the table has a schema. 
> For instance, if a table is named "SCH.TABLE", during creation the physical 
> index table is named "_IDX_SCH.TABLE", which doesn't look right. When 
> namespaces are enabled, the physical index table is named "SCH:_IDX_TABLE".
> The client APIs, on the other hand, such as 
> MetaDataUtil.getViewIndexName(String schemaName, String tableName), which 
> retrieves the physical view index name, return "SCH._IDX_TABLE". That name 
> follows the convention and is the right one, but functionally it leads to 
> wrong results because it is not how the physical indexes are actually named 
> during construction.
>  
>  
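A short illustration of the mismatch, using the names and return values from the description above:
{code}
// Physical name used at creation time:  "_IDX_SCH.TABLE"
// With namespaces enabled:              "SCH:_IDX_TABLE"
// Name computed by the client API:
String name = MetaDataUtil.getViewIndexName("SCH", "TABLE"); // "SCH._IDX_TABLE"
{code}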



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4590) Reduce log level in BaseResultIterators#getStatsForParallelizationProp when parent table not found

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4590.
---
Resolution: Won't Fix

Please reopen if you feel strongly, [~rajeshbabu].

> Reduce log level in BaseResultIterators#getStatsForParallelizationProp when 
> parent table not found
> --
>
> Key: PHOENIX-4590
> URL: https://issues.apache.org/jira/browse/PHOENIX-4590
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Minor
> Fix For: 4.14.0
>
>
> Currently, when deciding whether to use stats for parallelization, we fall 
> back to checking the parent table if the index table doesn't have stats. When 
> the parent table is not present in the connection metadata, which happens 
> when we scan the index directly (not a typical case, but it does occur), we 
> see this log message every time. I think we can reduce the log level to debug 
> or trace, since the parent table is usually present in the connection 
> metadata, so this case should be very rare.
> {noformat}
> } catch (TableNotFoundException e) {
>     logger.warn("Unable to find parent table \"" + parentTableName
>             + "\" of table \"" + table.getName().getString()
>             + "\" to determine USE_STATS_FOR_PARALLELIZATION", e);
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462705#comment-16462705
 ] 

James Taylor commented on PHOENIX-4534:
---

I'm not seeing this committed to any of the 4.x or master branches. Is it 
needed, [~rajeshbabu], [~elserj], [~sergey.soldatov]? Sounds like it's needed 
maybe only in master, which is on HBase 2.0?

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and then upsert the same row again, the corresponding index row 
> has a null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3919) Add hbase-hadoop2-compat as compile time dependency

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462723#comment-16462723
 ] 

James Taylor commented on PHOENIX-3919:
---

Do we need this, [~elserj], [~alexaraujo]?

> Add hbase-hadoop2-compat as compile time dependency
> ---
>
> Key: PHOENIX-3919
> URL: https://issues.apache.org/jira/browse/PHOENIX-3919
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3819.patch
>
>
> HBASE-17448 added hbase-hadoop2-compat as a required dependency for clients, 
> but it is currently a test-only dependency in some Phoenix modules.
> Make it an explicit compile-time dependency in those modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3856) StatementContext class constructor not honouring supplied scan object

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462731#comment-16462731
 ] 

James Taylor commented on PHOENIX-3856:
---

+1

> StatementContext class  constructor not honouring supplied scan object
> --
>
> Key: PHOENIX-3856
> URL: https://issues.apache.org/jira/browse/PHOENIX-3856
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3856.patch
>
>
> In the constructor below, we create an additional scan object instead of 
> using the supplied one:
>  public StatementContext(PhoenixStatement statement, Scan scan) {
>      this(statement, FromCompiler.EMPTY_TABLE_RESOLVER, new Scan(),
>              new SequenceManager(statement));
>  }
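The presumable one-line fix, as a sketch (not a committed patch):
{code}
// Pass the caller's scan through instead of discarding it for a new Scan().
public StatementContext(PhoenixStatement statement, Scan scan) {
    this(statement, FromCompiler.EMPTY_TABLE_RESOLVER, scan,
            new SequenceManager(statement));
}
{code}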



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-3856) StatementContext class constructor not honouring supplied scan object

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462731#comment-16462731
 ] 

James Taylor edited comment on PHOENIX-3856 at 5/3/18 4:34 PM:
---

+1. Not critical, though, because the only caller is the one-arg constructor 
that passes in a new Scan anyway, but we should still fix it for the sake of 
code cleanliness.


was (Author: jamestaylor):
+1

> StatementContext class  constructor not honouring supplied scan object
> --
>
> Key: PHOENIX-3856
> URL: https://issues.apache.org/jira/browse/PHOENIX-3856
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3856.patch
>
>
> In the constructor below, we create an additional scan object instead of 
> using the supplied one:
>  public StatementContext(PhoenixStatement statement, Scan scan) {
>      this(statement, FromCompiler.EMPTY_TABLE_RESOLVER, new Scan(),
>              new SequenceManager(statement));
>  }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3828) Local Index - WrongRegionException when selecting column from base table and filtering on indexed column

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462735#comment-16462735
 ] 

James Taylor commented on PHOENIX-3828:
---

[~rajeshbabu]?

> Local Index - WrongRegionException when selecting column from base table and 
> filtering on indexed column
> 
>
> Key: PHOENIX-3828
> URL: https://issues.apache.org/jira/browse/PHOENIX-3828
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
>
> {noformat}
> Caused by: org.apache.hadoop.hbase.regionserver.WrongRegionException: 
> Requested row out of range for Get on HRegion 
> T,00Dxx001gES005001xx03DGQX\x7F\xFF\xFE\xB6\xE7(\x91\xDF017526052jdM  
>  ,1493854066165.f1f58ac91adc762ad3e22e7f0ae1d85e., 
> startKey='00Dxx001gES005001xx03DGQX\x7F\xFF\xFE\xB6\xE7(\x91\xDF017526052jdM
>', getEndKey()='', 
> row='\x00\x02a05001xx03DGQX\x7F\xFF\xFE\xB6\xE30\xFD\x970171318362Rz   '
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:5246)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6990)
>   at 
> org.apache.phoenix.util.IndexUtil.wrapResultUsingOffset(IndexUtil.java:529)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.nextRaw(BaseScannerRegionObserver.java:500)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:283)
> {noformat}
> This is caused when a non-index column is part of select statement while 
> filtering on an indexed column: {{SELECT STD_COL FROM T WHERE INDEXED_COL < 
> 1}}.
> Schema
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>  PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, STD_COL 
> VARCHAR, INDEXED_COL INTEGER,
>  CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI)) 
>  VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (INDEXED_COL);
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3475) MetaData #getTables() API doesn't return view indexes

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462738#comment-16462738
 ] 

James Taylor commented on PHOENIX-3475:
---

What should we do with this JIRA, [~akshita.malhotra]?

> MetaData #getTables() API doesn't return view indexes
> -
>
> Key: PHOENIX-3475
> URL: https://issues.apache.org/jira/browse/PHOENIX-3475
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Akshita Malhotra
>Priority: Major
> Fix For: 4.14.0
>
>
> The HBase migration tool uses the DatabaseMetaData#getTables() API to retrieve 
> the tables for copying data. We have found that the API doesn't return base 
> index tables (_IDX_).
> For testing purposes, we issue the following DDL to generate the view and the 
> corresponding view index:
> - CREATE VIEW IF NOT EXISTS MIGRATIONTEST_VIEW (OLD_VALUE_VIEW varchar) AS 
> SELECT * FROM MIGRATIONTEST WHERE OLD_VALUE like 'E%'
> - CREATE INDEX IF NOT EXISTS MIGRATIONTEST_VIEW_IDX ON MIGRATIONTEST_VIEW 
> (OLD_VALUE_VIEW)
> Using the HBase API, we were able to confirm that the base index table 
> (_IDX_MIGRATIONTEST) is created.
> Neither the JDBC DatabaseMetaData API nor the Phoenix getMetaDataCache API 
> seems to return view indexes. Also, the PMetaData#getTableRef API throws 
> "TableNotFoundException" when attempting to fetch the PTable corresponding to 
> the base index table name.
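A minimal JDBC check for the reported behavior (a sketch; the connection URL is a placeholder):
{code}
import java.sql.*;

public class ListTables {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            DatabaseMetaData md = conn.getMetaData();
            // List every table visible through the metadata API; per this
            // report, the base view index table (_IDX_MIGRATIONTEST) is absent.
            try (ResultSet rs = md.getTables(null, null, "%", null)) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_SCHEM") + "."
                            + rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}
{code}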



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-3454) ON DUPLICATE KEY construct doesn't work correctly when using lower case column names

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3454.
---
Resolution: Fixed

> ON DUPLICATE KEY construct doesn't work correctly when using lower case 
> column names
> 
>
> Key: PHOENIX-3454
> URL: https://issues.apache.org/jira/browse/PHOENIX-3454
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3454.patch, Screen Shot 2016-11-04 at 1.29.43 
> PM.png
>
>
> See this test case for a repro:
> {code}
> @Test
> public void testDeleteOnSingleLowerCaseVarcharColumn() throws Exception {
> Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
> Connection conn = DriverManager.getConnection(getUrl(), props);
> conn.setAutoCommit(false);
> String tableName = generateUniqueName();
> String ddl = " create table " + tableName + "(pk varchar primary key, 
> \"counter1\" varchar, \"counter2\" smallint)";
> conn.createStatement().execute(ddl);
> String dml = "UPSERT INTO " + tableName + " VALUES('a','b') ON 
> DUPLICATE KEY UPDATE \"counter1\" = null";
> conn.createStatement().execute(dml);
> conn.createStatement().execute(dml);
> conn.commit();
> ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + 
> tableName);
> assertTrue(rs.next());
> assertEquals("a",rs.getString(1));
> assertEquals(null,rs.getString(2));
> assertFalse(rs.next());
> 
> dml = "UPSERT INTO " + tableName + " VALUES('a','b',0)";
> conn.createStatement().execute(dml);
> dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE 
> KEY UPDATE \"counter1\" = null, \"counter2\" = \"counter2\" + 1";
> conn.createStatement().execute(dml);
> dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE 
> KEY UPDATE \"counter1\" = 'c', \"counter2\" = \"counter2\" + 1";
> conn.createStatement().execute(dml);
> conn.commit();
> rs = conn.createStatement().executeQuery("SELECT * FROM " + 
> tableName);
> assertTrue(rs.next());
> assertEquals("a",rs.getString(1));
> assertEquals("c",rs.getString(2));
> assertEquals(2,rs.getInt(3));
> assertFalse(rs.next());
> conn.close();
> }
> {code}
> After changing the column names to upper case (or removing the quotes), the 
> test passes.
> FYI, [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3454) ON DUPLICATE KEY construct doesn't work correctly when using lower case column names

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3454:
--
Fix Version/s: (was: 4.14.0)
   4.13.0

> ON DUPLICATE KEY construct doesn't work correctly when using lower case 
> column names
> 
>
> Key: PHOENIX-3454
> URL: https://issues.apache.org/jira/browse/PHOENIX-3454
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3454.patch, Screen Shot 2016-11-04 at 1.29.43 
> PM.png
>
>
> See this test case for a repro:
> {code}
> @Test
> public void testDeleteOnSingleLowerCaseVarcharColumn() throws Exception {
> Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
> Connection conn = DriverManager.getConnection(getUrl(), props);
> conn.setAutoCommit(false);
> String tableName = generateUniqueName();
> String ddl = " create table " + tableName + "(pk varchar primary key, 
> \"counter1\" varchar, \"counter2\" smallint)";
> conn.createStatement().execute(ddl);
> String dml = "UPSERT INTO " + tableName + " VALUES('a','b') ON 
> DUPLICATE KEY UPDATE \"counter1\" = null";
> conn.createStatement().execute(dml);
> conn.createStatement().execute(dml);
> conn.commit();
> ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + 
> tableName);
> assertTrue(rs.next());
> assertEquals("a",rs.getString(1));
> assertEquals(null,rs.getString(2));
> assertFalse(rs.next());
> 
> dml = "UPSERT INTO " + tableName + " VALUES('a','b',0)";
> conn.createStatement().execute(dml);
> dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE 
> KEY UPDATE \"counter1\" = null, \"counter2\" = \"counter2\" + 1";
> conn.createStatement().execute(dml);
> dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE 
> KEY UPDATE \"counter1\" = 'c', \"counter2\" = \"counter2\" + 1";
> conn.createStatement().execute(dml);
> conn.commit();
> rs = conn.createStatement().executeQuery("SELECT * FROM " + 
> tableName);
> assertTrue(rs.next());
> assertEquals("a",rs.getString(1));
> assertEquals("c",rs.getString(2));
> assertEquals(2,rs.getInt(3));
> assertFalse(rs.next());
> conn.close();
> }
> {code}
> After changing the column names to upper case (or removing the quotes), the 
> test passes.
> FYI, [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3314:
--
Fix Version/s: (was: 4.14.0)
   4.13.0

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.13.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3314.
---
Resolution: Fixed

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.13.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3856) StatementContext class constructor not honouring supplied scan object

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3856:
--
Fix Version/s: (was: 4.14.0)
   5.0.0
   4.15.0

> StatementContext class  constructor not honouring supplied scan object
> --
>
> Key: PHOENIX-3856
> URL: https://issues.apache.org/jira/browse/PHOENIX-3856
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-3856.patch
>
>
> In the constructor below, we create an additional scan object instead of 
> using the supplied one:
>  public StatementContext(PhoenixStatement statement, Scan scan) {
>      this(statement, FromCompiler.EMPTY_TABLE_RESOLVER, new Scan(),
>              new SequenceManager(statement));
>  }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3269) Create Hive JIRAs based on limitations listed in Phoenix/Hive integration

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462765#comment-16462765
 ] 

James Taylor commented on PHOENIX-3269:
---

Do you think we have a good handle on this, [~sergey.soldatov]? I noticed a 
number of limitations listed, but I didn't see JIRAs associated with them.

> Create Hive JIRAs based on limitations listed in Phoenix/Hive integration
> ---
>
> Key: PHOENIX-3269
> URL: https://issues.apache.org/jira/browse/PHOENIX-3269
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Sergey Soldatov
>Priority: Minor
> Fix For: 4.14.0
>
>
> We should file specific JIRAs for Hive based on the limitations and issues 
> we've found in the Phoenix/Hive integration listed here: 
> https://phoenix.apache.org/hive_storage_handler.html#Limitations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3270) Remove @Ignore tag for TransactionIT.testNonTxToTxTableFailure()

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3270:
--
Fix Version/s: (was: 4.14.0)
   4.15.0

> Remove @Ignore tag for TransactionIT.testNonTxToTxTableFailure()
> 
>
> Key: PHOENIX-3270
> URL: https://issues.apache.org/jira/browse/PHOENIX-3270
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> We should remove the @Ignore tag for 
> TransactionIT.testNonTxToTxTableFailure(). The test passes and it's not 
> clear why it was added in the first place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-3270) Remove @Ignore tag for TransactionIT.testNonTxToTxTableFailure()

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3270:
-

Assignee: James Taylor

> Remove @Ignore tag for TransactionIT.testNonTxToTxTableFailure()
> 
>
> Key: PHOENIX-3270
> URL: https://issues.apache.org/jira/browse/PHOENIX-3270
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> We should remove the @Ignore tag for 
> TransactionIT.testNonTxToTxTableFailure(). The test passes and it's not 
> clear why it was added in the first place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3176) Rows with a future timestamp in the row_timestamp column will be skipped

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3176:
--
Labels: newbie  (was: )

> Rows with a future timestamp in the row_timestamp column will be skipped
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
>Priority: Major
>  Labels: newbie
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when the row_timestamp column has a future timestamp
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, where the scan range is capped at compile time:
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3178:
--
Labels: newbie  (was: )

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>  Labels: newbie
> Fix For: 4.14.0
>
>
> To reproduce, use the following test:
> {code}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception 
> {
> Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
> props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, 
> Integer.toString(3));
> props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, 
> Integer.toString(3));
> props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, 
> Integer.toString(3));
> Connection conn = DriverManager.getConnection(getUrl(), props);
> conn.setAutoCommit(false);
> conn.createStatement().execute("CREATE SEQUENCE keys");
> String tableName = generateRandomString();
> conn.createStatement().execute(
> "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val 
> INTEGER)");
> conn.createStatement().execute(
> "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
> conn.commit();
> for (int i=0; i<6; i++) {
> Statement stmt = conn.createStatement();
> int upsertCount = stmt.executeUpdate(
> "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, 
> val FROM " + tableName);
> conn.commit();
> assertEquals((int)Math.pow(2, i), upsertCount);
> }
> conn.close();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3176) Rows with a future timestamp in the row_timestamp column will be skipped

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462782#comment-16462782
 ] 

James Taylor commented on PHOENIX-3176:
---

Ping [~an...@apache.org]?

> Rows with a future timestamp in the row_timestamp column will be skipped
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
>Priority: Major
>  Labels: newbie
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when the row_timestamp column has a future timestamp
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, where the scan range is capped at compile time:
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3099) Update to Sqlline 1.1.10

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462783#comment-16462783
 ] 

James Taylor commented on PHOENIX-3099:
---

FYI, we're on sqlline 1.2.0 now. Should we close this, [~elserj]? Any further 
action required?

> Update to Sqlline 1.1.10
> 
>
> Key: PHOENIX-3099
> URL: https://issues.apache.org/jira/browse/PHOENIX-3099
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Priority: Major
> Fix For: 4.14.0
>
>
> One of the bugfixes that sqlline 1.1.10 will likely include is a fix for 
> running SQL files which start with a comment. We should try to push for a 
> release and then upgrade Phoenix to use it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-2897) Some ITs are not run

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2897.
---
Resolution: Cannot Reproduce

I'm not finding the unit tests referenced, so it looks like this was fixed a 
while back. Please reopen if you see further issues.

> Some ITs are not run 
> -
>
> Key: PHOENIX-2897
> URL: https://issues.apache.org/jira/browse/PHOENIX-2897
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 4.14.0
>
>
> I've noticed that some of the IT tests are not run from the mvn verify 
> command. These are tests that are not marked with an explicit {{@Category}} 
> or do not extend the base test classes. 
> Some example ones are: 
> {code}
> IndexHandlerIT
> ReadWriteKeyValuesWithCodecIT
> {code}
> See the lack of these tests in 
> https://builds.apache.org/view/All/job/phoenix-master/1223/consoleFull 
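A sketch of the usual remedy (illustrative; the category class is assumed from Phoenix's test tree):
{code}
import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;

// An IT is only picked up by mvn verify when it carries an explicit category
// (or extends a base test class that does), so the failsafe groups include it.
@Category(NeedsOwnMiniClusterTest.class)
public class IndexHandlerIT {
    @Test
    public void testHandlerIsInvoked() {
        // test body elided
    }
}
{code}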



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2736) Fix possible data loss with local indexes when there are splits during bulkload

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462792#comment-16462792
 ] 

James Taylor commented on PHOENIX-2736:
---

Still an issue, [~rajeshbabu]?

> Fix possible data loss with local indexes when there are splits during 
> bulkload
> ---
>
> Key: PHOENIX-2736
> URL: https://issues.apache.org/jira/browse/PHOENIX-2736
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
>
> Currently, when there are splits during a bulk load, LoadIncrementalHFiles 
> moves the full HFile to the first daughter region instead of properly 
> splitting the HFile across the two daughter regions, and we may also fail to 
> properly replace the region start key if there are merges during the bulk 
> load. To fix this, we can make the HalfStoreFileReader configurable in 
> LoadIncrementalHFiles and use IndexHalfStoreFileReader for local indexes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2737) Make sure local indexes work properly after fixing region overlaps by HBCK.

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462789#comment-16462789
 ] 

James Taylor commented on PHOENIX-2737:
---

Still an issue, [~rajeshbabu]?

> Make sure local indexes work properly after fixing region overlaps by HBCK.
> ---
>
> Key: PHOENIX-2737
> URL: https://issues.apache.org/jira/browse/PHOENIX-2737
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
>
> When there are region overlaps, HBCK fixes them by moving the HFiles of the 
> overlapping regions into a new region spanning their common key range. In 
> that case we might not properly replace the region start key in the HFiles, 
> and because there is no parent-child region relation in hbase:meta, we cannot 
> identify the start key in the HFiles. To fix this, we need to add a separator 
> after the region start key so that we can easily identify the start key in an 
> HFile without always touching hbase:meta. Then, when we create scanners for 
> the store files, we can compare the start key recorded in the HFile with the 
> current region start key and, if it has changed, simply replace the old start 
> key with the current one. During compaction we can properly replace the start 
> key with the actual key values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-2547) Spark Data Source API: Filter operation doesn't work for column names containing a white space

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2547.
---
Resolution: Fixed

> Spark Data Source API: Filter operation doesn't work for column names 
> containing a white space
> --
>
> Key: PHOENIX-2547
> URL: https://issues.apache.org/jira/browse/PHOENIX-2547
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Suhas Nalapure
>Assignee: Josh Mahonin
>Priority: Critical
>  Labels: verify
> Fix For: 4.9.0
>
> Attachments: phoenix_spark.patch
>
>
> Dataframe.filter() results in 
> "org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): 
> Syntax error. Mismatched input. Expecting "LPAREN", got "first" at line 1, 
> column 52."  when a column name has a white space in it.
> Steps to Reproduce
> --
> 1. Create a test table & insert a row as below
>create table "space" ("key" varchar primary key, "first name" varchar);
>upsert into "space" values ('key1', 'xyz');
> 2. Java code that leads to the error:
>  //omitting the DataFrame creation part
>df = df.filter(df.col("first name").equalTo("xyz"));
>   System.out.println(df.collectAsList());
> 3. I could see the following statements in the Phoenix logs which may have 
> led to the exception (stack trace given below)
> 2015-12-28 17:52:24,327 INFO  [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> UseSelectColumns=true, selectColumnList.size()=2, selectColumnList=key,first 
> name 
> 2015-12-28 17:52:24,328 INFO  [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> Select Statement: SELECT "key","0"."first name" FROM "space" WHERE ( first 
> name = 'xyz')
> 2015-12-28 17:52:24,333 ERROR [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> Failed to get the query plan with error [ERROR 604 (42P00): Syntax error. 
> Mismatched input. Expecting "LPAREN", got "first" at line 1, column 52.]
> Exception Stack Trace:
> --
> java.lang.RuntimeException: 
> org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): 
> Syntax error. Mismatched input. Expecting "LPAREN", got "first" at line 1, 
> column 52.
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:125)
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:80)
>   at 
> org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.phoenix.spark.PhoenixRDD.getPartitions(PhoenixRDD.scala:48)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.

[jira] [Updated] (PHOENIX-2547) Spark Data Source API: Filter operation doesn't work for column names containing a white space

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2547:
--
Fix Version/s: (was: 4.14.0)
   4.9.0

> Spark Data Source API: Filter operation doesn't work for column names 
> containing a white space
> --
>
> Key: PHOENIX-2547
> URL: https://issues.apache.org/jira/browse/PHOENIX-2547
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Suhas Nalapure
>Assignee: Josh Mahonin
>Priority: Critical
>  Labels: verify
> Fix For: 4.9.0
>
> Attachments: phoenix_spark.patch
>
>
> Dataframe.filter() results in 
> "org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): 
> Syntax error. Mismatched input. Expecting "LPAREN", got "first" at line 1, 
> column 52."  when a column name has a white space in it.
> Steps to Reproduce
> --
> 1. Create a test table & insert a row as below
>create table "space" ("key" varchar primary key, "first name" varchar);
>upsert into "space" values ('key1', 'xyz');
> 2. Java code that leads to the error:
>  //omitting the DataFrame creation part
>df = df.filter(df.col("first name").equalTo("xyz"));
>   System.out.println(df.collectAsList());
> 3. I could see the following statements in the Phoenix logs which may have 
> led to the exception (stack trace given below)
> 2015-12-28 17:52:24,327 INFO  [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> UseSelectColumns=true, selectColumnList.size()=2, selectColumnList=key,first 
> name 
> 2015-12-28 17:52:24,328 INFO  [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> Select Statement: SELECT "key","0"."first name" FROM "space" WHERE ( first 
> name = 'xyz')
> 2015-12-28 17:52:24,333 ERROR [main] 
> org.apache.phoenix.mapreduce.PhoenixInputFormat
> Failed to get the query plan with error [ERROR 604 (42P00): Syntax error. 
> Mismatched input. Expecting "LPAREN", got "first" at line 1, column 52.]
> Exception Stack Trace:
> --
> java.lang.RuntimeException: 
> org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): 
> Syntax error. Mismatched input. Expecting "LPAREN", got "first" at line 1, 
> column 52.
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:125)
>   at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:80)
>   at 
> org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.phoenix.spark.PhoenixRDD.getPartitions(PhoenixRDD.scala:48)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scal

[jira] [Resolved] (PHOENIX-2513) Pherf - IllegalArgumentException during data load

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2513.
---
Resolution: Cannot Reproduce

> Pherf - IllegalArgumentException during data load
> -
>
> Key: PHOENIX-2513
> URL: https://issues.apache.org/jira/browse/PHOENIX-2513
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Priority: Minor
> Fix For: 4.14.0
>
>
> {code}
> Caused by: java.lang.IllegalArgumentException: Requested random string length 
> -1 is less than 0.
>   at 
> org.apache.commons.lang.RandomStringUtils.random(RandomStringUtils.java:231)
>   at 
> org.apache.commons.lang.RandomStringUtils.random(RandomStringUtils.java:166)
>   at 
> org.apache.commons.lang.RandomStringUtils.random(RandomStringUtils.java:146)
>   at 
> org.apache.commons.lang.RandomStringUtils.randomAlphanumeric(RandomStringUtils.java:114)
>   at 
> org.apache.phoenix.pherf.rules.RulesApplier.getSequentialDataValue(RulesApplier.java:373)
>   at 
> org.apache.phoenix.pherf.rules.RulesApplier.getDataValue(RulesApplier.java:155)
>   at 
> org.apache.phoenix.pherf.rules.RulesApplier.getDataForRule(RulesApplier.java:99)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload.buildStatement(WriteWorkload.java:317)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$700(WriteWorkload.java:53)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:268)
>   at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:249)
> {code}
> [~cody.mar...@gmail.com] Any idea about this exception, which happened after 
> 100M+ rows during data load?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-2456) StaleRegionBoundaryCacheException on query with stats

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2456.
---
Resolution: Cannot Reproduce

> StaleRegionBoundaryCacheException on query with stats
> -
>
> Key: PHOENIX-2456
> URL: https://issues.apache.org/jira/browse/PHOENIX-2456
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Minor
>  Labels: verify
> Fix For: 4.14.0
>
>
> {code}org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.{code}
> Got this exception after a data load; it persists even after a client 
> restart with no split activity on the server. However, the query works fine 
> after the stats table is truncated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-2438) select * from table returns fraction of rows

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2438.
---
Resolution: Cannot Reproduce

> select * from table returns fraction of rows
> 
>
> Key: PHOENIX-2438
> URL: https://issues.apache.org/jira/browse/PHOENIX-2438
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
> Environment: Development
>Reporter: Badam Srinivas Praveen
>Priority: Minor
>  Labels: verify
> Fix For: 4.14.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Mismatch in number of rows while selecting table.
> select * from table fetches only 305 rows while select count(1) from table 
> gives 52528 rows.
>  select count(1) from tbl_recipe;
> +--+
> | COUNT(1) |
> +--+
> | 52528|
> +--+
> select * from tbl_recipe;
> +--+--+--+--+
> | 300528   | XLSiP700C600T31  | EPI\XLSiP700C600T31.xml  | 9 |
> | 300532   | SSiGe-SiH4-C4gd2 | EpiXP\SSiGe-SiH4-C4gd2.xml   | 9 |
> | 300536   | SSiGe-SiH4-C09a8 | Epi4\SSiGe-SiH4-C09a8.xml| 9 |
> | 300540   | Lrrr_3_65| SiCoNi\Lrrr_3_65.xml | 9 |
> | 300545   | ZZHCLBAK10TORA   | Epi4\ZZHCLBAK10TORA.xml  | 9 |
> | 300549   | GB_SiGe20_713| Epi4\GB_SiGe20_713.xml   | 9 |
> +--+--+--+--+
> 305 rows selected (4.611 seconds)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-2308) Improve secondary index resiliency

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2308.
---
   Resolution: Fixed
 Assignee: Ravi Kishore Valeti
Fix Version/s: (was: 4.14.0)
   4.13.0

> Improve secondary index resiliency
> --
>
> Key: PHOENIX-2308
> URL: https://issues.apache.org/jira/browse/PHOENIX-2308
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Ravi Kishore Valeti
>Priority: Major
> Fix For: 4.13.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-1833) Optionally trigger MR job to build index automatically

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1833:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: PHOENIX-2308)

> Optionally trigger MR job to build index automatically
> --
>
> Key: PHOENIX-1833
> URL: https://issues.apache.org/jira/browse/PHOENIX-1833
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Ravi Kishore Valeti
>Priority: Major
>
> Follow on work from PHOENIX-1609. This is to have the Phoenix client 
> automatically kick off the MR index build for the ASYNC case. We can put in 
> behind a config option so that folks who'd rather start the MR job manually 
> can still do so.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-2272) Unable to run count(1), count(*) or select explicit columns queries on a table with a secondary index

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2272.
---
Resolution: Cannot Reproduce

> Unable to run count(1), count(*) or select explicit columns queries on a 
> table with a secondary index
> -
>
> Key: PHOENIX-2272
> URL: https://issues.apache.org/jira/browse/PHOENIX-2272
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.1
> Environment: HBase1.0, CDH5.4.5
>Reporter: Kumar Palaniappan
>Priority: Major
>  Labels: verify
> Fix For: 4.14.0
>
>
> Unable to run select count(1) from table_name or select count(*) from 
> table_name or select "id1" from table_name when there is a secondary index on 
> a column in that table. It returns 0 records.
> But select * from table_name works. 
> select count(*) from r3.ads;
> +--+
> | COUNT(1) |
> +--+
> | 0|
> +--+
> 1 row selected (0.104 seconds)
> 0: jdbc:phoenix:labs-**> select count(1) from r3.ads;
> +--+
> | COUNT(1) |
> +--+
> | 0|
> +--+
> select count(distinct("cstId")) from r3.ads;
> +--+
> | DISTINCT_COUNT("cstId")  |
> +--+
> | 0|
> +--+
> 1 row selected (0.114 seconds)
>  jdbc:phoenix:labs-**> select /*+ NO_INDEX */count(1) from r3.ads;
> +--+
> | COUNT(1) |
> +--+
> | 1617732  |
> +--+
> 1 row selected (2.007 seconds)
> If I force it with the NO_INDEX hint, it works.
> Is that related to this one? 
> https://issues.apache.org/jira/browse/PHOENIX-1203
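
A minimal JDBC sketch of the workaround shown above, forcing the scan over the 
data table with the NO_INDEX hint; the connection URL is a placeholder for the 
reporter's cluster.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NoIndexWorkaround {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL; substitute the real ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // Bypasses the secondary index that incorrectly returns 0 rows.
             ResultSet rs = stmt.executeQuery(
                 "SELECT /*+ NO_INDEX */ COUNT(1) FROM r3.ads")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1)); // 1617732 in the report
            }
        }
    }
}
{code}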



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-2265) Disallow creation of view over HBase table if PK not specified

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-2265:
-

Assignee: (was: NIDHI GAMBHIR)

> Disallow creation of view over HBase table if PK not specified
> --
>
> Key: PHOENIX-2265
> URL: https://issues.apache.org/jira/browse/PHOENIX-2265
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>  Labels: newbie
> Fix For: 4.14.0
>
>
> We currently allow a Phoenix view to be defined over an HBase table without 
> specifying a primary key.
> To repro, create an HBase table in the HBase shell:
> {code}
>  create 'hb1', 'f1'
> {code}
> Then create a view in Phoenix:
> {code}
> create view "hb1"("f1".a varchar);
> {code}
> This should yield an error, as we haven't specified a primary key.
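
For contrast, a sketch of the form that should remain legal, following 
Phoenix's documented mapped-view syntax; the PK column name is illustrative.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class MappedViewWithPk {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Declares the HBase row key as an explicit primary key,
            // which is what the view in the report is missing.
            stmt.execute("CREATE VIEW \"hb1\" (pk VARCHAR PRIMARY KEY, \"f1\".a VARCHAR)");
        }
    }
}
{code}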



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-2215) Updated Phoenix Causes Schema/Data Migration that Creates Invalid Table Metadata

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2215.
---
Resolution: Cannot Reproduce

> Updated Phoenix Causes Schema/Data Migration that Creates Invalid Table 
> Metadata
> 
>
> Key: PHOENIX-2215
> URL: https://issues.apache.org/jira/browse/PHOENIX-2215
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Chris Hill
>Priority: Major
> Fix For: 4.14.0
>
>
> Using Build #819 on Phoenix 4.x-HBase-0.98
> When I updated the Phoenix server and client in our Hadoop/HBase cluster, 
> some invalid table metadata was populated in SYSTEM.CATALOG. These rows seem 
> to make it impossible to update or delete the tables, as an error about 
> invalid metadata occurs.
> Here are the details. After updating the Phoenix jars (client and server) I 
> see the following.
> Run: !tables
> I get two rows for tables that previously weren't there and that have no 
> TABLE_TYPE specified. (They are the only rows like that.)
> Run: SELECT * FROM SYSTEM.CATALOG;
> I get only two rows for the tables; again the TABLE_TYPE is not specified, 
> the PK_NAME is blank, and there are no rows with COLUMN_NAMEs or 
> COLUMN_FAMILYs. These seem to be the only rows in the table with these 
> characteristics.
> As mentioned, the real problem that led me to file this issue is that the 
> table cannot be changed. (I was trying to update it.) If you try to delete 
> it, you get the following error:
> 15/08/27 12:58:05 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: TABLE_NAME: Didn't find 
> expected key values for table row in metadata row
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1422)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11629)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3420)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3402)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29998)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: Didn't find expected key values 
> for table row in metadata row
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:732)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:468)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:1442)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1396)
> ... 10 more
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1614)
> at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:93)
> at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:115)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:91)
> at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChanne

[jira] [Resolved] (PHOENIX-2000) DatabaseMetaData.getTables fails to return table list

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2000.
---
Resolution: Cannot Reproduce

> DatabaseMetaData.getTables fails to return table list
> -
>
> Key: PHOENIX-2000
> URL: https://issues.apache.org/jira/browse/PHOENIX-2000
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.1
> Environment: HDP 2.2 with hbase 4.3.1 server
>Reporter: Bilal Nemutlu
>Priority: Critical
>  Labels: verify
> Fix For: 4.14.0
>
>
>  DatabaseMetaData md = conn.getMetaData();
>   ResultSet rst = md.getTables(null, null, null, null);
>   
>   while (rst.next()) {
> System.out.println(rst.getString(1));
>   }
> Throws the following error
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> SYSTEM.CATALOG,,1432187115973.c970b53a96db5a8c1d958ac920bc45d5.: Could not 
> initialize class org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:200)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1663)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3093)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28861)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.monitoring.PhoenixMetrics$SizeMetric.update(PhoenixMetrics.java:59)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:95)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:102)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:108)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(ScanRegionObserver.java:232)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:219)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:173)
>   ... 9 more
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> SYSTEM.CATALOG,,1432187115973.c970b53a96db5a8c1d958ac920bc45d5.: Could not 
> initialize class org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:200)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1663)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3093)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28861)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.phoenix.monitoring.PhoenixMetrics
>   at 
> org.apache.phoenix.monitoring.PhoenixMetrics$SizeMetric.update(PhoenixMetrics.java:59)
>   at 
> org.apache.phoenix.memory.GlobalMemoryManager.

[jira] [Commented] (PHOENIX-2023) Build tgz only on release profile

2018-05-03 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462815#comment-16462815
 ] 

James Taylor commented on PHOENIX-2023:
---

Still an issue, [~mujtabachohan]?

> Build tgz only on release profile
> -
>
> Key: PHOENIX-2023
> URL: https://issues.apache.org/jira/browse/PHOENIX-2023
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Mujtaba Chohan
>Priority: Major
>  Labels: beginner
> Fix For: 4.14.0
>
>
> We should follow [~enis]'s lead on HBASE-13816 and save everyone some time on 
> the build cycle by moving some (all?) of the assembly bits to a release 
> profile that's only invoked at RC time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-3099) Update to Sqlline 1.1.10

2018-05-03 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-3099.
-
   Resolution: Won't Fix
Fix Version/s: (was: 4.14.0)

Ya, right you are. Looks like there's a 1.3.0, but I'll leave that for the next 
person ;) (seriously though, not much changed over 1.2.0)

> Update to Sqlline 1.1.10
> 
>
> Key: PHOENIX-3099
> URL: https://issues.apache.org/jira/browse/PHOENIX-3099
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Priority: Major
>
> One of the bugfixes that sqlline 1.1.10 will likely include is a fix for 
> running SQL files which start with a comment. We should try to push for a 
> release and then upgrade Phoenix to use it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-1342) Evaluate array length at regionserver coprocessor

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-1342:
-

Assignee: (was: ramkrishna.s.vasudevan)

> Evaluate array length at regionserver coprocessor
> -
>
> Key: PHOENIX-1342
> URL: https://issues.apache.org/jira/browse/PHOENIX-1342
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Vaclav Loffelmann
>Priority: Minor
> Fix For: 4.14.0
>
>
> Length of an array should be evaluated on the server side to prevent network 
> traffic on big arrays.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-1347) Unit tests fail if default locale is not en_US, at SortOrderExpressionTest.toChar

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-1347:
-

Assignee: (was: Samarth Jain)

> Unit tests fail if default locale is not en_US, at 
> SortOrderExpressionTest.toChar
> -
>
> Key: PHOENIX-1347
> URL: https://issues.apache.org/jira/browse/PHOENIX-1347
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sang-Jin, Park
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-1347-v2.patch, PHOENIX-1347.patch
>
>
> Failed tests: 
>   
> SortOrderExpressionTest.toChar:148->evaluateAndAssertResult:308->evaluateAndAssertResult:318
>  expected:<12/11/01 12:00 [AM]> but was:<12/11/01 12:00 [오전]>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-672:
-
Issue Type: Improvement  (was: Task)

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Karan Mehta
>Priority: Major
>  Labels: namespaces, security
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch, PHOENIX-672.addendum-5.x.patch, 
> PHOENIX-672_5.x-HBase-2.0
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3941) Filter regions to scan for local indexes based on data table leading pk filter conditions

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3941:
--
Issue Type: Improvement  (was: Bug)

> Filter regions to scan for local indexes based on data table leading pk 
> filter conditions
> -
>
> Key: PHOENIX-3941
> URL: https://issues.apache.org/jira/browse/PHOENIX-3941
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>  Labels: SFDC, localIndex
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-3941_v1.patch, PHOENIX-3941_v2.patch, 
> PHOENIX-3941_v3.patch, PHOENIX-3941_v4.patch
>
>
> Had a good offline conversation with [~ndimiduk] at PhoenixCon about local 
> indexes. Depending on the query, we can often prune the regions we need 
> to scan over based on the where conditions against the data table pk. For 
> example, with a multi-tenant table, we only need to scan the regions that are 
> prefixed by the tenant ID.
> We can easily get this information from the compilation of the query against 
> the data table (which we always do), through the 
> statementContext.getScanRanges() structure. We'd just want to keep a pointer 
> to the data table QueryPlan from the local index QueryPlan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] Release of Apache Phoenix 4.14.0 RC0

2018-05-03 Thread James Taylor
Hello Everyone,

This is a call for a vote on Apache Phoenix 4.14.0 RC0. This is a patch
release of Phoenix 4.14 and is compatible with Apache HBase 0.98, 1.1, 1.2,
1.3 and CDH 5.11, 5.12, 5.13, and 5.14. The release includes both a
source-only release and a convenience binary release for each supported
HBase version.

This release has feature parity with supported HBase versions and includes
the following improvements:
- Over 90 bug fixes
- Support for GRANT/REVOKE [1][2]
- Avoid server retries for mutable indexes [3]
- Pure client side transactional index maintenance [4]
- Prune local index regions scanned during query execution [5][6]
- Support NOT NULL constraint for any column for immutable table [7]

The source tarball, including signatures, digests, etc can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-0.98-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-1.1-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-1.2-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-1.3-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.11.2-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.12.2-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.13.2-rc0/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.14.2-rc0/src/

The binary artifacts can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-0.98-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-1.1-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-1.2-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-HBase-1.3-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.11.2-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.12.2-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.13.2-rc0/bin/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.14.2-rc0/bin/

The binary parcels for CDH can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.11.2-rc0/parcels/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.12.2-rc0/parcels/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.13.2-rc0/parcels/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.0-cdh5.14.2-rc0/parcels/

For a complete list of changes, see:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12342145

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/mujtaba.asc ( HBase-x.x versions )
https://people.apache.org/keys/committer/pboado.asc ( cdh5.x versions )
https://dist.apache.org/repos/dist/dev/phoenix/KEYS

The hash and tag to be voted upon:
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=3b45df9990648f466b987a7576e9a40974fe8c2f
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.14.0-HBase-0.98-rc0
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=190e768a671e7f1ada4c7efc28bdb1cc7db6234c
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.14.0-HBase-1.1-rc0
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=6d994b02b51ecc09e1284d11f025ccd9193b7c06
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.14.0-HBase-1.2-rc0
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=166d159674155c357203af389b6ac1d0e6842b75
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.14.0-HBase-1.3-rc0
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=8ed7eb0f5878202d744dd3d7a08235a45adfd3bc
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/tags/v4.14.0-cdh5.11.2-rc0
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a769d8ab81618ae981b095b8cdead6bff63eb627
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/tags/v4.14.0-cdh5.12.2-rc0
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=e3cfd8dae6a780c707f7d380e932f1f82d3bc702
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/tags/v4.14.0-cdh5.13.2-rc0
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=ebceebdd11d097e5e3697ab8c7f565406b55443a
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/tags/v4.14.0-cdh5.14.2-rc0

Vote will be open for at least 72 hours. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team

[1] https://phoenix.apache.org/language/index.html#grant
[2] https://phoenix.apache.org/language/index.html#revoke
[3] https://issues.apache.org/jira/

[jira] [Commented] (PHOENIX-3176) Rows will be skipped which are having future timestamp in row_timestamp column

2018-05-03 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462931#comment-16462931
 ] 

Ankit Singhal commented on PHOENIX-3176:


Yes [~jamestaylor], it is now fixed. Thanks for the ping.

Do you know from which version we started using the latest timestamp, so that 
I can mark this JIRA closed accordingly?

> Rows will be skipped which are having future timestamp in row_timestamp column
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when row_timestamp have future timestamp
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, where scan range is capped to compile time.
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2018-05-03 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462940#comment-16462940
 ] 

Thomas D'Silva commented on PHOENIX-3534:
-

[~jamestaylor]

Currently, if you change a table property on a base table that is valid on a 
view, we propagate it to all child views. If the table property is not mutable 
on the child view, we set it to the parent view value. If it is mutable and the 
view didn't change the property, we also set it to the parent view value.

After splittable SYSTEM.CATALOG, since we resolve the parent hierarchy of a 
view while combining columns, we can also inherit the table properties from the 
base table. If a table property is valid on a view but not mutable on a view, 
we can just use the base table value. If it is mutable on a view, we don't know 
whether the property was changed on the view or not, so we have to use the 
value that is set on the view.

This is different from the existing behavior, where if you change a table 
property that is valid and mutable on a view, and it wasn't changed on the 
view, it would get propagated to all the child views. Is this ok?
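
Restating the proposed inheritance rule as a sketch; the method and parameter 
names are invented for illustration, not Phoenix APIs.
{code:java}
public class ViewPropertyInheritance {
    // Effective value of a table property on a view, per the proposal above.
    static Object effectiveValue(Object baseTableValue, Object viewValue,
                                 boolean mutableOnView) {
        if (!mutableOnView) {
            // Not mutable on the view: always inherit from the base table.
            return baseTableValue;
        }
        // Mutable on the view: we can't tell whether the view ever changed it,
        // so any value set on the view wins.
        return viewValue != null ? viewValue : baseTableValue;
    }

    public static void main(String[] args) {
        System.out.println(effectiveValue("BASE", null, true));    // BASE
        System.out.println(effectiveValue("BASE", "VIEW", true));  // VIEW
        System.out.println(effectiveValue("BASE", "VIEW", false)); // BASE
    }
}
{code}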

> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-3534-wip.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table is single region 
> based on the server-side row locks being held for operations that impact a 
> table and all of it's views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup  of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd need to also do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for their change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4623) Inconsistent physical view index name

2018-05-03 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462964#comment-16462964
 ] 

Thomas D'Silva commented on PHOENIX-4623:
-

[~jamestaylor]

We have a bug in MetaDataUtil.getIndexPhysicalName. It returns the correct 
physical name when namespaces are enabled, e.g. "SCH:_IDX_TABLE", but when 
namespaces are not enabled and the table has a schema it returns 
"_IDX_SCH.TABLE". This is the table name used to create the view index table 
on existing clusters.

MetaDataUtil.getViewIndexName returns "SCH._IDX_TABLE", which is not the name 
that was used to create the view index table.

We can fix MetaDataUtil.getIndexPhysicalName to return "SCH._IDX_TABLE", but 
we would also have to rename the existing view index tables (by taking a 
snapshot).

Or we could just change MetaDataUtil.getViewIndexName to return 
"_IDX_SCH.TABLE". WDYT?

> Inconsistent physical view index name
> -
>
> Key: PHOENIX-4623
> URL: https://issues.apache.org/jira/browse/PHOENIX-4623
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Akshita Malhotra
>Priority: Major
>  Labels: easyfix
> Fix For: 4.15.0
>
>
> The physical view indexes are incorrectly named when table has a schema. For 
> instance, if a table name is "SCH.TABLE", during creation the physical index 
> table is named as "_IDX_SCH.TABLE" which doesn't look right. In case 
> namespaces are enabled, the physical index table is named as "SCH:_IDX_TABLE"
> The client APIs on the other hand such as 
> MetaDataUtil.getViewIndexName(String schemaName, String tableName) API to 
> retrieve the phyisical view index name returns "SCH._IDX_TABLE" which as per 
> convention returns the right name but functionally leads to wrong results as 
> this is not how the physical indexes are named during construction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3919) Add hbase-hadoop2-compat as compile time dependency

2018-05-03 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16462978#comment-16462978
 ] 

Josh Elser commented on PHOENIX-3919:
-

Strikes me as strange – I'm not sure what Phoenix would be directly depending 
on in that module...

> Add hbase-hadoop2-compat as compile time dependency
> ---
>
> Key: PHOENIX-3919
> URL: https://issues.apache.org/jira/browse/PHOENIX-3919
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3819.patch
>
>
> HBASE-17448 added hbase-hadoop2-compat as a required dependency for clients, 
> but it is currently a test only dependency in some Phoenix modules.
> Make it an explicit compile time dependency in those modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: last call for changes for 4.14

2018-05-03 Thread Josh Elser
Thanks! Adding these to my list. Will get them in for 4.15 since you don't see 
them as blockers :)


On 5/2/18 7:42 PM, James Taylor wrote:

PHOENIX-4298 was back ported, but PHOENIX-4303 and PHOENIX-4304 still need
to be. Not mandatory for 4.14 IMHO, but would be good to help ease the
burden of porting JIRAs between 4.x and 5.x branches.

On Wed, May 2, 2018 at 2:54 PM Josh Elser  wrote:


Sorry -- I'm about to be a bit ignorant.

I recall PHOENIX-4298 was for backporting some of the deprecation fixing
from 5.x into 4.14, but I don't remember if there were more. That one
did land, but maybe Rajeshbabu was working on more?

On 4/26/18 1:51 PM, James Taylor wrote:

Last call for getting changes into 4.14. Please let me know if you need to 
get anything in before we cut the RCs.

Thanks,
James


[jira] [Resolved] (PHOENIX-3176) Rows will be skipped which are having future timestamp in row_timestamp column

2018-05-03 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3176.
---
   Resolution: Fixed
 Assignee: James Taylor
Fix Version/s: (was: 4.15.0)
   4.12.0

> Rows will be skipped which are having future timestamp in row_timestamp column
> --
>
> Key: PHOENIX-3176
> URL: https://issues.apache.org/jira/browse/PHOENIX-3176
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Ankit Singhal
>Assignee: James Taylor
>Priority: Major
>  Labels: newbie
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3176.patch
>
>
> Rows will be skipped when row_timestamp have future timestamp
> {code}
> : jdbc:phoenix:localhost> CREATE TABLE historian.data (
> . . . . . . . . . . . . .> assetid unsigned_int not null,
> . . . . . . . . . . . . .> metricid unsigned_int not null,
> . . . . . . . . . . . . .> ts timestamp not null,
> . . . . . . . . . . . . .> val double
> . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts 
> row_timestamp))
> . . . . . . . . . . . . .> IMMUTABLE_ROWS=true;
> No rows affected (1.283 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2015-01-01',1.2);
> 1 row affected (0.047 seconds)
> 0: jdbc:phoenix:localhost> upsert into historian.data 
> values(1,2,'2018-01-01',1.2);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from historian.data;
> +--+---+--+--+
> | ASSETID  | METRICID  |TS| VAL  |
> +--+---+--+--+
> | 1| 2 | 2015-01-01 00:00:00.000  | 1.2  |
> +--+---+--+--+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:localhost> select count(*) from historian.data;
> +---+
> | COUNT(1)  |
> +---+
> | 1 |
> +---+
> 1 row selected (0.013 seconds)
> {code}
> Explain plan, where scan range is capped to compile time.
> {code}
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
> | ROW TIMESTAMP FILTER [0, 1470901929982)  |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | SERVER AGGREGATE INTO SINGLE ROW |
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4688) Add kerberos authentication to python-phoenixdb

2018-05-03 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463103#comment-16463103
 ] 

Josh Elser commented on PHOENIX-4688:
-

{quote}This is perhaps something we can name requests-kerberos-phoenix and will 
be installed into the specified runtime/virtualenv at install time.
{quote}
This sounds like the easiest route to me.

> Add kerberos authentication to python-phoenixdb
> ---
>
> Key: PHOENIX-4688
> URL: https://issues.apache.org/jira/browse/PHOENIX-4688
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Priority: Minor
>
> In its current state python-phoenixdb does not support kerberos 
> authentication. Using a modern python http library such as requests or 
> urllib, it would be simple (if not trivial) to add this support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-03 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-4724:
-

Assignee: Vincent Poon

> Efficient Equi-Depth histogram for streaming data
> -
>
> Key: PHOENIX-4724
> URL: https://issues.apache.org/jira/browse/PHOENIX-4724
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
>
> Equi-Depth histogram from 
> http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
> without the sliding window - we assume a single window over the entire data 
> set.
> Used to generate the bucket boundaries of a histogram where each bucket has 
> the same # of items.
> This is useful, for example, for pre-splitting an index table, by feeding in 
> data from the indexed column.
> Works on streaming data - the histogram is dynamically updated for each new 
> value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-03 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-4724:
-

 Summary: Efficient Equi-Depth histogram for streaming data
 Key: PHOENIX-4724
 URL: https://issues.apache.org/jira/browse/PHOENIX-4724
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Vincent Poon


Equi-Depth histogram from 
http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
without the sliding window - we assume a single window over the entire data set.

Used to generate the bucket boundaries of a histogram where each bucket has the 
same # of items.

This is useful, for example, for pre-splitting an index table, by feeding in 
data from the indexed column.

Works on streaming data - the histogram is dynamically updated for each new 
value.
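
For intuition, a minimal non-streaming sketch: with the whole data set in 
hand, equi-depth boundaries are just evenly spaced ranks in the sorted values. 
The attached patch implements the harder streaming variant from the paper; 
this sketch is not that algorithm, and all names here are illustrative.
{code:java}
import java.util.Arrays;

public class EquiDepthSketch {
    // Returns b-1 boundary values splitting the data into b buckets of
    // (approximately) equal counts; assumes b >= 2 and data.length >= b.
    static long[] boundaries(long[] data, int b) {
        long[] sorted = data.clone();
        Arrays.sort(sorted);
        long[] bounds = new long[b - 1];
        for (int i = 1; i < b; i++) {
            // Each bucket holds ~n/b items; the i-th boundary is the value
            // at cumulative rank i*n/b in the sorted order.
            bounds[i - 1] = sorted[(int) ((long) i * sorted.length / b)];
        }
        return bounds;
    }

    public static void main(String[] args) {
        long[] keys = {5, 1, 9, 3, 7, 2, 8, 4, 6, 0};
        // Three buckets -> two split points, usable e.g. as region split keys.
        System.out.println(Arrays.toString(boundaries(keys, 3))); // [3, 6]
    }
}
{code}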



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-03 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4724:
--
Attachment: PHOENIX-4724.v1.patch

> Efficient Equi-Depth histogram for streaming data
> -
>
> Key: PHOENIX-4724
> URL: https://issues.apache.org/jira/browse/PHOENIX-4724
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4724.v1.patch
>
>
> Equi-Depth histogram from 
> http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
> without the sliding window - we assume a single window over the entire data 
> set.
> Used to generate the bucket boundaries of a histogram where each bucket has 
> the same # of items.
> This is useful, for example, for pre-splitting an index table, by feeding in 
> data from the indexed column.
> Works on streaming data - the histogram is dynamically updated for each new 
> value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-03 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463240#comment-16463240
 ] 

Vincent Poon commented on PHOENIX-4724:
---

[~aertoria] check it out

> Efficient Equi-Depth histogram for streaming data
> -
>
> Key: PHOENIX-4724
> URL: https://issues.apache.org/jira/browse/PHOENIX-4724
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4724.v1.patch
>
>
> Equi-Depth histogram from 
> http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
> without the sliding window - we assume a single window over the entire data 
> set.
> Used to generate the bucket boundaries of a histogram where each bucket has 
> the same # of items.
> This is useful, for example, for pre-splitting an index table, by feeding in 
> data from the indexed column.
> Works on streaming data - the histogram is dynamically updated for each new 
> value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-03 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463382#comment-16463382
 ] 

Ethan Wang commented on PHOENIX-4724:
-

[~vincentpoon]

If I understand correctly, with this feature implemented, when you build an 
index table you will at the same time record some info into this histogram, so 
that in the future you can conveniently get the distribution info of the index 
table. Correct?

So do you store a histogram object for each index table, like a shadow object 
somewhere offline? Also, will there ever be a case where you need to mutate or 
remove an index from an existing index table?

Cool idea!

> Efficient Equi-Depth histogram for streaming data
> -
>
> Key: PHOENIX-4724
> URL: https://issues.apache.org/jira/browse/PHOENIX-4724
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4724.v1.patch
>
>
> Equi-Depth histogram from 
> http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
> without the sliding window - we assume a single window over the entire data 
> set.
> Used to generate the bucket boundaries of a histogram where each bucket has 
> the same # of items.
> This is useful, for example, for pre-splitting an index table, by feeding in 
> data from the indexed column.
> Works on streaming data - the histogram is dynamically updated for each new 
> value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4725) Hint part in a query does not take into account the "schema" which is passed in the JDBC connection string

2018-05-03 Thread Pulkit Bhardwaj (JIRA)
Pulkit Bhardwaj created PHOENIX-4725:


 Summary: Hint part in a query does not take into account the 
"schema" which is passed in the JDBC connection string
 Key: PHOENIX-4725
 URL: https://issues.apache.org/jira/browse/PHOENIX-4725
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Pulkit Bhardwaj


If I connect to HBase via Phoenix using a JDBC connection and specify the 
schema name in the connection, I can use table names without having to 
specify the schema name in the query.

e.g.
{code:java}
SELECT * from SCHEMA_NAME.TABLE_NAME{code}
can be written as
{code:java}
SELECT * from TABLE_NAME{code}
But let's say I want to pass a hint to use a particular index on the table:
{code:java}
SELECT /*+ INDEX(SCHEMA_NAME.TABLE_NAME IDX_TABLE) */ * from TABLE_NAME{code}
The above works, but if I remove the SCHEMA_NAME from inside the hint part, 
the query does not recognise the index. In other words, the below does not 
work:
{code:java}
SELECT /*+ INDEX(TABLE_NAME IDX_TABLE) */ * from TABLE_NAME{code}
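
A JDBC repro sketch of the report; the "schema" connection property name is 
an assumption about how the default schema is set, so adjust it to your setup.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class HintSchemaRepro {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("schema", "SCHEMA_NAME"); // assumed default-schema property
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement()) {
            // Works: the hint repeats the schema explicitly.
            stmt.executeQuery(
                "SELECT /*+ INDEX(SCHEMA_NAME.TABLE_NAME IDX_TABLE) */ * FROM TABLE_NAME");
            // Reported broken: the hint relies on the connection's schema,
            // and the index is not recognised.
            stmt.executeQuery(
                "SELECT /*+ INDEX(TABLE_NAME IDX_TABLE) */ * FROM TABLE_NAME");
        }
    }
}
{code}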



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)