[jira] [Updated] (PHOENIX-4845) Support using Row Value Constructors in OFFSET clause to support paging in tables where the sort order of PK columns varies

2018-10-01 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4845:

Description: 
RVCs along with the LIMIT clause are useful for efficiently paging through rows 
(see [http://phoenix.apache.org/paged.html]). 
However, if the sort order of the PK columns in a table varies, we cannot use 
RVCs. 

For example, if the PK of a table is (A DESC, B), we cannot use the following 
query to page through the data:
{code:java}
SELECT * FROM TABLE WHERE (A, B) > (?, ?) ORDER BY A DESC, B LIMIT 20
{code}
Since the rows are sorted by A descending and then by B, we need to change the 
comparison operator:
{code:java}
SELECT * FROM TABLE WHERE (A, B) < (?, ?) ORDER BY A DESC, B LIMIT 20
{code}
If we supported RVCs in the OFFSET clause, the offset could set the start row 
of the scan, and clients would not need logic to determine the comparison 
operator.
{code:java}
SELECT * FROM TABLE ORDER BY A DESC, B LIMIT 20 OFFSET (?,?)
{code}
We would only allow using the offset if the rows are ordered by the sort order 
of the PK columns.
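The operator-flipping burden on clients can be sketched in a small, hypothetical Python simulation (this is not Phoenix code; `SORT_ORDER`, `pk_compare`, and `page_after` are invented names). It orders rows the way a PK of (A DESC, B) would and shows the per-column comparison a client must implement today, which a server-side RVC OFFSET would subsume:

```python
from functools import cmp_to_key

# Hypothetical illustration: PK is (A DESC, B ASC), so table order honors
# a per-column sort direction. -1 = DESC, 1 = ASC, one entry per PK column.
SORT_ORDER = [-1, 1]

def pk_compare(row1, row2):
    """Compare two PK tuples honoring each column's declared sort order."""
    for v1, v2, order in zip(row1, row2, SORT_ORDER):
        if v1 != v2:
            return order * ((v1 > v2) - (v1 < v2))
    return 0

def page_after(rows, last_key, limit):
    """Return the next `limit` rows strictly after last_key in table order.

    This is the comparison logic clients must carry today; an RVC OFFSET
    would instead set the start row of the scan on the server."""
    ordered = sorted(rows, key=cmp_to_key(pk_compare))
    return [r for r in ordered if pk_compare(r, last_key) > 0][:limit]

rows = [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y'), (3, 'x')]
# Table order with (A DESC, B): (3,'x'), (2,'x'), (2,'y'), (1,'x'), (1,'y')
print(page_after(rows, (2, 'x'), 2))  # → [(2, 'y'), (1, 'x')]
```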

 

FYI [~jfernando_sfdc]

  was:
RVCs along with the LIMIT clause are useful for efficiently paging through rows 
(see [http://phoenix.apache.org/paged.html]). 
 However, if the sort order of the PK columns in a table varies, we cannot use 
RVCs. 

For example, if the PK of a table is (A DESC, B), we cannot use the following 
query to page through the data:
{code:java}
SELECT * FROM TABLE WHERE (A, B) < (?, ?) ORDER BY A DESC, B LIMIT 20
{code}
Since the rows are sorted by A descending and then by B, we need to use the 
following query:
{code:java}
SELECT * FROM TABLE WHERE (A < ? OR (A=? AND B>?))  ORDER BY A DESC, B LIMIT 20
{code}
If we supported RVCs in the OFFSET clause, the offset could set the start row 
of the scan, and clients would not have to generate a complicated query.
{code:java}
SELECT * FROM TABLE ORDER BY A DESC, B LIMIT 20 OFFSET (?,?)
{code}
We would only allow using the offset if the rows are ordered by the sort order 
of the PK columns.

 

FYI [~jfernando_sfdc]


> Support using Row Value Constructors in OFFSET clause to support paging in 
> tables where the sort order of PK columns varies
> ---
>
> Key: PHOENIX-4845
> URL: https://issues.apache.org/jira/browse/PHOENIX-4845
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Priority: Major
>  Labels: DESC, SFDC
>
> RVCs along with the LIMIT clause are useful for efficiently paging through 
> rows (see [http://phoenix.apache.org/paged.html]). 
>  However, if the sort order of the PK columns in a table varies, we cannot 
> use RVCs. 
> For example, if the PK of a table is (A DESC, B), we cannot use the following 
> query to page through the data:
> {code:java}
> SELECT * FROM TABLE WHERE (A, B) > (?, ?) ORDER BY A DESC, B LIMIT 20
> {code}
> Since the rows are sorted by A descending and then by B, we need to change 
> the comparison operator:
> {code:java}
> SELECT * FROM TABLE WHERE (A, B) < (?, ?) ORDER BY A DESC, B LIMIT 20
> {code}
> If we supported RVCs in the OFFSET clause, the offset could set the start 
> row of the scan, and clients would not need logic to determine the 
> comparison operator.
> {code:java}
> SELECT * FROM TABLE ORDER BY A DESC, B LIMIT 20 OFFSET (?,?)
> {code}
> We would only allow using the offset if the rows are ordered by the sort 
> order of the PK columns.
>  
> FYI [~jfernando_sfdc]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-01 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4941:
---
Attachment: PHOENIX-4941.patch

> Handle TableExistsException when wrapped under RemoteException for 
> SYSTEM.MUTEX table
> -
>
> Key: PHOENIX-4941
> URL: https://issues.apache.org/jira/browse/PHOENIX-4941
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4941.patch
>
>
> {code}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
>   at 
> com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
>   ... 4 more
> Caused by: org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:359)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:347)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallab

[jira] [Created] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-01 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4941:
--

 Summary: Handle TableExistsException when wrapped under 
RemoteException for SYSTEM.MUTEX table
 Key: PHOENIX-4941
 URL: https://issues.apache.org/jira/browse/PHOENIX-4941
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.15.0, 5.1.0


{code}
Caused by: java.sql.SQLException: org.apache.hadoop.hbase.TableExistsException: 
org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at 
com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
... 4 more
Caused by: org.apache.hadoop.hbase.TableExistsException: 
org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:359)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:347)
at 
org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3079)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3071)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.
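The nesting in the trace above (a TableExistsException wrapped inside a remote exception) is why a check against only the outermost exception type misses it. A hedged Python sketch of cause-chain unwrapping (illustrative only; Phoenix's fix is in Java, and `find_cause` and the exception class names here are invented stand-ins):

```python
class TableExistsError(Exception):
    """Stand-in for HBase's TableExistsException."""

class RemoteError(Exception):
    """Stand-in for a RemoteException wrapper around the real failure."""

def find_cause(exc, target_type):
    """Walk the exception cause chain looking for target_type.

    Mirrors the idea of unwrapping a remotely-wrapped exception rather
    than matching only the top-level type. Guards against cause cycles."""
    seen = set()
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))
        if isinstance(exc, target_type):
            return exc
        exc = exc.__cause__
    return None

inner = TableExistsError("SYSTEM.MUTEX")
outer = RemoteError("remote call failed")
outer.__cause__ = inner
print(find_cause(outer, TableExistsError))  # → the wrapped TableExistsError
```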

[jira] [Resolved] (PHOENIX-4875) Don't acquire a mutex while dropping a table and while creating a view

2018-10-01 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-4875.
-
   Resolution: Fixed
Fix Version/s: 5.1.0
   4.15.0

[~ckulkarni]
Thanks for the patch!

> Don't acquire a mutex while dropping a table and while creating a view
> --
>
> Key: PHOENIX-4875
> URL: https://issues.apache.org/jira/browse/PHOENIX-4875
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>
> Acquiring this mutex will slow down view creation and is not required.
> It was done to prevent a base table from being dropped while a view is 
> created at the same time. However, even if that happens, the user will get a 
> TableNotFoundException the next time the view is resolved. 





[jira] [Updated] (PHOENIX-4917) ClassCastException when projecting array elements in hash join

2018-10-01 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4917:

Priority: Blocker  (was: Major)

> ClassCastException when projecting array elements in hash join
> --
>
> Key: PHOENIX-4917
> URL: https://issues.apache.org/jira/browse/PHOENIX-4917
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1
>Reporter: Gerald Sangudi
>Assignee: Gerald Sangudi
>Priority: Blocker
> Fix For: 4.15.0, 4.14.1
>
> Attachments: PHOENIX-4917.patch, PHOENIX-4917.patch, 
> PHOENIX-4917.patch
>
>
> This bug was introduced in the fix for PHOENIX-4791.
> When projecting array elements in a hash join, we now generate both 
> ProjectedValueTuple and MultiKeyValueTuple. Before the fix for PHOENIX-4791, 
> the hash join only generated ProjectedValueTuple, and two lines of code 
> contained class casts that reflected this assumption. The fix is to handle 
> both ProjectedValueTuple and MultiKeyValueTuple while continuing to 
> propagate the array cell as in PHOENIX-4791.
>  
> The stack trace with the ClassCastException:
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
>  org.apache.hadoop.hbase.DoNotRetryIOException: 
> GENO_DOSE,,1537598769044.1a6cb8853b036c59e7515d8e876e28c5.: 
> org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to 
> org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:300)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.lang.ClassCastException: 
> org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to 
> org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.processResults(HashJoinRegionScanner.java:220)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:294)
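A minimal, hypothetical Python sketch of the dispatch the fix describes (Phoenix's actual code is Java and its real tuple classes carry much more state; the class and function names below are invented stand-ins): the scanner branches on the runtime tuple type instead of casting everything to the projected type.

```python
# Hypothetical sketch of the described fix; not Phoenix's actual Java code.
class Tuple:
    """Base type for rows flowing through the scanner."""

class ProjectedValueTuple(Tuple):
    """Stand-in for the projected tuple the hash join used to assume."""

class MultiKeyValueTuple(Tuple):
    """Stand-in for the second tuple type now produced for array elements."""

def process_result(t):
    # Before the fix: an unconditional cast to the projected type, which
    # fails (ClassCastException in Java) when handed a MultiKeyValueTuple.
    # After the fix: handle both tuple types explicitly.
    if isinstance(t, ProjectedValueTuple):
        return "projected"
    if isinstance(t, MultiKeyValueTuple):
        return "multi-key"
    raise TypeError(f"unexpected tuple type: {type(t).__name__}")

print(process_result(MultiKeyValueTuple()))  # → multi-key
```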





[jira] [Updated] (PHOENIX-4935) IndexTool should use empty catalog instead of null

2018-10-01 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-4935:
-
Attachment: PHOENIX-4935.patch

> IndexTool should use empty catalog instead of null
> --
>
> Key: PHOENIX-4935
> URL: https://issues.apache.org/jira/browse/PHOENIX-4935
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Geoffrey Jacoby
>Priority: Major
> Attachments: PHOENIX-4935.patch
>
>
> Same issue as PHOENIX-3907 but with the IndexTool





[jira] [Created] (PHOENIX-4940) IndexTool should be able to rebuild tenant-owned indexes

2018-10-01 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created PHOENIX-4940:


 Summary: IndexTool should be able to rebuild tenant-owned indexes
 Key: PHOENIX-4940
 URL: https://issues.apache.org/jira/browse/PHOENIX-4940
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0, 4.14.0
Reporter: Geoffrey Jacoby


IndexTool uses global connections to look up the indexes it is asked to 
rebuild, which means it cannot see indexes owned by tenant views. We should 
add an optional tenantId parameter that uses a tenant connection (and 
potentially our MapReduce framework's tenant connection support) to allow 
rebuilding those indexes as well. 





[jira] [Updated] (PHOENIX-4939) It takes over 13 seconds to create a local index on an empty table

2018-10-01 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4939:
---
Description: 
{{create local index local1 on test(v2);}}

No rows affected (13.216 seconds)

 

{{create index global1 on test(v2);}}

No rows affected (6.274 seconds)

  was:
{{create local index local1 on test(v2);}}

No rows affected (13.216 seconds)

 

{{create local global1 on test(v2);}}

No rows affected (6.274 seconds)


> It takes over 13 seconds to create a local index on an empty table
> --
>
> Key: PHOENIX-4939
> URL: https://issues.apache.org/jira/browse/PHOENIX-4939
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Major
>
> {{create local index local1 on test(v2);}}
> No rows affected (13.216 seconds)
>  
> {{create index global1 on test(v2);}}
> No rows affected (6.274 seconds)





[jira] [Created] (PHOENIX-4939) It takes over 13 seconds to create a local index on an empty table

2018-10-01 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-4939:
--

 Summary: It takes over 13 seconds to create a local index on an 
empty table
 Key: PHOENIX-4939
 URL: https://issues.apache.org/jira/browse/PHOENIX-4939
 Project: Phoenix
  Issue Type: Improvement
Reporter: Lars Hofhansl


{{create local index local1 on test(v2);}}

No rows affected (13.216 seconds)

 

{{create local global1 on test(v2);}}

No rows affected (6.274 seconds)





[jira] [Updated] (PHOENIX-4938) SELECT compilation failure when a local index is present

2018-10-01 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4938:
---
Description: 
{{create table test (pk integer primary key, v1 float, v2 float, v3 integer);}}
{{create local index local1 on test(v2);}}
Now
{{select count\(*) from test t1, test t2 where t1.v1 = t2.v1 and t2.v2 < 
0.001;}}
will throw
{code}
Error: ERROR 1012 (42M03): Table undefined. tableName=T2 (state=42M03,code=1012)
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=T2
at 
org.apache.phoenix.compile.FromCompiler$MultiTableColumnResolver.resolveTable(FromCompiler.java:869)
at 
org.apache.phoenix.compile.FromCompiler$ProjectedTableColumnResolver.resolveColumn(FromCompiler.java:1029)
at 
org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.resolveColumn(ProjectionCompiler.java:621)
at 
org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:408)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visit(ProjectionCompiler.java:628)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visit(ProjectionCompiler.java:585)
at 
org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:564)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:219)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:295)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:230)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:155)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:189)
at 
org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:111)
at 
org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:97)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:309)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
at sqlline.Commands.execute(Commands.java:822)
{code}

This will not happen when there is no local index defined.

  was:
{{create table test (pk integer primary key, v1 float, v2 float, v3 integer);}}
{{create local index local1 on test(v2);}}
Now
{{select count(*) from test t1, test t2 where t1.v1 = t2.v1 and t2.v2 < 0.001;}}
will throw
{code}
Error: ERROR 1012 (42M03): Table undefined. tableName=T2 (state=42M03,code=1012)
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=T2
at 
org.apache.phoenix.compile.FromCompiler$MultiTableColumnResolver.resolveTable(FromCompiler.java:869)
at 
org.apache.phoenix.compile.FromCompiler$ProjectedTableColumnResolver.resolveColumn(FromCompiler.java:1029)
at 
org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.resolveColumn(ProjectionCompiler.java:621)
at 
org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:408)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visit(ProjectionCompiler.java:628)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visit(ProjectionCompiler.java:585)
at 
org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:564)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:219)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:295)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:230)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:155)
at 
o

[jira] [Created] (PHOENIX-4938) SELECT compilation failure when a local index is present

2018-10-01 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-4938:
--

 Summary: SELECT compilation failure when a local index is present
 Key: PHOENIX-4938
 URL: https://issues.apache.org/jira/browse/PHOENIX-4938
 Project: Phoenix
  Issue Type: Improvement
Reporter: Lars Hofhansl


{{create table test (pk integer primary key, v1 float, v2 float, v3 integer);}}
{{create local index local1 on test(v2);}}
Now
{{select count(*) from test t1, test t2 where t1.v1 = t2.v1 and t2.v2 < 0.001;}}
will throw
{code}
Error: ERROR 1012 (42M03): Table undefined. tableName=T2 (state=42M03,code=1012)
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=T2
at 
org.apache.phoenix.compile.FromCompiler$MultiTableColumnResolver.resolveTable(FromCompiler.java:869)
at 
org.apache.phoenix.compile.FromCompiler$ProjectedTableColumnResolver.resolveColumn(FromCompiler.java:1029)
at 
org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.resolveColumn(ProjectionCompiler.java:621)
at 
org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:408)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visit(ProjectionCompiler.java:628)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visit(ProjectionCompiler.java:585)
at 
org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:564)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:219)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:295)
at 
org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:230)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:155)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:189)
at 
org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:111)
at 
org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:97)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:309)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
at sqlline.Commands.execute(Commands.java:822)
{code}

This will not happen when there is no local index defined.





[jira] [Assigned] (PHOENIX-4935) IndexTool should use empty catalog instead of null

2018-10-01 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-4935:


Assignee: Geoffrey Jacoby

> IndexTool should use empty catalog instead of null
> --
>
> Key: PHOENIX-4935
> URL: https://issues.apache.org/jira/browse/PHOENIX-4935
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Geoffrey Jacoby
>Priority: Major
>
> Same issue as PHOENIX-3907 but with the IndexTool





[jira] [Resolved] (PHOENIX-4933) DELETE FROM throws NPE when a local index is present

2018-10-01 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-4933.

   Resolution: Fixed
Fix Version/s: 5.1.0
   4.14.1
   4.15.0

Committed to master, 4.x-HBase-1.4, 4.x-HBase-1.3, and 4.x-HBase-1.2


> DELETE FROM throws NPE when a local index is present
> 
>
> Key: PHOENIX-4933
> URL: https://issues.apache.org/jira/browse/PHOENIX-4933
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 4.14.1, 5.1.0
>
> Attachments: PHOENIX-4933-4.x-HBase-1.4.patch, PHOENIX-4933-test.txt
>
>
> Just ran into this: when a local index is present, DELETE FROM  throws 
> the following NPE:
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST,,1537573236513.ef4b34358717193907bddb3a5bec3b26.: null
>  at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
>  at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
>  at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:195)
>  at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:557)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:239)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:287)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3130)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36359)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2369)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  Caused by: java.lang.NullPointerException
>  at 
> org.apache.phoenix.execute.TupleProjector.projectResults(TupleProjector.java:283)
>  at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:185)
>  ... 10 more (state=08000,code=101)
> It fails here:
> {{long maxTS = tuple.getValue(0).getTimestamp();}}, because 
> {{tuple.getValue(0)}} returns null.
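The failing line quoted above dereferences the first cell without a null check. A hypothetical Python sketch of the shape of the problem (Phoenix's code is Java; `Cell`, `Tuple`, and `max_ts` here are invented stand-ins):

```python
# Illustrative only: mirrors the failing pattern, not Phoenix's actual code.
class Cell:
    def __init__(self, ts):
        self.ts = ts
    def get_timestamp(self):
        return self.ts

class Tuple:
    def __init__(self, cells):
        self.cells = cells
    def get_value(self, i):
        # May return None (Java: null) when the tuple carries no cell,
        # which is exactly the case the NPE above hit.
        return self.cells[i] if i < len(self.cells) else None

def max_ts(tuple_):
    cell = tuple_.get_value(0)
    if cell is None:
        # Guard instead of dereferencing: the unguarded Java equivalent
        # (tuple.getValue(0).getTimestamp()) raises a NullPointerException.
        raise ValueError("tuple has no cell at index 0")
    return cell.get_timestamp()

print(max_ts(Tuple([Cell(42)])))  # → 42
```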





Re: Phoenix developer Meetup

2018-10-01 Thread la...@apache.org
10 people signed up so far. This is a good chance to make your voice heard, 
give input, and help point the project in the right direction going forward.
-- Lars

On Friday, September 28, 2018, 10:39:41 AM PDT, la...@apache.org wrote:

Hi all,
I'm planning to put together a Phoenix developer meetup at the Salesforce 
office (with video conference for those who cannot attend in person) in the 
next few weeks.
If you're interested, please put your name in this spreadsheet:
https://docs.google.com/spreadsheets/d/1j4QSk53B0ZIl_qq1XX3oB2f-Tyqp93ilJYZKG84_Amg/edit?usp=sharing

This is a chance to get all those who contribute to Phoenix together. There 
will also be food. :) I will leave the spreadsheet up for one week - until 
Friday, October 5th.

Possible agenda:
- Round-table
- Status of 4.x and master branches
- Current challenges (public cloud? performance? community? SQL coverage? scale?)
- Future direction. Where do we want Phoenix to be in 1 year, 2 years, 5 years?
- more...

Thanks.
-- Lars

[jira] [Resolved] (PHOENIX-4827) Modify TAL to use Table instead of HTableInterface

2018-10-01 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4827.
---
Resolution: Fixed

> Modify TAL to use Table instead of HTableInterface
> --
>
> Key: PHOENIX-4827
> URL: https://issues.apache.org/jira/browse/PHOENIX-4827
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>
> Once both Tephra and Omid use Table instead of HTableInterface, the TAL 
> methods should be updated as well. Support for this in Tephra was included in 
> the recent 0.15.0 release, and for Omid in OMID-107.


