[jira] [Created] (PHOENIX-7351) unnecessary ERROR logging in RenewLeaseTask

2024-07-04 Thread Jan Van Besien (Jira)
Jan Van Besien created PHOENIX-7351:
---

 Summary: unnecessary ERROR logging in RenewLeaseTask
 Key: PHOENIX-7351
 URL: https://issues.apache.org/jira/browse/PHOENIX-7351
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.2.0
Reporter: Jan Van Besien


In `ConnectionQueryServicesImpl.RenewLeaseTask`, there is ERROR logging when 
the thread is interrupted, saying "Thread interrupted when renewing lease.". 
Interruption is expected here, because `ConnectionQueryServicesImpl#close` 
calls `renewLeaseExecutor.shutdownNow()`. In other words, this should not be 
logged at ERROR level.

We have various client-side processes (CLI tools that run for a while and then 
exit) where this error is regularly observed, causing users to think that 
something is wrong.
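
A minimal sketch of the direction I have in mind (illustrative only, not the 
actual RenewLeaseTask code; the sleep stands in for the real renewal work): 
log the interruption at DEBUG and restore the interrupt status.

{code}
// Illustrative sketch only, not the actual RenewLeaseTask implementation:
// interruption is the expected result of renewLeaseExecutor.shutdownNow()
// during close(), so it is reported at DEBUG instead of ERROR.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RenewLeaseTaskSketch implements Runnable {
    private static final Logger LOGGER = LoggerFactory.getLogger(RenewLeaseTaskSketch.class);

    @Override
    public void run() {
        try {
            Thread.sleep(1000); // stands in for the real lease renewal work
        } catch (InterruptedException e) {
            LOGGER.debug("Thread interrupted when renewing lease, likely because the connection is closing.", e);
            Thread.currentThread().interrupt(); // restore the interrupt status
        }
    }
}
{code}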



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-2680) stats table timestamp incorrectly used as table timestamp

2016-02-12 Thread Jan Van Besien (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144288#comment-15144288
 ] 

Jan Van Besien commented on PHOENIX-2680:
-

[~jamestaylor] - thanks for the suggestions. We are relying heavily on client 
controlled timestamps. Dropping the table at the latest timestamp would break 
some use cases (e.g. we want to be able to drop at timestamp x and recreate at 
y whereby y > x but both x and y are in the past). Disabling statistics is also 
not something I really want to do, so I focussed on your first suggestion.

The phoenix.stats.useCurrentTime setting seems to be what I need, and mostly 
works as expected. However:

* it only seems to be used for background statistics updates (during 
compaction). When issuing an UPDATE STATISTICS statement (see the sketch after 
this list), it seems that the client timestamp is used for the data written to 
the STATS table. I still think it is a bug that in such a situation a client 
gets a NewerTableAlreadyExistsException while there actually is no newer 
table; only the STATS table contains newer data about the table.
* it seems that the metadata cache is not automatically updated when statistics 
are updated. Hence there are situations where my drop table might or might not 
work, depending on whether the metadata cache is up to date.
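
For reference, this is roughly how I trigger the manual statistics update (a 
sketch only; table name and ZooKeeper quorum are placeholders):

{code}
// Sketch of the manual UPDATE STATISTICS case from the first bullet above.
// Table name and ZooKeeper quorum are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class UpdateStatisticsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // client-controlled timestamp, as we use throughout our application
        props.setProperty("CurrentSCN", Long.toString(System.currentTimeMillis() - 1000L));
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost", props)) {
            // the stats rows written here seem to carry the client timestamp,
            // unlike the compaction-driven updates governed by phoenix.stats.useCurrentTime
            conn.createStatement().execute("UPDATE STATISTICS MY_TABLE");
        }
    }
}
{code}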

> stats table timestamp incorrectly used as table timestamp
> -
>
> Key: PHOENIX-2680
> URL: https://issues.apache.org/jira/browse/PHOENIX-2680
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>    Reporter: Jan Van Besien
> Attachments: PHOENIX-2680.patch
>
>
> I think there is a problem introduced by PHOENIX-1390 related to table 
> timestamps.
> We run into a situation where we are unable to drop a table due to a 
> NewerTableAlreadyExistsException. This table was created at a certain 
> timestamp in the past (say 2 years ago) and we try to drop it at a more 
> recent timestamp in the past (say 1 year ago).
> During the drop, the client timestamp (1 year ago) is compared with the table 
> timestamp. The table timestamp should be 2 years ago, but due to this 
> statement on line 856 in MetaDataEndpointImpl:
> {code}
> timeStamp = Math.max(timeStamp, stats.getTimestamp())
> {code}
> the timestamp of the table is set to the timestamp of the stats table, which 
> happens to be something much more recent.
> I think this is wrong?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2680) stats table timestamp incorrectly used as table timestamp

2016-02-11 Thread Jan Van Besien (JIRA)
Jan Van Besien created PHOENIX-2680:
---

 Summary: stats table timestamp incorrectly used as table timestamp
 Key: PHOENIX-2680
 URL: https://issues.apache.org/jira/browse/PHOENIX-2680
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.6.0
Reporter: Jan Van Besien


I think there is a problem introduced by PHOENIX-1390 related to table 
timestamps.

We run into a situation where we are unable to drop a table due to a 
NewerTableAlreadyExistsException. This table was created at a certain timestamp 
in the past (say 2 years ago) and we try to drop it at a more recent timestamp 
in the past (say 1 year ago).

During the drop, the client timestamp (1 year ago) is compared with the table 
timestamp. The table timestamp should be 2 years ago, but due to this statement 
on line 856 in MetaDataEndpointImpl:

{code}
timeStamp = Math.max(timeStamp, stats.getTimestamp())
{code}

the timestamp of the table is set to the timestamp of the stats table, which 
happens to be something much more recent.

I think this is wrong?
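
To make the scenario concrete, a rough reproduction sketch (table name and 
ZooKeeper quorum are placeholders; the statistics collection in between is 
implied):

{code}
// Rough reproduction sketch of the scenario described above.
// Table name and ZooKeeper quorum are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class DropAtPastTimestampSketch {
    private static Connection connectAt(long timestamp) throws Exception {
        Properties props = new Properties();
        props.setProperty("CurrentSCN", Long.toString(timestamp)); // client-controlled timestamp
        return DriverManager.getConnection("jdbc:phoenix:zkhost", props);
    }

    public static void main(String[] args) throws Exception {
        long now = System.currentTimeMillis();
        long twoYearsAgo = now - 2L * 365 * 24 * 3600 * 1000;
        long oneYearAgo = now - 1L * 365 * 24 * 3600 * 1000;

        try (Connection conn = connectAt(twoYearsAgo)) {
            conn.createStatement().execute("CREATE TABLE T (ID VARCHAR PRIMARY KEY, V VARCHAR)");
        }

        // ... time passes, statistics are collected with a much more recent timestamp ...

        try (Connection conn = connectAt(oneYearAgo)) {
            // fails with NewerTableAlreadyExistsException, because the table timestamp
            // returned by MetaDataEndpointImpl has been bumped to the stats timestamp
            conn.createStatement().execute("DROP TABLE T");
        }
    }
}
{code}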



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2680) stats table timestamp incorrectly used as table timestamp

2016-02-11 Thread Jan Van Besien (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Van Besien updated PHOENIX-2680:

Attachment: PHOENIX-2680.patch

Patch against master. Essentially I just removed the line which I think is 
wrong (and added an integration test). Might I have broken something else that 
depended on the table timestamp being set to the timestamp from the stats 
table?

> stats table timestamp incorrectly used as table timestamp
> -
>
> Key: PHOENIX-2680
> URL: https://issues.apache.org/jira/browse/PHOENIX-2680
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>        Reporter: Jan Van Besien
> Attachments: PHOENIX-2680.patch
>
>
> I think there is a problem introduced by PHOENIX-1390 related to table 
> timestamps.
> We run into a situation where we are unable to drop a table due to a 
> NewerTableAlreadyExistsException. This table was created at a certain 
> timestamp in the past (say 2 years ago) and we try to drop it at a more 
> recent timestamp in the past (say 1 year ago).
> During the drop, the client timestamp (1 year ago) is compared with the table 
> timestamp. The table timestamp should be 2 years ago, but due to this 
> statement on line 856 in MetaDataEndpointImpl:
> {code}
> timeStamp = Math.max(timeStamp, stats.getTimestamp())
> {code}
> the timestamp of the table is set to the timestamp of the stats table, which 
> happens to be something much more recent.
> I think this is wrong?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2664) Upgrade from 4.1.0 to 4.6.0 fails

2016-02-08 Thread Jan Van Besien (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Van Besien updated PHOENIX-2664:

Attachment: PHOENIX-2664.patch

Patch against master.

Apart from the UpgradeIT I couldn't find any tests related to this upgrade 
scenario. Is UpgradeIT the place to add one?

> Upgrade from 4.1.0 to 4.6.0 fails
> -
>
> Key: PHOENIX-2664
> URL: https://issues.apache.org/jira/browse/PHOENIX-2664
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Jan Van Besien
> Attachments: PHOENIX-2664.patch
>
>
> Upgrade from 4.1.0 to 4.6.0 fails with the following exception.
> {code}
> org.apache.phoenix.query.ConnectionQueryServicesImpl: Add column failed due 
> to:org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> Error: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 49. 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1416)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:1866)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnsIfNotExists(ConnectionQueryServicesImpl.java:1892)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:179)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1978)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:804)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: NoViableAltException(28@[])
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_name(PhoenixSQLParser.java:2397)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3707)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3631)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:3337)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 27 more
> {code}
> Looking at the code in ConnectionQueryServicesImpl#init, it seems there are 
> multiple places where string concatenation on the columnsToAdd string happens 
> without checking what the current content of that string is, resulting in a 
> string that starts with a comma.
> I should add that our 4.1.0 version actually has some custom patches, but the 
> code seems to suggest it will fail with a vanilla 4.1.0 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2664) Upgrade from 4.1.0 to 4.6.0 fails

2016-02-08 Thread Jan Van Besien (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Van Besien updated PHOENIX-2664:

Affects Version/s: 4.6.0

> Upgrade from 4.1.0 to 4.6.0 fails
> -
>
> Key: PHOENIX-2664
> URL: https://issues.apache.org/jira/browse/PHOENIX-2664
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>        Reporter: Jan Van Besien
> Attachments: PHOENIX-2664.patch
>
>
> Upgrade from 4.1.0 to 4.6.0 fails with the following exception.
> {code}
> org.apache.phoenix.query.ConnectionQueryServicesImpl: Add column failed due 
> to:org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> Error: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 49. 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1416)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:1866)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnsIfNotExists(ConnectionQueryServicesImpl.java:1892)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:179)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1978)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:804)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: NoViableAltException(28@[])
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_name(PhoenixSQLParser.java:2397)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3707)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3631)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:3337)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 27 more
> {code}
> Looking at the code in ConnectionQueryServicesImpl#init, it seems there are 
> multiple places where string concatenation on the columnsToAdd string happens 
> without checking what the current content of that string is, resulting in a 
> string that starts with a comma.
> I should add that our 4.1.0 version actually has some custom patches, but the 
> code seems to suggest it will fail with a vanilla 4.1.0 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2664) Upgrade from 4.1.0 to 4.6.0 fails

2016-02-08 Thread Jan Van Besien (JIRA)
Jan Van Besien created PHOENIX-2664:
---

 Summary: Upgrade from 4.1.0 to 4.6.0 fails
 Key: PHOENIX-2664
 URL: https://issues.apache.org/jira/browse/PHOENIX-2664
 Project: Phoenix
  Issue Type: Bug
Reporter: Jan Van Besien


Upgrade from 4.1.0 to 4.6.0 fails with the following exception.

{code}
org.apache.phoenix.query.ConnectionQueryServicesImpl: Add column failed due 
to:org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
Syntax error. Encountered "," at line 1, column 49.
Error: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 49. 
(state=42P00,code=601)
org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax 
error. Encountered "," at line 1, column 49.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1285)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1366)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1416)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:1866)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnsIfNotExists(ConnectionQueryServicesImpl.java:1892)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:179)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1978)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:804)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: NoViableAltException(28@[])
at 
org.apache.phoenix.parse.PhoenixSQLParser.column_name(PhoenixSQLParser.java:2397)
at 
org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3707)
at 
org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3631)
at 
org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:3337)
at 
org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 27 more
{code}

Looking at the code in ConnectionQueryServicesImpl#init, it seems there are 
multiple places where string concatenation on the columnsToAdd string happens 
without checking what the current content of that string is, resulting in a 
string that starts with a comma.

I should add that our 4.1.0 version actually has some custom patches, but the 
code seems to suggest it will fail with a vanilla 4.1.0 as well.
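
To illustrate the pattern (a simplified sketch, not the actual 
ConnectionQueryServicesImpl code; column names are made up), a guarded append 
avoids the leading comma:

{code}
// Simplified sketch, not the actual ConnectionQueryServicesImpl code.
// If the first conditional append is skipped, a naive columnsToAdd += ", " + col
// produces a column list that starts with a comma, so the generated
// "ALTER TABLE SYSTEM.CATALOG ADD IF NOT EXISTS , ..." statement fails to parse.
public class ColumnsToAddSketch {
    // guarded append: only add a separator when there is already content
    static String addColumn(String columnsToAddSoFar, String columnDef) {
        return columnsToAddSoFar.isEmpty() ? columnDef : columnsToAddSoFar + ", " + columnDef;
    }

    public static void main(String[] args) {
        String columnsToAdd = "";
        boolean upgradingFromOlderVersion = false; // pretend the first set of columns is not needed
        if (upgradingFromOlderVersion) {
            columnsToAdd = addColumn(columnsToAdd, "SOME_OLD_COLUMN VARCHAR");
        }
        columnsToAdd = addColumn(columnsToAdd, "SOME_NEW_COLUMN BIGINT");
        System.out.println("ALTER TABLE SYSTEM.CATALOG ADD IF NOT EXISTS " + columnsToAdd);
    }
}
{code}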



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


phoenix query server connection properties

2015-10-01 Thread Jan Van Besien
Hi,

We are working on a calcite/avatica based "thin" JDBC driver very
similar to what Phoenix has done for its QueryServer, and I am
looking for some feedback/options.

Avatica in its current state doesn't have an RPC call for "create
connection". As a consequence, connection properties (i.e. the
Properties instance passed through the
DriverManager.getConnection(url, props)) are currently not RPC-ed from
the client to the server.

For Phoenix, this means properties such as TenantId, CurrentSCN etc do
not work with the thin driver. I saw this question being asked in
PHOENIX-1824, so I am not sure whether you were aware of this problem.
I've tested it with the phoenix sandbox on master with a multi-tenant
table to be sure.
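
To illustrate with a small example (thin driver URL format as I
understand it, host is a placeholder): these properties silently never
reach the server, whereas they work fine with the fat driver.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ThinDriverPropertiesExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("TenantId", "tenant1");
        props.setProperty("CurrentSCN", "1443650000000");
        // with the thin driver there is currently no RPC that carries
        // these properties to the server, so they are effectively ignored
        Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:thin:url=http://localhost:8765", props);
        conn.close();
    }
}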

There currently is a discussion ongoing on the calcite dev mailing
list on this topic as well (with subject "avatica jdbc URL connection
properties").

Our understanding of the problem is that we need to extend the RPC
with a "create connection", but this doesn't seem to be
straightforward in the current Avatica design.

It would be interesting to hear your thoughts on this subject.

Thanks
Jan


[jira] [Updated] (PHOENIX-1587) Error deserializing empty array (which represents null)

2015-01-15 Thread Jan Van Besien (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Van Besien updated PHOENIX-1587:

Attachment: PHOENIX-1587.patch

patch for the issue on the master branch.

 Error deserializing empty array (which represents null)
 ---

 Key: PHOENIX-1587
 URL: https://issues.apache.org/jira/browse/PHOENIX-1587
 Project: Phoenix
  Issue Type: Bug
Reporter: Jan Van Besien
Assignee: Jan Van Besien
 Attachments: PHOENIX-1587.patch


 Serializing null arrays results in an empty byte[]. The deserialization logic 
 in PArrayDataType#createPhoenixArray has a check to return null if the 
 provided byte[] is empty; however, it checks the length of the byte[] rather 
 than the provided length parameter. Note that the provided byte[] is 
 typically the whole underlying row; it has to be interpreted only from offset 
 to length.
 Therefore, the check has to be replaced with a check on length rather than on 
 bytes.length.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1587) Error deserializing empty array (which represents null)

2015-01-15 Thread Jan Van Besien (JIRA)
Jan Van Besien created PHOENIX-1587:
---

 Summary: Error deserializing empty array (which represents null)
 Key: PHOENIX-1587
 URL: https://issues.apache.org/jira/browse/PHOENIX-1587
 Project: Phoenix
  Issue Type: Bug
Reporter: Jan Van Besien
Assignee: Jan Van Besien


Serializing null arrays results in an empty byte[]. The deserialization logic 
in PArrayDataType#createPhoenixArray has a check to return null if the provided 
byte[] is empty; however, it checks the length of the byte[] rather than the 
provided length parameter. Note that the provided byte[] is typically the whole 
underlying row; it has to be interpreted only from offset to length.

Therefore, the check has to be replaced with a check on length rather than on 
bytes.length.
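
A minimal sketch of the corrected guard (simplified; the real method takes 
more parameters):

{code}
// Simplified sketch of the guard in createPhoenixArray (the real method takes
// more parameters). The byte[] is typically the whole underlying row, so the
// null check must look at the length parameter, not at bytes.length.
class PArrayNullCheckSketch {
    static Object createPhoenixArray(byte[] bytes, int offset, int length) {
        if (bytes == null || length == 0) {
            return null; // an empty slice represents a null array
        }
        // ... interpret only bytes[offset .. offset + length) from here on ...
        throw new UnsupportedOperationException("deserialization elided in this sketch");
    }
}
{code}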





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Potential bug with NULL array

2015-01-15 Thread Jan Van Besien
I created PHOENIX-1587 and added a patch.

On Thu, Jan 15, 2015 at 10:39 AM, Jan Van Besien janvanbes...@gmail.com wrote:
 Hi,

 I think there is a bug in PArrayDataType, in the context of store nulls.

 The first line of the createPhoenixArray method tackles the case where
 an empty byte array is interpreted as null. However, I think the check
 should be

 if (bytes == null || length == 0)

 rather than

 if (bytes == null || bytes.length == 0)

 Now the method continues when the bytes.length != 0 but length == 0,
 resulting in failures further on (invalid noOfElements being
 calculated or IllegalArgumentException in Buffer.position())

 Can anyone confirm this is a bug, or am I missing something?

 thanks
 Jan


Potential bug with NULL array

2015-01-15 Thread Jan Van Besien
Hi,

I think there is a bug in PArrayDataType, in the context of store nulls.

The first line of the createPhoenixArray method tackles the case where
an empty byte array is interpreted as null. However, I think the check
should be

if (bytes == null || length == 0)

rather than

if (bytes == null || bytes.length == 0)

Now the method continues when the bytes.length != 0 but length == 0,
resulting in failures further on (invalid noOfElements being
calculated or IllegalArgumentException in Buffer.position())

Can anyone confirm this is a bug, or am I missing something?

thanks
Jan


[jira] [Updated] (PHOENIX-1214) SYSTEM.CATALOG cannot be created when first connection to cluster is tenant-specific

2014-10-15 Thread Jan Van Besien (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Van Besien updated PHOENIX-1214:

Attachment: 0001-PHOENIX-1214-support-tenant-specific-initial-connect.patch

I looked into this issue in some more detail, and found that it is actually 
almost a duplicate of PHOENIX-831.

There are two ways to create a tenant specific connection: either you provide 
the TenantId via the Properties, or you provide the TenantId in the URL. 
PHOENIX-831 fixes the problem when the TenantId is in the Properties by 
removing it just before creating the meta connection, which is used to create 
the meta tables such as SYSTEM.CATALOG. However, if the TenantId is part of the 
URL, it remains in the URL when creating the meta connection.

Attached is a patch which fixes the issue similarly to the fix in PHOENIX-831: 
it removes the TenantId from the URL for the meta connection.
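
For clarity, the two variants look like this (ZooKeeper quorum and tenant id 
are placeholders):

{code}
// The two ways of opening a tenant specific connection referred to above.
// ZooKeeper quorum and tenant id are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TenantConnectionSketch {
    public static void main(String[] args) throws Exception {
        // 1) TenantId via the Properties: handled by the PHOENIX-831 fix
        Properties props = new Properties();
        props.setProperty("TenantId", "tenant1");
        Connection viaProps = DriverManager.getConnection("jdbc:phoenix:zkhost", props);
        viaProps.close();

        // 2) TenantId in the URL: the variant that still leaked into the
        //    meta connection before the attached patch
        Connection viaUrl = DriverManager.getConnection("jdbc:phoenix:zkhost;TenantId=tenant1");
        viaUrl.close();
    }
}
{code}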

 SYSTEM.CATALOG cannot be created when first connection to cluster is 
 tenant-specific
 

 Key: PHOENIX-1214
 URL: https://issues.apache.org/jira/browse/PHOENIX-1214
 Project: Phoenix
  Issue Type: Bug
Reporter: Eli Levine
Assignee: Eli Levine
 Attachments: 
 0001-PHOENIX-1214-support-tenant-specific-initial-connect.patch, 
 csvloader_multitenancy_test.patch


 Reported by [~janvanbes...@ngdata.com]:
 The problem seems to be that it is impossible to create a tenant
 specific connection if the same driver instance hasn't previously been
 used to create a global connection.
 To reproduce:
 - connect to a running hbase with a non-tenant specific connection
 (using sqlline or squirrel or whatever you want)
 - create a multitenant table
 - close the connection and make a new tenant specific connection from
 within a new JVM (otherwise the driver instance is reused). When using
 squirrel, this implies restarting the app
 This fails with
 {code}
 Caused by: java.sql.SQLException: ERROR 1030 (42Y89): Cannot create
 table for tenant-specific connection tableName=SYSTEM.CATALOG
 at 
 org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:309)
 at 
 org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
 at 
 org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:873)
 at 
 org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:422)
 at 
 org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:246)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:236)
 at 
 org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:235)
 at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:935)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1462)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1428)
 at 
 org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
 at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1428)
 at 
 org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:131)
 at 
 org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:112)
 at 
 net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
 at 
 net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:167)
 ... 7 more
 {code}
 When reusing the driver instance, this works. I think it has to do
 with the logic around the check for initialized in
 ConnectionQueryServicesImpl but I didn't dig any further.
 thanks
 Jan



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1204) Unclear multi tenancy docs

2014-08-26 Thread Jan Van Besien (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14110395#comment-14110395
 ] 

Jan Van Besien commented on PHOENIX-1204:
-

Yes, this is certainly an improvement, because the document no longer starts 
talking about views straight away.

However, in my head (let me know if I got this wrong) there are two equal ways 
to use this:
- the no views option: create tenant specific connections and work with the 
underlying table through these tenant specific connections
- the views option: create views and work through the views

The documentation still talks about the views as being *the* way to do it. 
Maybe it should be changed to say that if, next to data isolation, you also 
want feature x and y, you can create tenant specific views, whereby feature x 
and y are the extra capabilities you get by using tenant specific views over 
the no views approach. I am not 100% sure myself what these features would be; 
probably schema isolation next to data isolation, or something like that?

Also, in the section about Tenant-specific tables it says:

A tenant specific connection may only query:
- their own schema, which is to say it only sees tenant-specific views that 
were created by that tenant.
- non multi-tenant global tables, that is tables created with a regular 
connection without the MULTI_TENANT=TRUE declaration.

Is that actually correct? With a tenant specific connection it is perfectly 
possible to query multi-tenant tables without having to create tenant-specific 
views.
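
To make the no views option concrete, a small example (table name, tenant id 
and ZooKeeper quorum are made up):

{code}
// Small example of the "no views" option; names are made up.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

public class NoViewsMultiTenancyExample {
    public static void main(String[] args) throws Exception {
        // global (non tenant specific) connection: create the multi-tenant table
        try (Connection global = DriverManager.getConnection("jdbc:phoenix:zkhost")) {
            global.createStatement().execute(
                "CREATE TABLE EVENTS (TENANT_ID VARCHAR NOT NULL, ID VARCHAR NOT NULL, MSG VARCHAR "
                + "CONSTRAINT PK PRIMARY KEY (TENANT_ID, ID)) MULTI_TENANT=true");
        }

        // tenant specific connection: no view needed, the TENANT_ID column is
        // filled in and filtered on transparently
        Properties props = new Properties();
        props.setProperty("TenantId", "tenant1");
        try (Connection tenant = DriverManager.getConnection("jdbc:phoenix:zkhost", props)) {
            tenant.createStatement().executeUpdate("UPSERT INTO EVENTS (ID, MSG) VALUES ('1', 'hello')");
            tenant.commit();
            try (ResultSet rs = tenant.createStatement().executeQuery("SELECT ID, MSG FROM EVENTS")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
{code}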


 Unclear multi tenancy docs
 --

 Key: PHOENIX-1204
 URL: https://issues.apache.org/jira/browse/PHOENIX-1204
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.0.0
Reporter: Jan Van Besien
Assignee: Eli Levine
 Attachments: mt.diff


 From the multi tenancy docs (http://phoenix.apache.org/multi-tenancy.html) I 
 had the impression that it is mandatory to create tenant specific views.
 The basic use case where you simply create a tenant specific connection on a 
 table where multi tenancy is enabled, and you transparently only see the data 
 specific to your tenant (leaving out the tenantid column), is missing in the 
 documentation.
 I think it would be useful to start the documentation with such an example 
 before talking about tenant specific views.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-1204) Unclear multi tenancy docs

2014-08-26 Thread Jan Van Besien (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14111897#comment-14111897
 ] 

Jan Van Besien commented on PHOENIX-1204:
-

Perfect, no more confusion and both options well explained. Thanks!

 Unclear multi tenancy docs
 --

 Key: PHOENIX-1204
 URL: https://issues.apache.org/jira/browse/PHOENIX-1204
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.0.0
Reporter: Jan Van Besien
Assignee: Eli Levine
 Attachments: multi-tenancy.html


 From the multi tenancy docs (http://phoenix.apache.org/multi-tenancy.html) I 
 had the impression that it is mandatory to create tenant specific views.
 The basic use case where you simply create a tenant specific connection on a 
 table where multi tenancy is enabled, and you transparently only see the data 
 specific to your tenant (leaving out the tenantid column), is missing in the 
 documentation.
 I think it would be useful to start the documentation with such an example 
 before talking about tenant specific views.



--
This message was sent by Atlassian JIRA
(v6.2#6252)