[jira] [Updated] (PHOENIX-6898) Index tests fail with HBase 2.5

2023-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6898:
-
Affects Version/s: 5.2.0
  Description: 
A lot of indexing tests fail with HBase 2.5.

We haven't had a successful run on master with HBase 2.5 since last August.

The last successful run on 5.1 was on Jan 26.

[https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.1/]
[https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/]

  was:
The last successful run was on Jan 26.
We have the usual flakies for the other HBase profiles, but 5.1 is consistently 
failing ~100 tests.

https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.1/

  Summary: Index tests fail with HBase 2.5  (was: Lots of failures 
on 5.1 with HBase 2.5)

> Index tests fail with HBase 2.5
> ---
>
> Key: PHOENIX-6898
> URL: https://issues.apache.org/jira/browse/PHOENIX-6898
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Priority: Blocker
>
> A lot of indexing tests fail with HBase 2.5.
> We haven't had a successful run on master with HBase 2.5 since last August.
> The last successful run on 5.1 was on Jan 26.
> [https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.1/]
> [https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6898) Lots of failures on 5.1 with HBase 2.5

2023-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6898:
-
Description: 
The last successful run was on Jan 26.
We have the usual flakies for the other HBase profiles, but 5.1 is consistently 
failing ~100 tests.

https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.1/

  was:
The last successful run was on Jan 26.
We have the usual flakies for the other HBase profiles, but 5.1 is consistently 
failing ~100 tests.


> Lots of failures on 5.1 with HBase 2.5
> --
>
> Key: PHOENIX-6898
> URL: https://issues.apache.org/jira/browse/PHOENIX-6898
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.4
>Reporter: Istvan Toth
>Priority: Blocker
>
> The last successful run was on Jan 26.
> We have the usual flakies for the other HBase profiles, but 5.1 is 
> consistently failing ~100 tests.
> https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.1/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6898) Lots of failures on 5.1 with HBase 2.5

2023-03-07 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6898:


 Summary: Lots of failures on 5.1 with HBase 2.5
 Key: PHOENIX-6898
 URL: https://issues.apache.org/jira/browse/PHOENIX-6898
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.4
Reporter: Istvan Toth


The last successful run was on Jan 26.
We have the usual flakies for the other HBase profiles, but 5.1 is consistently 
failing ~100 tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Timezone handling redux

2023-03-07 Thread Istvan Toth
I have committed the fix both to master and 5.1 (as it needs to be
explicitly enabled)

Reviews and comments are still welcome.

best regards
Istvan

On Fri, Mar 3, 2023 at 7:46 PM Istvan Toth  wrote:

> Hi!
>
> I got one review from Richard on Github - Thank you!
>
> However, even though this is a small (in line count) change, and is off by
> default, it has very high visibility and fundamentally changes the
> date/time semantics of the JDBC interface (even for things like query
> browsers / sqlline), so I'd much prefer to get some eyes on it from
> the SFDC side before committing.
>
> The actual fix is in the *DateUtil#Apply*Displacement* methods, which are
> called from *PhoenixResultSet* and *PhoenixPreparedStatement*,
> and from the *CURRENT_DATE/TIME()* functions. The rest is tests, and some
> (really unrelated) refactoring in PhoenixStatement to avoid re-parsing the
> date configuration parameters.
>
> best regards
> Istvan
>
> On Mon, Feb 27, 2023 at 2:08 PM Istvan Toth  wrote:
>
>> Hi!
>>
>> We already have an old rambling thread on fixing Phoenix time zone
>> handling, but I am starting another one.
>> Please, if you are interested in fixing date handling, take the time to
>> read it; I promise it is much less messy than the old approach.
>>
>>
>> *Background:*
>> The SQL standard defines two groups of date/time types.
>> One is the WITHOUT TIME ZONE types, which are analogous to the
>> java.time.Local* types; the other is the WITH TIME ZONE types, which are
>> analogous to java.time.Instant.
>> Types without qualifiers are interpreted as WITHOUT TIME ZONE types.
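
To make the analogy concrete, a tiny java.time sketch (the class name is only for
illustration):

    import java.time.Instant;
    import java.time.LocalDateTime;
    import java.time.ZoneOffset;

    public class TemporalAnalogySketch {
        public static void main(String[] args) {
            // WITHOUT TIME ZONE: a wall-clock reading with no fixed point on the timeline.
            LocalDateTime wallClock = LocalDateTime.parse("2000-01-01T10:10:10");
            // WITH TIME ZONE semantics: a concrete instant (here the wall clock anchored to UTC).
            Instant instant = wallClock.toInstant(ZoneOffset.UTC);
            System.out.println(wallClock + " -> " + instant);
        }
    }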
>>
>>
>> *Current state of Phoenix*
>> Phoenix follows this in most regards, as it stores temporal types as UTC
>> epoch values, interprets string literals as UTC, and formats them as UTC.
>> Phoenix date functions also operate in UTC, which is consistent with the
>> parse/format code. This results in the behaviour required for WITHOUT TIME
>> ZONE types.
>> However, this breaks down when we begin to use the java.sql temporal
>> types, where Phoenix expects and returns the raw UTC epoch timestamp values.
>>
>> A brief illustration, for selects, but the situation is the same for
>> upsert with PreparedStatement.setTimestamp() and friends:
>>
>>
>> *UPSERT INTO X (DATE) VALUES ('2000-01-01 10:10:10')* stores the UTC
>> epoch for '2000-01-01 10:10:10', regardless of the client timezone.
>>
>>
>> *SELECT * from X; rs.getString("DATE")* returns '2000-01-01 10:10:10',
>> regardless of the client timezone.
>>
>> So far, so good.
>>
>>
>> *SELECT * from X; rs.getTimestamp("DATE").toString()* returns the UTC
>> epoch timestamp interpreted in the local timezone, i.e. a different string
>> for every TZ.
>>
>> This is the root of all of our problems.
>>
>> This problem is built into the ancient JDBC API, which has no real
>> concept of WITH TIMEZONE and WITHOUT TIMEZONE types.
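
To make the illustration above concrete, here is a minimal, hypothetical JDBC snippet
showing the behaviour described (the connection URL and the X/DATE names are only the
ones used in the example above; the printed values assume a client JVM running in GMT+2):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class GetTimestampIllustration {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM X")) {
                rs.next();
                // Formatted by Phoenix in UTC, per the illustration above:
                // "2000-01-01 10:10:10" in every client time zone.
                System.out.println(rs.getString("DATE"));
                // The raw UTC epoch re-interpreted in the local zone:
                // prints "2000-01-01 12:10:10.0" on a GMT+2 client, something else elsewhere.
                System.out.println(rs.getTimestamp("DATE"));
            }
        }
    }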
>>
>> *My previous attempt*
>>
>> Previously, I changed Phoenix to treat all temporal data as WITH
>> TIMEZONE types, which is not wrong, but
>> 1.) Goes against the SQL standard, which defines unqualified temporal types
>> as WITHOUT TIMEZONE
>> 2.) Goes against several assumptions built into the Phoenix codebase, and
>> requires pushing the client TZ to the server side, and
>> - either using ThreadLocals per connection to store this information,
>> - or adding the TZ as a parameter to many/most methods in the parser and
>> compiler code,
>> - or some even deeper refactoring that I haven't figured out yet.
>>
>> Implementing real WITH TIMEZONE types in Phoenix in the future is not off
>> the table, but is a much more involved task.
>> (As evidenced by the time I have already wasted on it)
>>
>> *The new proposal*
>>
>> Instead of re-imagining Phoenix to treat all types as WITH TIMEZONE
>> types, we can keep the current interpretation as WITHOUT TIMEZONE types,
>> and fix the get/set functions very easily.
>> The code change required for this is only a few hundred lines (without
>> tests).
>>
>> The solution is as simple as applying the time zone offset as a
>> displacement in PreparedStatement and ResultSet when setting or returning
>> java.sql.Date/Time/Timestamp.
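
A minimal sketch of that displacement, with hypothetical helper methods under the
semantics described above (not the actual DateUtil code from the patch; DST edge
cases are ignored):

    import java.sql.Timestamp;
    import java.util.TimeZone;

    final class DisplacementSketch {
        // ResultSet direction: the internal value is the epoch of the wall-clock
        // time interpreted as UTC; shift it so that Timestamp.toString() in the
        // client time zone shows the same wall-clock time.
        static Timestamp toClient(long internalUtcMillis, TimeZone clientTz) {
            return new Timestamp(internalUtcMillis - clientTz.getOffset(internalUtcMillis));
        }

        // PreparedStatement direction: take the client-local Timestamp and store
        // the corresponding wall-clock time as a UTC epoch.
        static long toInternal(Timestamp clientTs, TimeZone clientTz) {
            return clientTs.getTime() + clientTz.getOffset(clientTs.getTime());
        }
    }

With a displacement like this applied on both sides, the rs.getTimestamp() call in the
earlier snippet would print '2000-01-01 10:10:10.0' in every client time zone.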
>>
>> This is what most DBs do, and also how the SQL standard defines
>> conversion between WITH TIME ZONE and WITHOUT TIME ZONE types. In this
>> case, the java.sql epoch based types transiting the JDBC interface are
>> treated as WITH TIMEZONE types.
>>
>> This ensures that non-timezone-aware clients get timestamps that, if
>> interpreted in their local timezone, resolve to the same timezone-less date
>> that the parser and formatter would return, and that any java.sql temporal
>> types set in PreparedStatement will be interpreted as the local time in the
>> client TZ.
>>
>> The only other change in functionality is the CURRENT_DATE() and
>> CURRENT_TIME() functions, which now apply the displacement to the generated
>> values.
>>
>> As usual, we hide this change behind a property, and default to the old
>> 

[jira] [Updated] (PHOENIX-6897) Aggregate on unverified index rows return wrong result

2023-03-07 Thread Yunbo Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunbo Fan updated PHOENIX-6897:
---
Description: 
h4. Summary:
Upsert includes three phases; if it fails after phase 1, unverified index rows
are left in the index table. This causes wrong results for aggregate queries.
h4. Steps to reproduce
1. create table and index
{code}
create table students(id integer primary key, name varchar, status integer);
create index students_name_index on students(name, id) include (status);
{code}
2. upsert data using phoenix
{code}
upsert into students values(1, 'tom', 1);
upsert into students values(2, 'jerry', 2);
{code}
3. do phase 1 via the HBase shell: change the STATUS column value to 2 and the
verified column (_0) value to 2, marking the row unverified
{code}
put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:0:STATUS', 
"\x80\x00\x00\x02"
put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:_0', "\x02"
{code}
Note: the HBase shell can't parse a column whose qualifier contains a colon, like
'0:0:STATUS'; you may need to comment out a line in hbase/lib/ruby/hbase/table.rb, see
https://issues.apache.org/jira/browse/HBASE-13788
{code}
# Returns family and (when has it) qualifier for a column name
def parse_column_name(column)
  split = org.apache.hadoop.hbase.KeyValue.parseColumn(column.to_java_bytes)
  # set_converter(split) if split.length > 1   # <- comment this line out
  return split[0], (split.length > 1) ? split[1] : nil
end
{code}
4. run the query without aggregation; the result is correct
{code}
0: jdbc:phoenix:> select status from students where name = 'tom';
++
| STATUS |
++
| 1  |
++
{code}
5. run an aggregate query; the result is wrong
{code}
0: jdbc:phoenix:> select count(*) from students where name = 'tom' and status = 
1;
+--+
| COUNT(1) |
+--+
| 0|
+--+
{code}
6. using the NO_INDEX hint, the result is correct
{code}
0: jdbc:phoenix:> select /*+ NO_INDEX */ count(*) from students where name = 
'tom' and status = 1;
+--+
| COUNT(1) |
+--+
| 1|
+--+
{code}

  was:
h4. Summary:
Upsert includes three phases; if it fails after phase 1, unverified index rows
are left in the index table. This causes wrong results for aggregate queries.
h4. Steps to reproduce
1. create table and index


> Aggregate on unverified index rows return wrong result
> --
>
> Key: PHOENIX-6897
> URL: https://issues.apache.org/jira/browse/PHOENIX-6897
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Yunbo Fan
>Priority: Major
>
> h4. Summary:
> Upsert includes three phases; if it fails after phase 1, unverified index
> rows are left in the index table. This causes wrong results for aggregate
> queries.
> h4. Steps to reproduce
> 1. create table and index
> {code}
> create table students(id integer primary key, name varchar, status integer);
> create index students_name_index on students(name, id) include (status);
> {code}
> 2. upsert data using phoenix
> {code}
> upsert into students values(1, 'tom', 1);
> upsert into students values(2, 'jerry', 2);
> {code}
> 3. do phase 1 via the HBase shell: change the STATUS column value to 2 and
> the verified column (_0) value to 2, marking the row unverified
> {code}
> put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:0:STATUS', 
> "\x80\x00\x00\x02"
> put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:_0', "\x02"
> {code}
> Note: the HBase shell can't parse a column whose qualifier contains a colon, like
> '0:0:STATUS'; you may need to comment out a line in hbase/lib/ruby/hbase/table.rb,
> see https://issues.apache.org/jira/browse/HBASE-13788
> {code}
> # Returns family and (when has it) qualifier for a column name
> def parse_column_name(column)
>   split = org.apache.hadoop.hbase.KeyValue.parseColumn(column.to_java_bytes)
>   # set_converter(split) if split.length > 1   # <- comment this line out
>   return split[0], (split.length > 1) ? split[1] : nil
> end
> {code}
> 4. run the query without aggregation; the result is correct
> {code}
> 0: jdbc:phoenix:> select status from students where name = 'tom';
> ++
> | STATUS |
> ++
> | 1  |
> ++
> {code}
> 5. run an aggregate query; the result is wrong
> {code}
> 0: jdbc:phoenix:> select count(*) from students where name = 'tom' and status 
> = 1;
> +--+
> | COUNT(1) |
> +--+
> | 0|
> +--+
> {code}
> 6. using the NO_INDEX hint, the result is correct
> {code}
> 0: jdbc:phoenix:> select /*+ NO_INDEX */ count(*) from students where name = 
> 'tom' and status = 1;
> +--+
> | COUNT(1) |
> +--+
> | 1|
> +--+
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6897) Aggregate on unverified index rows return wrong result

2023-03-07 Thread Yunbo Fan (Jira)
Yunbo Fan created PHOENIX-6897:
--

 Summary: Aggregate on unverified index rows return wrong result
 Key: PHOENIX-6897
 URL: https://issues.apache.org/jira/browse/PHOENIX-6897
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.2
Reporter: Yunbo Fan


h4. Summary:
Upsert includes three phases; if it fails after phase 1, unverified index rows
are left in the index table. This causes wrong results for aggregate queries.
h4. Steps to reproduce
1. create table and index



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6892) Add support for SqlAlchemy 2.0

2023-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6892:
-
Summary: Add support for SqlAlchemy 2.0  (was: Python Phoenixdb SqlAlchemy 
tests fail with SqlAlchemy 2.0)

> Add support for SqlAlchemy 2.0
> --
>
> Key: PHOENIX-6892
> URL: https://issues.apache.org/jira/browse/PHOENIX-6892
> Project: Phoenix
>  Issue Type: Bug
>  Components: python, queryserver
>Affects Versions: python-phoenixdb-1.2.1
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Let's hope we can fix this without breaking SqlAlchemy 1.x support
> {noformat}
>   
> /home/stoty/workspaces/apache-phoenix/phoenix-queryserver-parent/python-phoenixdb/phoenixdb/tests/test_sqlalchemy.py:161:
>  SADeprecationWarning: The dbapi() classmethod on dialect classes has been 
> renamed to import_dbapi().  Implement an import_dbapi() classmethod directly 
> on class  to remove this 
> warning; the old .dbapi() classmethod may be maintained for backwards 
> compatibility.
>     return db.create_engine(urlunparse(url_parts), tls=tls, 
> connect_args=connect_args)-- Docs: 
> https://docs.pytest.org/en/stable/how-to/capture-warnings.html
> 
>  short test summary info 
> 
> FAILED phoenixdb/tests/test_sqlalchemy.py::SQLAlchemyTest::test_connection - 
> TypeError: Additional arguments should be named _, got 
> 'autoload'
> FAILED phoenixdb/tests/test_sqlalchemy.py::SQLAlchemyTest::test_reflection - 
> sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: 'drop 
> table if exists us_population'
> FAILED 
> phoenixdb/tests/test_sqlalchemy.py::SQLAlchemyTest::test_schema_filtering - 
> sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: 'drop view 
> if exists ALCHEMY_TEST_VIEW'
> FAILED phoenixdb/tests/test_sqlalchemy.py::SQLAlchemyTest::test_textual - 
> sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: 'drop 
> table if exists ALCHEMY_TEST'
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (OMID-239) OMID TLS support

2023-03-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/OMID-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal resolved OMID-239.

Fix Version/s: 1.1.1
   Resolution: Fixed

> OMID TLS support
> 
>
> Key: OMID-239
> URL: https://issues.apache.org/jira/browse/OMID-239
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 1.1.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-239) OMID TLS support

2023-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697369#comment-17697369
 ] 

ASF GitHub Bot commented on OMID-239:
-

richardantal merged PR #129:
URL: https://github.com/apache/phoenix-omid/pull/129




> OMID TLS support
> 
>
> Key: OMID-239
> URL: https://issues.apache.org/jira/browse/OMID-239
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-239) OMID TLS support

2023-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697368#comment-17697368
 ] 

ASF GitHub Bot commented on OMID-239:
-

richardantal commented on PR #129:
URL: https://github.com/apache/phoenix-omid/pull/129#issuecomment-1457982560

   Thanks @stoty for the review!




> OMID TLS support
> 
>
> Key: OMID-239
> URL: https://issues.apache.org/jira/browse/OMID-239
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-4906) Introduce a coprocessor to handle cases where we can block merge for regions of salted table when it is problematic

2023-03-07 Thread Aman Poonia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-4906:
-
Summary: Introduce a coprocessor to handle cases where we can block merge
for regions of salted table when it is problematic  (was: Abnormal query result
due to merging regions of a salted table)

> Introduce a coprocessor to handle cases where we can block merge for regions
> of salted table when it is problematic
> ---
>
> Key: PHOENIX-4906
> URL: https://issues.apache.org/jira/browse/PHOENIX-4906
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.14.0
>Reporter: JeongMin Ju
>Assignee: Aman Poonia
>Priority: Critical
> Attachments: SaltingWithRegionMergeIT.java, 
> ScanRanges_intersectScan.png, TestSaltingWithRegionMerge.java, 
> initial_salting_region.png, merged-region.png
>
>
> For a salted table, when a query is made over the entire data set, a
> different plan is created depending on the form of the query, and as a
> result erroneous data is retrieved.
> {code:java}
> // Actually, the schema of the table I used is different, but please ignore 
> it.
> create table if not exists test.test_table (
>   rk1 varchar not null,
>   rk2 varchar not null,
>   column1 varchar
>   constraint pk primary key (rk1, rk2)
> )
> ...
> SALT_BUCKETS=16...
> ;
> {code}
>  
> I created a table with 16 salted regions and then wrote a lot of data.
> HBase automatically split the regions, and I merged regions to balance data
> between the region servers.
> Then, when running the queries, you can see that a different plan is created
> depending on the WHERE clause.
>  * query1
>  select count\(*) from test.test_table;
> {code:java}
> +---+-++
> |PLAN 
>   | EST_BYTES_READ  | EST_ROWS_READ  |
> +---+-++
> | CLIENT 1851-CHUNK 5005959292 ROWS 1944546675532 BYTES PARALLEL 11-WAY FULL 
> SCAN OVER TEST:TEST_TABLE  | 1944546675532   | 5005959292 |
> | SERVER FILTER BY FIRST KEY ONLY 
>   | 1944546675532   | 5005959292 |
> | SERVER AGGREGATE INTO SINGLE ROW
>   | 1944546675532   | 5005959292 |
> +---+-++
> {code}
>  * query2
>  select count\(*) from test.test_table where rk2 = 'aa';
> {code}
> +---+-++
> |  PLAN   
>   | EST_BYTES_READ  | EST_ROWS_READ  |
> +---+-++
> | CLIENT 1846-CHUNK 4992196444 ROWS 1939177965768 BYTES PARALLEL 11-WAY RANGE 
> SCAN OVER TEST:TEST_TABLE [0] - [15]  | 1939177965768   | 4992196444 |
> | SERVER FILTER BY FIRST KEY ONLY AND RK2 = 'aa'  
>   | 1939177965768   | 4992196444 |
> | SERVER AGGREGATE INTO SINGLE ROW
>   | 1939177965768   | 4992196444 |
> +---+-++
> {code}
> Since rk2, used in the where clause of query2, is the second column of the PK,
> query2 should be a full scan like query1.
> However, as you can see, query2 is planned as a range scan, and it generates
> five fewer chunks than query1.
> I added logging and printed the start key and end key of each scan object
> generated by the plan, and found that 5 chunks were missing for query2.
> All five missing chunks were in regions where the originally generated
> region boundary value was not maintained through the merge operation.
> !initial_salting_region.png!
> After merging regions
> !merged-region.png!
> The code that caused the problem is this part.
>  When a select query is executed, the 
> 

[jira] [Created] (PHOENIX-6896) Document Phoenix Date/Time quirks

2023-03-07 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6896:


 Summary: Document Phoenix Date/Time quirks
 Key: PHOENIX-6896
 URL: https://issues.apache.org/jira/browse/PHOENIX-6896
 Project: Phoenix
  Issue Type: Wish
Reporter: Istvan Toth


Date/Time handling in Phoenix is non-standard.
Document its quirks, both for the 5.1.3 state and for the improvements tracked in
PHOENIX-6882.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6896) Document Phoenix Date/Time quirks

2023-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6896:
-
Issue Type: Task  (was: Wish)

> Document Phoenix Date/Time quirks
> -
>
> Key: PHOENIX-6896
> URL: https://issues.apache.org/jira/browse/PHOENIX-6896
> Project: Phoenix
>  Issue Type: Task
>Reporter: Istvan Toth
>Priority: Major
>
> Date/Time handling in Phoenix is non-standard.
> Document its quirks, both for the 5.1.3 state and for the improvements tracked
> in PHOENIX-6882.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6896) Document Phoenix Date/Time quirks

2023-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-6896:


Assignee: Istvan Toth

> Document Phoenix Date/Time quirks
> -
>
> Key: PHOENIX-6896
> URL: https://issues.apache.org/jira/browse/PHOENIX-6896
> Project: Phoenix
>  Issue Type: Task
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Date/Time handling in Phoenix is non-standard.
> Document its quirks, both for the 5.1.3 state and for the improvements tracked
> in PHOENIX-6882.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6895) Consider implementing WITH TIMEZONE temporal types

2023-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6895:
-
Description: 
Phoenix currently treats temporal types as WITHOUT TIMEZONE (LocalDate-like)
types.

We could implement WITH TIMEZONE types that behave like Instants.

See the discussion in PHOENIX-5066, and the abandoned PR for PHOENIX-5066 on 
how this could work.

[https://github.com/apache/phoenix/pull/1504]

  was:
Phoenix currently treats temporal types as WITHOUT TIMEZONE (LocalDate-like)
types.

We could implement WITH TIMEZONE types that behave like Instants.

See the discussion in PHOENIX-5066, and the abandoned PRs for PHOENIX-5066 on 
how this could work.

[https://github.com/apache/phoenix/pull/1504]
https://github.com/apache/phoenix/pull/1567


> Consider implementing WITH TIMEZONE temporal types
> --
>
> Key: PHOENIX-6895
> URL: https://issues.apache.org/jira/browse/PHOENIX-6895
> Project: Phoenix
>  Issue Type: Wish
>  Components: core
>Reporter: Istvan Toth
>Priority: Minor
>
> Phoenix currently treats temporal types as WITHOUT TIMEZONE (LocalDate-like)
> types.
> We could implement WITH TIMEZONE types that behave like Instants.
> See the discussion in PHOENIX-5066, and the abandoned PR for PHOENIX-5066 on 
> how this could work.
> [https://github.com/apache/phoenix/pull/1504]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6895) Consider implementing WITH TIMEZONE temporal types

2023-03-07 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6895:
-
Description: 
Phoenix currently treats temporal types as WITHOUT TIMEZONE (LocalDate-like)
types.

We could implement WITH TIMEZONE types that behave like Instants.

See the discussion in PHOENIX-5066, and the abandoned PRs for PHOENIX-5066 on 
how this could work.

[https://github.com/apache/phoenix/pull/1504]
https://github.com/apache/phoenix/pull/1567

  was:
Phoenix currently treats temporal types as WITHOUT TIMEZONE (LocalDate-like)
types.

We could implement WITH TIMEZONE types that behave like Instants.


> Consider implementing WITH TIMEZONE temporal types
> --
>
> Key: PHOENIX-6895
> URL: https://issues.apache.org/jira/browse/PHOENIX-6895
> Project: Phoenix
>  Issue Type: Wish
>  Components: core
>Reporter: Istvan Toth
>Priority: Minor
>
> Phoenix currently treats temporal types as WITHOUT TIMEZONE (LocalDate-like)
> types.
> We could implement WITH TIMEZONE types that behave like Instants.
> See the discussion in PHOENIX-5066, and the abandoned PRs for PHOENIX-5066 on 
> how this could work.
> [https://github.com/apache/phoenix/pull/1504]
> https://github.com/apache/phoenix/pull/1567



--
This message was sent by Atlassian Jira
(v8.20.10#820010)