[jira] [Updated] (PHOENIX-5217) Incorrect result for COUNT DISTINCT limit
[ https://issues.apache.org/jira/browse/PHOENIX-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenglei updated PHOENIX-5217:
------------------------------
    Summary: Incorrect result for COUNT DISTINCT limit  (was: Incorrect result for COUNT DISTINCT ... limit ...)

> Incorrect result for COUNT DISTINCT limit
> ------------------------------------------
>
>                 Key: PHOENIX-5217
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5217
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.1
>         Environment: 4.14.1: incorrect; 4.6: correct
>            Reporter: Chen Feng
>            Assignee: chenglei
>            Priority: Critical
>             Fix For: 4.15.0, 5.1.0, 4.14.2
>
>         Attachments: PHOENIX-5217_v1-4.x-HBase-1.4.patch
>
>
> For a table t1(pk1, col1, CONSTRAINT(pk1)):
> upsert into "t1" values (1, 1);
> upsert into "t1" values (2, 2);
> SQL A: select count("pk1") from "t1" limit 1 returns 2 [correct]
> SQL B: select count(distinct("pk1")) from "t1" limit 1 returns 1 [incorrect]

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (PHOENIX-5188) IndexedKeyValue should populate KeyValue fields
[ https://issues.apache.org/jira/browse/PHOENIX-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas D'Silva updated PHOENIX-5188:
------------------------------------
    Fix Version/s: 4.14.2

> IndexedKeyValue should populate KeyValue fields
> -----------------------------------------------
>
>                 Key: PHOENIX-5188
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5188
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.0.0, 4.14.1
>            Reporter: Geoffrey Jacoby
>            Assignee: Geoffrey Jacoby
>            Priority: Major
>             Fix For: 4.15.0, 4.14.2, 5.1
>
>         Attachments: PHOENIX-5188-4.x-HBase-1.4..addendum.patch, PHOENIX-5188-4.x-HBase-1.4.patch, PHOENIX-5188.patch
>
>
> IndexedKeyValue subclasses the HBase KeyValue class, which has three primary
> fields: bytes, offset, and length. These fields aren't populated by
> IndexedKeyValue because it's concerned with index mutations and has its own
> fields that its own methods use.
> However, KeyValue and its Cell interface have quite a few methods that assume
> these fields are populated, and the HBase-level factory methods generally
> ensure they're populated. Phoenix code should do the same, to maintain the
> polymorphic contract. This is important in cases like custom
> ReplicationEndpoints, where HBase-level code may be iterating over WALEdits
> that contain both KeyValues and IndexedKeyValues and may need to interrogate
> their contents.
> Since the index mutation has a row key, this is straightforward.
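The polymorphic contract the ticket describes can be sketched in plain Java. This is a hypothetical illustration, not the actual Phoenix or HBase classes: a subclass that carries its own payload but still populates the parent's bytes/offset/length fields, so code that only sees the base type keeps working.

```java
// Minimal sketch (hypothetical names, no HBase dependency): keep inherited
// accessors working by populating the superclass fields from the row key.
class BaseCell {
    protected byte[] bytes;
    protected int offset;
    protected int length;

    BaseCell(byte[] bytes, int offset, int length) {
        this.bytes = bytes;
        this.offset = offset;
        this.length = length;
    }

    // These accessors assume the fields above are populated.
    int getLength() { return length; }
    byte byteAt(int i) { return bytes[offset + i]; }
}

class IndexedCell extends BaseCell {
    private final byte[] indexPayload; // subclass-specific state

    // Populate the parent fields instead of leaving them null/0, so code that
    // iterates over a mixed collection of BaseCells (e.g. a WALEdit) can still
    // interrogate this cell's contents.
    IndexedCell(byte[] rowKey, byte[] indexPayload) {
        super(rowKey, 0, rowKey.length);
        this.indexPayload = indexPayload;
    }
}
```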
[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging
[ https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas D'Silva updated PHOENIX-5172:
------------------------------------
    Fix Version/s: 4.14.2

> Harden queryserver canary tool with retries and effective logging
> ------------------------------------------------------------------
>
>                 Key: PHOENIX-5172
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5172
>             Project: Phoenix
>          Issue Type: Improvement
>    Affects Versions: 4.13.1
>            Reporter: Swaroopa Kadam
>            Assignee: Swaroopa Kadam
>            Priority: Minor
>             Fix For: 4.15.0, 5.1.0, 4.14.2
>
>         Attachments: phoenix-5172-4.x-1.3.patch, phoenix-5172.4.x-HBase-1.3.v1.patch, phoenix-5172.4.x-HBase-1.3.v2.patch, phoenix-5172.4.x-HBase-1.3.v3.patch, phoenix-5172.4.x-HBase-1.3.v4.patch
>
>          Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> # Add retry logic when getting the connection URL
> # Remove assigning schema_name to null
> # Add more logging
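The retry item above could look something like the following. This is a hedged sketch of the general pattern, not the attached patch: retry a fallible operation (such as a connection-URL lookup) a bounded number of times, logging each failure, and rethrow once attempts are exhausted.

```java
import java.util.concurrent.Callable;

// Generic bounded-retry wrapper (illustrative; names are hypothetical).
final class Retry {
    static <T> T withRetries(Callable<T> task, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                // Effective logging: record which attempt failed and why.
                System.err.println("attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        throw last; // all attempts exhausted
    }
}
```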
[jira] [Updated] (PHOENIX-4832) Add Canary Test Tool for Phoenix Query Server
[ https://issues.apache.org/jira/browse/PHOENIX-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas D'Silva updated PHOENIX-4832:
------------------------------------
    Fix Version/s: 4.14.1

> Add Canary Test Tool for Phoenix Query Server
> ----------------------------------------------
>
>                 Key: PHOENIX-4832
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4832
>             Project: Phoenix
>          Issue Type: Improvement
>    Affects Versions: 5.0.0, 4.14.1
>            Reporter: Ashutosh Parekh
>            Assignee: Swaroopa Kadam
>            Priority: Minor
>             Fix For: 4.15.0, 4.14.1, 5.1.0
>
>         Attachments: PHOENIX-4832-4.x-HBase-1.4.patch, PHOENIX-4832-4.x-HBase-1.4.patch, PHOENIX-4832-master.patch, PHOENIX-4832.patch
>
>
> A suggested improvement is to add a Canary Test tool to the Phoenix Query
> Server. It will execute a set of basic tests (CRUD) against a PQS end-point
> and report on proper functioning and test results. A configurable Log Sink
> can help publish the results as required.
[jira] [Updated] (PHOENIX-5018) Index mutations created by UPSERT SELECT will have wrong timestamps
[ https://issues.apache.org/jira/browse/PHOENIX-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas D'Silva updated PHOENIX-5018:
------------------------------------
    Fix Version/s: 4.14.2

> Index mutations created by UPSERT SELECT will have wrong timestamps
> --------------------------------------------------------------------
>
>                 Key: PHOENIX-5018
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5018
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.0, 5.0.0
>            Reporter: Geoffrey Jacoby
>            Assignee: Kadir OZDEMIR
>            Priority: Major
>             Fix For: 4.15.0, 4.14.2, 5.1
>
>         Attachments: PHOENIX-5018.4.x-HBase-1.3.001.patch, PHOENIX-5018.4.x-HBase-1.3.002.patch, PHOENIX-5018.4.x-HBase-1.4.001.patch, PHOENIX-5018.4.x-HBase-1.4.002.patch, PHOENIX-5018.master.001.patch, PHOENIX-5018.master.002.patch, PHOENIX-5018.master.003.patch, PHOENIX-5018.master.004.patch
>
>          Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When doing a full rebuild (or initial async build) of a local or global index
> using IndexTool and PhoenixIndexImportDirectMapper, or doing a synchronous
> initial build of a global index using the index create DDL, we generate the
> index mutations by using an UPSERT SELECT query from the base table to the
> index.
> The timestamps of the mutations use the default HBase behavior, which is to
> take the current wall clock time. However, the timestamp of an index KeyValue
> should use the timestamp of the initial KeyValue in the base table.
> Having base table and index timestamps out of sync can cause all sorts of
> weird side effects, such as the base table having data with an expired TTL
> that isn't expired in the index yet. Also, inserting old mutations with new
> timestamps may overwrite data that has been newly written by the regular data
> path during the index build, which would lead to data loss and inconsistency
> issues.
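The timestamp rule above can be sketched in plain Java, with no HBase types and hypothetical names: when generating an index cell from a base-table cell, carry the source timestamp forward rather than defaulting to the current wall clock.

```java
// Illustrative model of a versioned cell (not the HBase Cell interface).
class SimpleCell {
    final byte[] row;
    final byte[] value;
    final long timestamp;

    SimpleCell(byte[] row, byte[] value, long timestamp) {
        this.row = row;
        this.value = value;
        this.timestamp = timestamp;
    }
}

class IndexCellBuilder {
    // Buggy variant would use System.currentTimeMillis() here. Reusing the
    // base cell's timestamp keeps TTL expiry and overwrite semantics
    // consistent between the base table and the index.
    static SimpleCell buildIndexCell(byte[] indexRow, SimpleCell base) {
        return new SimpleCell(indexRow, base.value, base.timestamp);
    }
}
```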
[jira] [Updated] (PHOENIX-5173) LIKE and ILIKE statements return empty result list for search without wildcard
[ https://issues.apache.org/jira/browse/PHOENIX-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas D'Silva updated PHOENIX-5173:
------------------------------------
    Fix Version/s: 4.14.2
                   5.1.0
                   4.15.0

> LIKE and ILIKE statements return empty result list for search without wildcard
> -------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5173
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5173
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>            Reporter: Emiliia Nesterovych
>            Assignee: Swaroopa Kadam
>            Priority: Blocker
>             Fix For: 4.15.0, 5.1.0, 4.14.2
>
>         Attachments: PHOENIX-5173.4.x-HBase-1.3.v1.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I expect these two statements to return the same result, as MySQL does:
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME = 'Some Name';
> {code}
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME LIKE 'Some Name';
> {code}
> But while there is data for these scripts, the statement with the "LIKE"
> operator returns an empty result set. The same affects the "ILIKE" operator.
> The CREATE TABLE SQL is:
> {code:java}
> CREATE SCHEMA IF NOT EXISTS my_schema;
> CREATE TABLE my_schema.user (USER_NAME VARCHAR(255), ID BIGINT NOT NULL PRIMARY KEY);
> {code}
> Fill-up query:
> {code:java}
> UPSERT INTO my_schema.user VALUES('Some Name', 1);
> {code}
[jira] [Created] (PHOENIX-5249) Implicit multi-tenancy by naming convention
Geoffrey Jacoby created PHOENIX-5249:
-------------------------------------

             Summary: Implicit multi-tenancy by naming convention
                 Key: PHOENIX-5249
                 URL: https://issues.apache.org/jira/browse/PHOENIX-5249
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Geoffrey Jacoby

Many data models have a naming convention for the tenantId. It would be useful
to define that convention with an optional Phoenix property in config, so if a
tenant-specific connection is opened and a table is queried that has that
tenantId column name in its PK, multi-tenant behavior automatically occurs even
if MULTI_TENANT is not set in the table's metadata. (If MULTI_TENANT is
explicitly false, multi-tenant behavior would not occur.)
[jira] [Assigned] (PHOENIX-5249) Implicit multi-tenancy by naming convention
[ https://issues.apache.org/jira/browse/PHOENIX-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Geoffrey Jacoby reassigned PHOENIX-5249:
----------------------------------------
    Assignee: Geoffrey Jacoby

> Implicit multi-tenancy by naming convention
> -------------------------------------------
>
>                 Key: PHOENIX-5249
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5249
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Geoffrey Jacoby
>            Assignee: Geoffrey Jacoby
>            Priority: Major
>
> Many data models have a naming convention for the tenantId. It would be
> useful to define that convention with an optional Phoenix property in config,
> so if a tenant-specific connection is opened and a table is queried that has
> that tenantId column name in its PK, multi-tenant behavior automatically
> occurs even if MULTI_TENANT is not set in the table's metadata. (If
> MULTI_TENANT is explicitly false, multi-tenant behavior would not occur.)
[jira] [Created] (PHOENIX-5248) Allow MULTI_TENANT to use any PK column
Geoffrey Jacoby created PHOENIX-5248:
-------------------------------------

             Summary: Allow MULTI_TENANT to use any PK column
                 Key: PHOENIX-5248
                 URL: https://issues.apache.org/jira/browse/PHOENIX-5248
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Geoffrey Jacoby
            Assignee: Geoffrey Jacoby

Phoenix's multi-tenancy support is incredibly useful, because it allows systems
to give users connections that transparently filter a multi-tenant environment
to return only their data. However, it's only supported for the leading column
of the PK and has to be manually enabled per table.

One common use case I've encountered is a multi-tenant table whose keyspace is
fully covered by disjoint views, with the views all filtering on enumerations
of the same PK column -- let's call it ViewId. The most natural way to
represent that is by a PK of (ViewId, TenantId), which would allow fast lookups
by tenant connections AND fast cross-tenant queries by admin processes.
However, multi-tenancy requires the key be (TenantId, ViewId), which is only
fast for the tenant connections, not global processes.

It would be great if I could set a property on the table,
MULTI_TENANT_COLUMN = "TenantId", and have that column used for auto-filtering
instead.
[jira] [Resolved] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
[ https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jaanai resolved PHOENIX-5226.
-----------------------------
    Resolution: Fixed

> The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -----------------------------------------------------------------------------
>
>                 Key: PHOENIX-5226
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5226
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.15.0, 5.1.0
>            Reporter: jaanai
>            Assignee: jaanai
>            Priority: Critical
>             Fix For: 4.15.0, 5.1.0
>
>         Attachments: PHOENIX-5226-master-v2.patch, PHOENIX-5226-master-v3.patch, PHOENIX-5226-master.patch, Screen Shot 2019-04-01 at 16.09.23.png, Screen Shot 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties should not be derived from
> the base table for a view. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag
> bytes, but its format is incorrect. The following is a reference from the
> KeyValue interface:
> {quote}KeyValue can optionally contain Tags. When it contains tags, it is
> added in the byte array after the value part. The format for this part is:
> <tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes
> contain one or more tags where as each tag is of the form
> <taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is
> Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes
> length.{quote}
> The CATALOG table will be badly affected. Some errors will be caused when
> reading the CATALOG table:
>
> {code:java}
> 0: jdbc:phoenix:thin:url=http://localhost> drop view "test_2";
> Error: Error -1 (0) : Error while executing SQL "drop view "test_2"": Remote driver error: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: test_2: 4
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2729)
> at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17078)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8210)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2475)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2457)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:136)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 4
> at org.apache.hadoop.hbase.ArrayBackedTag.<init>(ArrayBackedTag.java:97)
> at org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1107)
> at org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1094)
> at org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.isCellTTLExpired(ScanQueryMatcher.java:153)
> at org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:198)
> at org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher.match(NormalUserScanQueryMatcher.java:64)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:578)
> {code}
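The tag layout quoted from the KeyValue interface can be sketched as follows. This is an illustrative encoder, not Phoenix's fix: each tag is <taglength><tagtype><tagbytes>, where taglength is a 2-byte short counting the 1-byte type plus the tag's value bytes (but not the length field itself).

```java
import java.nio.ByteBuffer;

// Hypothetical encoder for a single HBase-style cell tag.
final class TagEncoder {
    static byte[] encodeTag(byte tagType, byte[] tagValue) {
        // taglength includes the 1-byte type and the value bytes.
        short tagLength = (short) (1 + tagValue.length);
        ByteBuffer buf = ByteBuffer.allocate(2 + tagLength); // 2-byte length prefix
        buf.putShort(tagLength); // <taglength>
        buf.put(tagType);        // <tagtype>
        buf.put(tagValue);       // <tagbytes>
        return buf.array();
    }
}
```

Getting this length accounting wrong is exactly the kind of mistake that produces the ArrayIndexOutOfBoundsException in the stack trace above, since the tag iterator trusts taglength to find the next tag.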
[jira] [Updated] (PHOENIX-5247) DROP TABLE and DROP VIEW commands fail to drop second or higher level child views
[ https://issues.apache.org/jira/browse/PHOENIX-5247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kadir OZDEMIR updated PHOENIX-5247:
-----------------------------------
    Attachment: PHOENIX-5247.4.14-HBase-1.2.001.patch

> DROP TABLE and DROP VIEW commands fail to drop second or higher level child views
> ----------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5247
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5247
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.2
>            Reporter: Kadir OZDEMIR
>            Assignee: Kadir OZDEMIR
>            Priority: Major
>             Fix For: 4.14.2
>
>         Attachments: PHOENIX-5247.4.14-HBase-1.2.001.patch, PHOENIX-5247.4.14.1-HBase-1.2.001.patch
>
>
> We have seen a large number of orphan views in our production environments.
> The method used to drop tables and views (doDropTable) drops only the
> first-level child views of tables. This seems to be the main root cause for
> orphan views. doDropTable() is recursive only when the table type is TABLE or
> SYSTEM; the table type for views is VIEW. The findChildViews method returns
> only the first-level child views, so doDropTable ignores dropping views of
> views (i.e., second or higher level views).
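The bug pattern described above can be modeled with a toy view hierarchy. This is a hedged sketch with hypothetical names, not the actual Phoenix metadata code: a drop that only visits first-level children leaves grandchild views orphaned, while a recursive drop visits every level.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy model of a table's view hierarchy (illustrative only).
final class ViewTree {
    final Map<String, List<String>> children = new HashMap<>();

    void addView(String parent, String child) {
        children.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
    }

    List<String> findChildViews(String name) {
        return children.getOrDefault(name, new ArrayList<>());
    }

    // Buggy behavior: drops the table and only its direct child views,
    // leaving second- and higher-level views orphaned.
    Set<String> dropFirstLevelOnly(String table) {
        Set<String> dropped = new HashSet<>();
        dropped.add(table);
        dropped.addAll(findChildViews(table));
        return dropped;
    }

    // Fixed behavior: recurse into each child so every descendant is dropped.
    Set<String> dropRecursively(String table) {
        Set<String> dropped = new HashSet<>();
        dropped.add(table);
        for (String child : findChildViews(table)) {
            dropped.addAll(dropRecursively(child));
        }
        return dropped;
    }
}
```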