[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data
[ https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Wang updated PHOENIX-5860:
-------------------------------
    Attachment: PHOENIX-5860-4.x.001.patch

> Throw exception which region is closing or splitting when delete data
> ----------------------------------------------------------------------
>
>                 Key: PHOENIX-5860
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5860
>             Project: Phoenix
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 4.13.1, 4.x
>            Reporter: Chao Wang
>            Assignee: Chao Wang
>            Priority: Blocker
>             Fix For: 4.x
>
>         Attachments: PHOENIX-5860-4.x.001.patch, PHOENIX-5860-4.x.patch, PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Deletes are currently handled on the server side by the UngroupedAggregateRegionObserver class, which checks the isRegionClosingOrSplitting flag. When the flag is true, it throws new IOException("Temporarily unable to write from scan because region is closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes isRegionClosingOrSplitting to false. Before a region split, the flag is set to true. However, if the split fails, the rollback does not set isRegionClosingOrSplitting back to false. After that, every write operation keeps throwing the "Temporarily unable to write from scan because region is closing or splitting" exception.
> So we should set isRegionClosingOrSplitting back to false in preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test: a data table split fails and the rollback succeeds, but deleting data keeps throwing the exception afterwards.
> # Create a data table.
> # Bulk-load data into the table.
> # Modify the hbase-server code so that the region split throws an exception and rolls back.
> # Split the region from the HBase shell.
> # Check the region server log: the split fails, then the rollback succeeds.
> # Delete data with Phoenix sqlline.py; the delete throws the following exception:
> Caused by: java.io.IOException: Temporarily unable to write from scan because region is closing or splitting
> at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
> ... 5 more
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
> at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
> at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
> at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
> at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
> at org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
> at
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
> at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$
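The failure mode described in PHOENIX-5860 can be sketched independently of HBase: a guard flag set before a split must also be cleared when a failed split rolls back, otherwise every later write is rejected forever. A minimal plain-Java model; the method names mirror the coprocessor hooks named in the issue but this is an illustration, not the real UngroupedAggregateRegionObserver code.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal model of the isRegionClosingOrSplitting guard from PHOENIX-5860.
public class SplitFlagModel {
    private final AtomicBoolean closingOrSplitting = new AtomicBoolean(false);

    public void preSplit()         { closingOrSplitting.set(true); }   // split starts
    // The fix: a rolled-back split must reset the flag.
    public void preRollBackSplit() { closingOrSplitting.set(false); }

    public boolean isBlocked()     { return closingOrSplitting.get(); }

    public void write() {
        if (closingOrSplitting.get()) {
            throw new IllegalStateException(
                "Temporarily unable to write from scan because region is closing or splitting");
        }
    }

    public static void main(String[] args) {
        SplitFlagModel region = new SplitFlagModel();
        region.write();             // ok: region online, flag initialized to false
        region.preSplit();          // split begins, writes now blocked
        region.preRollBackSplit();  // split fails and rolls back
        region.write();             // ok again only because the rollback reset the flag
    }
}
```

Without the reset in preRollBackSplit, the final write() would throw on every attempt, which is exactly the stuck state the patch fixes.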
[jira] [Updated] (PHOENIX-6214) client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true
[ https://issues.apache.org/jira/browse/PHOENIX-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Wang updated PHOENIX-6214:
-------------------------------
Description:
After setting phoenix.schema.isNamespaceMappingEnabled to true, run use "SYSTEM" in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException is thrown when entering sqlline again. As shown below:

org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 (42M04): Schema with given name already exists schemaName=SYSTEM
 at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
 at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

was:
After setting phoenix.schema.isNamespaceMappingEnabled to true, run use "SYSTEM" in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException is thrown when entering sqlline again.
As shown below:

org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 (42M04): Schema with given name already exists schemaName=SYSTEM
 at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
 at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

> client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-6214
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6214
>             Project: Phoenix
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 4.13.1
>            Reporter: Chao Wang
>            Assignee: Chao Wang
>            Priority: Blocker
>
> After setting phoenix.schema.isNamespaceMappingEnabled to true, run use "SYSTEM" in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException is thrown when entering sqlline again. As shown below:
> org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 (42M04): Schema with given name already exists schemaName=SYSTEM
> at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
> at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (PHOENIX-6214) client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true
Chao Wang created PHOENIX-6214:
----------------------------------

             Summary: client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true
                 Key: PHOENIX-6214
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6214
             Project: Phoenix
          Issue Type: Bug
          Components: core
    Affects Versions: 4.13.1
            Reporter: Chao Wang
            Assignee: Chao Wang

After setting phoenix.schema.isNamespaceMappingEnabled to true, run use "SYSTEM" in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException is thrown when entering sqlline again. As shown below:

org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 (42M04): Schema with given name already exists schemaName=SYSTEM
 at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
 at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
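The reconnect failure in PHOENIX-6214 suggests the schema-creation step is not idempotent: the second connection re-issues the creation of the SYSTEM schema and surfaces the "already exists" error instead of treating it as a no-op. A plain-Java sketch of the create-if-absent pattern; SchemaRegistry and its methods are illustrative assumptions, not the actual MetaDataClient code.

```java
import java.util.HashSet;
import java.util.Set;

// Idempotent create-if-absent pattern: a schema that already exists on
// reconnect should be a benign no-op, not an error. Illustrative model only.
public class SchemaRegistry {
    private final Set<String> schemas = new HashSet<>();

    /** Strict create: mimics the failing path, erroring on an existing schema. */
    public void createSchema(String name) {
        if (!schemas.add(name)) {
            throw new IllegalStateException(
                "ERROR 721 (42M04): Schema with given name already exists schemaName=" + name);
        }
    }

    /** Tolerant create: returns false instead of throwing when the schema exists. */
    public boolean createSchemaIfAbsent(String name) {
        return schemas.add(name);
    }

    public static void main(String[] args) {
        SchemaRegistry registry = new SchemaRegistry();
        registry.createSchemaIfAbsent("SYSTEM"); // first connection: creates it
        registry.createSchemaIfAbsent("SYSTEM"); // reconnect: silently a no-op
    }
}
```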
[jira] [Updated] (PHOENIX-6209) Remove unused estimateParallelLevel()
[ https://issues.apache.org/jira/browse/PHOENIX-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Wang updated PHOENIX-6209:
-------------------------------
Attachment: (was: PHOENIX-6209.master.v1.patch)

> Remove unused estimateParallelLevel()
> -------------------------------------
>
>                 Key: PHOENIX-6209
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6209
>             Project: Phoenix
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 5.1.0, 4.16.0
>            Reporter: Chao Wang
>            Assignee: Chao Wang
>            Priority: Minor
>             Fix For: 5.1.0, 4.16.0
>
>         Attachments: PHOENIX-6209.master.001.patch
>
> There is code like "parallelLevel2 = CostUtil.estimateParallelLevel()" in HashJoinPlan.java, but parallelLevel2 is never used, so we can remove it.
[jira] [Updated] (PHOENIX-6209) Remove unused estimateParallelLevel()
[ https://issues.apache.org/jira/browse/PHOENIX-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Wang updated PHOENIX-6209:
-------------------------------
Attachment: PHOENIX-6209.master.001.patch

> Remove unused estimateParallelLevel()
> -------------------------------------
>
>                 Key: PHOENIX-6209
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6209
>             Project: Phoenix
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 5.1.0, 4.16.0
>            Reporter: Chao Wang
>            Assignee: Chao Wang
>            Priority: Minor
>             Fix For: 5.1.0, 4.16.0
>
>         Attachments: PHOENIX-6209.master.001.patch, PHOENIX-6209.master.v1.patch
>
> There is code like "parallelLevel2 = CostUtil.estimateParallelLevel()" in HashJoinPlan.java, but parallelLevel2 is never used, so we can remove it.
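The change in PHOENIX-6209 removes a dead store: a value is computed and assigned but never read, so deleting the assignment cannot change behavior. A simplified stand-alone illustration of the pattern (the real code is in HashJoinPlan.java; names here are hypothetical):

```java
// Simplified illustration of the dead store removed by PHOENIX-6209.
public class DeadStoreExample {
    // Stand-in for CostUtil.estimateParallelLevel(); the real method
    // estimates query parallelism from cluster/plan statistics.
    static int estimateParallelLevel() { return 8; }

    /** Before: parallelLevel2 is computed but never read afterwards. */
    public static int planBefore() {
        int parallelLevel2 = estimateParallelLevel(); // dead store
        return 1; // the result does not depend on parallelLevel2
    }

    /** After: identical behavior, one less misleading line. */
    public static int planAfter() {
        return 1;
    }

    public static void main(String[] args) {
        // The two variants are observably identical, which is what makes
        // the removal a safe, behavior-preserving cleanup.
        System.out.println(planBefore() == planAfter());
    }
}
```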
[jira] [Updated] (PHOENIX-6033) Unable to add back a parent column that was earlier dropped from a view
[ https://issues.apache.org/jira/browse/PHOENIX-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xinyi Yan updated PHOENIX-6033:
-------------------------------
Fix Version/s: (was: 4.16.0)

> Unable to add back a parent column that was earlier dropped from a view
> ------------------------------------------------------------------------
>
>                 Key: PHOENIX-6033
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6033
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.0.0, 4.15.0
>            Reporter: Chinmay Kulkarni
>            Priority: Major
>             Fix For: 5.1.0, 4.16.1, 4.17.0
>
> In 4.14.3, we allowed adding a column (with the same name as a column inherited from the parent) back to a view after it was dropped in the past. In 4.x this is no longer allowed.
> Start a 4.x server and run the following with a 4.x client:
> {code:sql}
> CREATE TABLE IF NOT EXISTS T (A INTEGER PRIMARY KEY, B INTEGER, C VARCHAR, D INTEGER);
> -- create view
> CREATE VIEW IF NOT EXISTS V (VA INTEGER, VB INTEGER) AS SELECT * FROM T WHERE B=200;
> UPSERT INTO V(A,B,C,D,VA,VB) VALUES (2, 200, 'def', -20, 91, 101);
> ALTER VIEW V DROP COLUMN C;
> SELECT * FROM V;
> +----+------+------+-----+------+
> | A  | B    | D    | VA  | VB   |
> +----+------+------+-----+------+
> | 2  | 200  | -20  | 91  | 101  |
> +----+------+------+-----+------+
> ALTER VIEW C ADD C VARCHAR;
> -- The above add column step throws an error. It used to work before 4.15.
> {code}
> The stack trace for the error thrown is:
> {code:java}
> Error: ERROR 1012 (42M03): Table undefined. tableName=C (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=C
> at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:777)
> at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:442)
> at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:434)
> at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:425)
> at org.apache.phoenix.compile.FromCompiler.getResolver(FromCompiler.java:277)
> at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3627)
> at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1488)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:415)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:397)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:396)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:384)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1886)
> at sqlline.Commands.execute(Commands.java:814)
> at sqlline.Commands.sql(Commands.java:754)
> at sqlline.SqlLine.dispatch(SqlLine.java:646)
> at sqlline.SqlLine.begin(SqlLine.java:510)
> at sqlline.SqlLine.start(SqlLine.java:233)
> at sqlline.SqlLine.main(SqlLine.java:175)
> {code}
[jira] [Assigned] (PHOENIX-6184) Emit ageOfUnverifiedRow metric during read repairs
[ https://issues.apache.org/jira/browse/PHOENIX-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani reassigned PHOENIX-6184:
-------------------------------------
Assignee: Viraj Jasani

> Emit ageOfUnverifiedRow metric during read repairs
> --------------------------------------------------
>
>                 Key: PHOENIX-6184
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6184
>             Project: Phoenix
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 4.x
>            Reporter: Priyank Porwal
>            Assignee: Viraj Jasani
>            Priority: Minor
>             Fix For: 4.x
>
> When index reads cause read repairs, the age of the repaired rows would be a useful metric to gauge. It would help expose potential problems in the write phase, perhaps in concurrent handling, replication, and/or other bugs.
[jira] [Updated] (PHOENIX-6184) Emit ageOfUnverifiedRow metric during read repairs
[ https://issues.apache.org/jira/browse/PHOENIX-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani updated PHOENIX-6184:
----------------------------------
Fix Version/s: (was: 4.x)
               4.16.0
               5.1.0

> Emit ageOfUnverifiedRow metric during read repairs
> --------------------------------------------------
>
>                 Key: PHOENIX-6184
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6184
>             Project: Phoenix
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 4.x
>            Reporter: Priyank Porwal
>            Assignee: Viraj Jasani
>            Priority: Minor
>             Fix For: 5.1.0, 4.16.0
>
> When index reads cause read repairs, the age of the repaired rows would be a useful metric to gauge. It would help expose potential problems in the write phase, perhaps in concurrent handling, replication, and/or other bugs.
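The metric proposed in PHOENIX-6184 is essentially the gap between the moment a read repair touches an unverified index row and that row's cell timestamp. A sketch of the computation; the method and class names here are assumptions, not the actual Phoenix metrics API.

```java
// Sketch of the ageOfUnverifiedRow computation for PHOENIX-6184.
// Names are illustrative; the real metric would be emitted through
// Phoenix's server-side metrics plumbing during read repair.
public class ReadRepairMetrics {
    /**
     * Age of an unverified row at the moment a read repair processes it.
     * Clamped at zero in case of clock skew between writer and repairer.
     */
    public static long ageOfUnverifiedRowMs(long rowCellTimestampMs, long repairTimeMs) {
        return Math.max(0L, repairTimeMs - rowCellTimestampMs);
    }

    public static void main(String[] args) {
        // A row written 5 seconds before the repair has an age of 5000 ms;
        // large ages point at stalled writes, replication lag, or concurrency bugs.
        System.out.println(ageOfUnverifiedRowMs(1_000L, 6_000L)); // 5000
    }
}
```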
[jira] [Updated] (PHOENIX-6213) Extend Cell Tags to Delete object.
[ https://issues.apache.org/jira/browse/PHOENIX-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rushabh Shah updated PHOENIX-6213:
----------------------------------
Description:
We want to track the source of mutations (especially deletes) via Phoenix. We have multiple use cases that do deletes, namely: customers deleting their data, internal processes like GDPR compliance, and Phoenix TTL MR jobs. For every mutation we want to track the source of the operation that initiated the delete.

At my day job, we have a custom Backup/Restore tool. For example: during a GDPR compliance cleanup (say at time t0), we mistakenly deleted some customer data, and it is possible that the customer also deleted some data on their side (at time t1). To recover the mistakenly deleted data, we restore from the backup at time (t0 - 1). By doing this, we also recover the data that the customer intentionally deleted. We need a way for the restore tool to selectively recover data.

To explain via an example: say there are 2 different systems (call them accidental-delete and customer-delete) deleting data from the same table at almost the same time. As the names suggest, customer-delete is the intentional delete and accidental-delete is the delete done by mistake. We have a restore tool that restores all the data between a start time and an end time (start-ts and end-ts). We want to restore the deletes done by the accidental-delete system, but not the deletes done by the customer-delete system. By adding a cell tag to delete markers, we can avoid restoring the data deleted by the customer-delete system.

In my proposal, I want to add cell tags to tombstone delete markers so that the tag is present in the backups. In case we have to restore data, we can restore specific rows depending on the tag present in the cell. We want to leverage the cell tag feature for Delete mutations to store this metadata. Currently the Delete object doesn't support the tag feature.

We also want a solution that can easily be extended to other mutations like Put. Some of the use cases where we could use tags for Put mutations are:
1. Identifying whether a put came from the primary cluster or a replicated cluster, so that we can make the backup tool smarter and not back up the same put twice on the source and replicated clusters.
2. We have a multi-tenancy concept in Phoenix. We want to track whether an upsert (a put operation in HBase) came from a global or a tenant connection.

was:
We want to track the source of mutations (especially deletes) via Phoenix. We have multiple use cases that do deletes, namely: customers deleting their data, internal processes like GDPR compliance, and Phoenix TTL MR jobs. For every mutation we want to track the source of the operation that initiated the delete.

At my day job, we have a custom Backup/Restore tool. For example: during a GDPR compliance cleanup (say at time t0), we mistakenly deleted some customer data, and it is possible that the customer also deleted some data on their side (at time t1). To recover the mistakenly deleted data, we restore from the backup at time (t0 - 1). By doing this, we also recover the data that the customer intentionally deleted. We need a way for the restore tool to selectively recover data.

To explain via an example: say there are 2 different systems (call them accidental-delete and customer-delete) deleting data from the same table at almost the same time. As the names suggest, customer-delete is the intentional delete and accidental-delete is the delete done by mistake. We have a restore tool that restores all the data between a start time and an end time (start-ts and end-ts). We want to restore the deletes done by the accidental-delete system, but not the deletes done by the customer-delete system. By adding a cell tag to delete markers, we can avoid restoring the data deleted by the customer-delete system.

In my proposal, I want to add cell tags to tombstone delete markers so that the tag is present in the backups. In case we have to restore data, we can restore specific rows depending on the tag present in the cell. We want to leverage the cell tag feature for Delete mutations to store this metadata. Currently the Delete object doesn't support the tag feature.

> Extend Cell Tags to Delete object.
> ----------------------------------
>
>                 Key: PHOENIX-6213
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6213
>             Project: Phoenix
>          Issue Type: New Feature
>            Reporter: Rushabh Shah
>            Assignee: Rushabh Shah
>            Priority: Major
>
> We want to track the source of mutations (especially deletes) via Phoenix. We have multiple use cases that do deletes, namely: customers deleting their data, internal processes like GDPR compliance, and Phoenix TTL MR jobs. For every mutation we want to track the source of the operation that initiated the delete.
> At m
[jira] [Moved] (PHOENIX-6213) Extend Cell Tags to Delete object.
[ https://issues.apache.org/jira/browse/PHOENIX-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rushabh Shah moved HBASE-25118 to PHOENIX-6213:
-----------------------------------------------
Fix Version/s: (was: 2.4.0)
               (was: 1.7.0)
               (was: 3.0.0-alpha-1)
          Key: PHOENIX-6213 (was: HBASE-25118)
   Issue Type: New Feature (was: Improvement)
      Project: Phoenix (was: HBase)

> Extend Cell Tags to Delete object.
> ----------------------------------
>
>                 Key: PHOENIX-6213
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6213
>             Project: Phoenix
>          Issue Type: New Feature
>            Reporter: Rushabh Shah
>            Assignee: Rushabh Shah
>            Priority: Major
>
> We want to track the source of mutations (especially deletes) via Phoenix. We have multiple use cases that do deletes, namely: customers deleting their data, internal processes like GDPR compliance, and Phoenix TTL MR jobs. For every mutation we want to track the source of the operation that initiated the delete.
> At my day job, we have a custom Backup/Restore tool.
> For example: during a GDPR compliance cleanup (say at time t0), we mistakenly deleted some customer data, and it is possible that the customer also deleted some data on their side (at time t1). To recover the mistakenly deleted data, we restore from the backup at time (t0 - 1). By doing this, we also recover the data that the customer intentionally deleted.
> We need a way for the restore tool to selectively recover data.
> To explain via an example: say there are 2 different systems (call them accidental-delete and customer-delete) deleting data from the same table at almost the same time. As the names suggest, customer-delete is the intentional delete and accidental-delete is the delete done by mistake. We have a restore tool that restores all the data between a start time and an end time (start-ts and end-ts). We want to restore the deletes done by the accidental-delete system, but not the deletes done by the customer-delete system.
> By adding a cell tag to delete markers, we can avoid restoring the data deleted by the customer-delete system.
> In my proposal, I want to add cell tags to tombstone delete markers so that the tag is present in the backups. In case we have to restore data, we can restore specific rows depending on the tag present in the cell.
> We want to leverage the cell tag feature for Delete mutations to store this metadata. Currently the Delete object doesn't support the tag feature.
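The selective-restore idea behind PHOENIX-6213 can be sketched without HBase: each delete marker carries a source tag, and the restore tool replays only the markers whose tag matches the system being undone. The names accidental-delete and customer-delete come from the issue; DeleteMarker and selectForRestore are hypothetical names for this plain-Java model. The real feature would carry the tag on the HBase cell itself, which is exactly what the Delete object does not yet support.

```java
import java.util.List;
import java.util.stream.Collectors;

// Plain-Java model of tagged delete markers for PHOENIX-6213.
public class TaggedDeletes {
    /** A tombstone annotated with the system that produced it. */
    public record DeleteMarker(String rowKey, long ts, String sourceTag) {}

    /** Restore replays only the markers produced by the given source system. */
    public static List<DeleteMarker> selectForRestore(List<DeleteMarker> markers,
                                                      String source) {
        return markers.stream()
                .filter(m -> m.sourceTag().equals(source))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<DeleteMarker> backedUp = List.of(
            new DeleteMarker("row1", 100L, "accidental-delete"),
            new DeleteMarker("row2", 101L, "customer-delete"));
        // Undo only the accidental deletes; the customer's intentional
        // delete of row2 is left in place.
        List<DeleteMarker> toUndo = selectForRestore(backedUp, "accidental-delete");
        System.out.println(toUndo.size()); // 1
    }
}
```

Without the tag, both markers look identical in the backup and the restore tool has no basis for the distinction, which is the gap the issue describes.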
[jira] [Resolved] (PHOENIX-6208) Backport the assembly changes in PHOENIX-6178 to 4.x

[ https://issues.apache.org/jira/browse/PHOENIX-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth resolved PHOENIX-6208.
----------------------------------
Resolution: Fixed

Committed. Thanks for the review [~elserj].

> Backport the assembly changes in PHOENIX-6178 to 4.x
> -----------------------------------------------------
>
>                 Key: PHOENIX-6208
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6208
>             Project: Phoenix
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 4.16.0
>            Reporter: Istvan Toth
>            Assignee: Istvan Toth
>            Priority: Major
>             Fix For: 4.16.0
[ https://issues.apache.org/jira/browse/PHOENIX-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth resolved PHOENIX-6208. -- Resolution: Fixed Committed. Thanks for the review [~elserj]. > Backport the assembly changes in PHOENIX-6178 to 4.x > > > Key: PHOENIX-6208 > URL: https://issues.apache.org/jira/browse/PHOENIX-6208 > Project: Phoenix > Issue Type: Improvement > Components: core >Affects Versions: 4.16.0 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Fix For: 4.16.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)