[jira] [Updated] (PHOENIX-4900) Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit on for deletes

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-4900:

Fix Version/s: 5.0.1

> Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED 
> exception message to recommend turning autocommit on for deletes
> ---
>
> Key: PHOENIX-4900
> URL: https://issues.apache.org/jira/browse/PHOENIX-4900
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-4900-4.x-HBase-1.4.patch, PHOENIX-4900.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5069) Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5069:

Fix Version/s: 5.0.1

> Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache
> ---
>
> Key: PHOENIX-5069
> URL: https://issues.apache.org/jira/browse/PHOENIX-5069
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
> Fix For: 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5069-4.14.1-hbase-1.3-phoenix-stats.001.patch, 
> PHOENIX-5069-4.14.1-hbase-1.3-phoenix-stats.002.patch, 
> PHOENIX-5069.4.x-HBase-1.3.001.patch, PHOENIX-5069.4.x-HBase-1.4.001.patch, 
> PHOENIX-5069.master.001.patch, PHOENIX-5069.master.002.patch, 
> PHOENIX-5069.master.003.patch, PHOENIX-5069.master.004.patch, 
> PHOENIX-5069.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> The current Phoenix Stats Cache uses a TTL-based eviction policy: a cached 
> entry expires after a given amount of time (900s by default) has passed since 
> the entry was created. This leads to a cache miss the next time the 
> Compiler/Optimizer fetches stats from the cache. As the graph shows, fetching 
> stats from the cache is a blocking operation: on a cache miss there is a round 
> trip over the wire to scan the SYSTEM.STATS table, get the latest stats, 
> rebuild the cache entry and finally return the stats to the Compiler/Optimizer. 
> Whenever there is a cache miss, this blocking call causes a significant 
> performance penalty and produces periodic latency spikes.
> *This Jira suggests using an asynchronous refresh mechanism to provide a 
> non-blocking cache. For details, please see the linked design document below.*
> [~karanmehta93] [~twdsi...@gmail.com] [~dbwong] [~elserj] [~an...@apache.org] 
> [~sergey soldatov] 
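> A minimal sketch of the asynchronous-refresh idea using Guava's LoadingCache (the key/value types and method names below are placeholders, not the Phoenix stats classes): refreshAfterWrite serves the stale value immediately and reloads in the background, so readers never block on a SYSTEM.STATS round trip after the first load.
> {code:java}
> import com.google.common.cache.CacheBuilder;
> import com.google.common.cache.CacheLoader;
> import com.google.common.cache.LoadingCache;
> import com.google.common.util.concurrent.ListenableFuture;
> import com.google.common.util.concurrent.ListenableFutureTask;
> import java.util.concurrent.Executor;
> import java.util.concurrent.Executors;
> import java.util.concurrent.TimeUnit;
>
> final class AsyncStatsCacheSketch {
>     private final Executor refreshExecutor = Executors.newSingleThreadExecutor();
>
>     final LoadingCache<String, Long> statsByTable = CacheBuilder.newBuilder()
>             .refreshAfterWrite(900, TimeUnit.SECONDS)   // same 900s window as today's TTL
>             .build(new CacheLoader<String, Long>() {
>                 @Override
>                 public Long load(String tableName) {
>                     return scanSystemStats(tableName);   // blocks only on the very first load
>                 }
>                 @Override
>                 public ListenableFuture<Long> reload(String tableName, Long oldValue) {
>                     // Refresh in the background; callers keep getting oldValue meanwhile.
>                     ListenableFutureTask<Long> task =
>                             ListenableFutureTask.create(() -> scanSystemStats(tableName));
>                     refreshExecutor.execute(task);
>                     return task;
>                 }
>             });
>
>     private Long scanSystemStats(String tableName) {
>         return 0L; // placeholder for the actual SYSTEM.STATS scan
>     }
> }
> {code}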



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5131) Make spilling to disk for order/group by configurable

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5131:

Fix Version/s: 5.0.1

> Make spilling to disk for order/group by configurable
> -
>
> Key: PHOENIX-5131
> URL: https://issues.apache.org/jira/browse/PHOENIX-5131
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5131-4.x-HBase-1.2.patch, 
> PHOENIX-5131-4.x-HBase-1.3.patch, PHOENIX-5131-4.x-HBase-1.4.patch, 
> PHOENIX-5131-master-v2.patch, PHOENIX-5131-master-v2.patch, 
> PHOENIX-5131-master-v3.patch, PHOENIX-5131-master-v4.patch, 
> PHOENIX-5131-master.patch, PHOENIX-5131-master.patch
>
>
> We've observed large queries doing order/group by causing issues on the 
> regionserver (crashes, long GC pauses, file handle exhaustion, etc.). We 
> should make spilling to disk configurable, and in case it's disabled, fail the 
> query once it hits the spilling limit on any of the region servers. Also, make 
> the spooling threshold a server-side-only property to prevent clients from 
> controlling memory allocation on the RS side. A conceptual sketch follows.
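> A minimal conceptual sketch (class and parameter names are hypothetical, not Phoenix configuration keys) of the proposed behaviour: when spilling is disabled, fail the query as soon as the in-memory order/group-by buffer would exceed the threshold instead of spooling to disk.
> {code:java}
> final class SpoolingGuardSketch {
>     private final long thresholdBytes;
>     private final boolean spillToDiskEnabled;
>     private long bufferedBytes;
>
>     SpoolingGuardSketch(long thresholdBytes, boolean spillToDiskEnabled) {
>         this.thresholdBytes = thresholdBytes;
>         this.spillToDiskEnabled = spillToDiskEnabled;
>     }
>
>     /** Returns true if the caller must spill this row to disk. */
>     boolean account(long rowBytes) {
>         bufferedBytes += rowBytes;
>         if (bufferedBytes <= thresholdBytes) {
>             return false;                       // still fits in memory
>         }
>         if (!spillToDiskEnabled) {
>             // Spilling disabled: fail the query on this region server.
>             throw new IllegalStateException("Order/group-by buffer exceeded "
>                     + thresholdBytes + " bytes and spilling to disk is disabled");
>         }
>         return true;                            // spill from here on
>     }
> }
> {code}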



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4907) IndexScrutinyTool should use empty catalog instead of null

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-4907:

Fix Version/s: 5.0.1

> IndexScrutinyTool should use empty catalog instead of null
> --
>
> Key: PHOENIX-4907
> URL: https://issues.apache.org/jira/browse/PHOENIX-4907
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.15.0, 4.14.1, 5.1.0, 5.0.1
>
> Attachments: PHOENIX-4907.patch
>
>
> Before executing, the index scrutiny tool does a sanity check to make sure 
> that the given data table and index are valid and related to each other. This 
> check uses the JDBC metadata API, and passes in null for the catalog name. 
> Unfortunately, a null entry for catalog causes Phoenix to omit tenant_id from 
> the query against System.Catalog, causing a table scan, which can be lengthy 
> or time out if the server has too many views. 
> It should pass in the empty string for catalog, which will make Phoenix 
> filter on "WHERE tenant_id is NULL", which will avoid the table scan. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5048) Index Rebuilder does not handle INDEX_STATE timestamp check for all index

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5048:

Fix Version/s: 5.0.1

> Index Rebuilder does not handle INDEX_STATE timestamp check for all index
> -
>
> Key: PHOENIX-5048
> URL: https://issues.apache.org/jira/browse/PHOENIX-5048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5048.patch, PHOENIX-5048.v2.patch, 
> PHOENIX-5048.v3.patch, PHOENIX-5048.v4.patch, PHOENIX-5048.v5.patch
>
>
> After the rebuilder finishes a partial index rebuild, it checks whether the 
> index state has been updated after the upper bound of the scan used for the 
> partial rebuild. If so, it fails the rebuild, because an index write failure 
> occurred while the index was being rebuilt.
> {code:java}
> // MetaDataEndpointImpl.java#updateIndexState()
> public void updateIndexState(RpcController controller, UpdateIndexStateRequest request,
>         RpcCallback done) {
>     ...
>     // If the index status has been updated after the upper bound of the scan we use
>     // to partially rebuild the index, then we need to fail the rebuild because an
>     // index write failed before the rebuild was complete.
>     if (actualTimestamp > expectedTimestamp) {
>         builder.setReturnCode(MetaDataProtos.MutationCode.UNALLOWED_TABLE_MUTATION);
>         builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
>         done.run(builder.build());
>         return;
>     }
>     ...
> }{code}
> After the introduction of TrackingParallelWriterIndexCommitter in 
> [PHOENIX-3815|https://issues.apache.org/jira/browse/PHOENIX-3815], we only 
> disable the index that hit the failure. Before that, ParallelWriterIndexCommitter 
> disabled all indexes even if the failure happened on a single index.
> Suppose the data table has 3 indexes and the condition above becomes true for 
> the first index; then we never even check the remaining two indexes.
> {code:java}
> // MetaDataRegionObserver.java#BuildIndexScheduleTask#run()
> for (PTable indexPTable : indexesToPartiallyRebuild) {
>     String indexTableFullName = SchemaUtil.getTableName(
>             indexPTable.getSchemaName().getString(),
>             indexPTable.getTableName().getString());
>     if (scanEndTime == latestUpperBoundTimestamp) {
>         IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.ACTIVE, 0L,
>                 latestUpperBoundTimestamp);
>         batchExecutedPerTableMap.remove(dataPTable.getName());
>         LOG.info("Making Index:" + indexPTable.getTableName() + " active after rebuilding");
>     } else {
>         // Increment timestamp so that client sees updated disable timestamp
>         IndexUtil.updateIndexState(conn, indexTableFullName, indexPTable.getIndexState(),
>                 scanEndTime * signOfDisableTimeStamp, latestUpperBoundTimestamp);
>         Long noOfBatches = batchExecutedPerTableMap.get(dataPTable.getName());
>         if (noOfBatches == null) {
>             noOfBatches = 0L;
>         }
>         batchExecutedPerTableMap.put(dataPTable.getName(), ++noOfBatches);
>         LOG.info("During Round-robin build: Successfully updated index disabled timestamp for "
>                 + indexTableFullName + " to " + scanEndTime);
>     }
> }
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5122:

Fix Version/s: 5.0.1

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 4.13.0, 4.13.1, 4.15.0, 4.14.1, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5122-4.x-HBase-1.3.patch, PHOENIX-5122.patch, 
> Screen Shot 2019-03-04 at 6.17.42 PM.png, Screen Shot 2019-03-04 at 6.21.10 
> PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> {noformat}
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> +--+--+
> {color:#FF0000}+*No rows selected (0.033 seconds)*+{color}
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  {noformat}
> *4.14.1 client -> 4.14.1 server* 
> {noformat}
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5101) ScanningResultIterator getScanMetrics throws NPE

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5101:

Fix Version/s: 5.0.1

> ScanningResultIterator getScanMetrics throws NPE
> 
>
> Key: PHOENIX-5101
> URL: https://issues.apache.org/jira/browse/PHOENIX-5101
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Reid Chan
>Assignee: Karan Mehta
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5101.414-HBase-1.4.001.patch, PHOENIX-5101.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.getScanMetrics(ScanningResultIterator.java:92)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:79)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:144)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1439)
>   at 
> org.apache.phoenix.iterate.MergeSortResultIterator.close(MergeSortResultIterator.java:44)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:176)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:807)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:148)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:101)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:81)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.prepareAndExecute(JdbcMeta.java:759)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:206)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:927)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:879)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback$1.run(QueryServer.java:500)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:497)
>   at 
> org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:884)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5213:

Fix Version/s: 5.0.1

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 5.0.1
>
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v4.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages.
> Add a new "embedded" classifier to phoenix-client that does the following:
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice not to impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for the embedded phoenix-client.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5)  Rename the jar to match the final name in the repository: 
> phoenix-client-{version}.jar. A symlink phoenix-{version}-client.jar is kept 
> to maintain backwards compatibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5126) RegionScanner leak leading to store files not getting cleared

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5126:

Fix Version/s: 5.0.1

> RegionScanner leak leading to store files not getting cleared
> -
>
> Key: PHOENIX-5126
> URL: https://issues.apache.org/jira/browse/PHOENIX-5126
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5126-master.patch
>
>
> Having a RegionScanner open indefinitely (due to any error condition before 
> the close) leads to store files not getting cleared after compaction, since 
> the still-open scanner keeps the store files referenced. Any subsequently 
> flushed files for the region also get opened by the scanner and won't be 
> cleared.
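> A minimal sketch of the defensive pattern this implies (coprocessor-side, illustrative class only, not the actual fix): always close a RegionScanner, even on error paths, so compacted store files can be released and archived.
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.regionserver.Region;
> import org.apache.hadoop.hbase.regionserver.RegionScanner;
>
> final class ScannerUsageSketch {
>     static void scanOnce(Region region, Scan scan) throws IOException {
>         RegionScanner scanner = region.getScanner(scan);
>         try {
>             // ... iterate with scanner.next(results) ...
>         } finally {
>             // Without this, the scanner keeps referencing compacted store files
>             // and any later flushes of the region, so they are never cleaned up.
>             scanner.close();
>         }
>     }
> }
> {code}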



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5207) Create index if not exists fails incorrectly if table has 'maxIndexesPerTable' indexes already

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5207:

Fix Version/s: 5.0.1

> Create index if not exists fails incorrectly if table has 
> 'maxIndexesPerTable' indexes already 
> ---
>
> Key: PHOENIX-5207
> URL: https://issues.apache.org/jira/browse/PHOENIX-5207
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5207-4.14-HBase-1.4.patch, 
> PHOENIX-5207-master.patch
>
>
> If a table already has 'maxIndexesPerTable' indexes and we try to create 
> another one that already exists, we should not throw 'ERROR 1047 (43A04): Too 
> many indexes have already been created', since we've put 'if not exists' in 
> the statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5184) HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5184:

Fix Version/s: 5.0.1

> HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and 
> PhoenixConfigurationUtil
> -
>
> Key: PHOENIX-5184
> URL: https://issues.apache.org/jira/browse/PHOENIX-5184
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5184-4.x-HBase-1.3-v1.patch, 
> PHOENIX-5184-4.x-HBase-1.3.patch, PHOENIX-5184-v1.patch, PHOENIX-5184.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I was debugging a connection leak issue and ran into a few areas where there 
> are connection leaks. I decided to take a broader look overall and see if 
> there were other places where we leak connections and found some candidates. 
> This is by no means an exhaustive search for connection leaks.
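> A minimal sketch of the leak-free pattern involved (plain JDBC, not the patched Phoenix classes): try-with-resources guarantees the connection, statement and result set are closed on every path, including exceptions.
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> final class NoLeakSketch {
>     static long countRows(String jdbcUrl, String table) throws SQLException {
>         try (Connection conn = DriverManager.getConnection(jdbcUrl);
>              Statement stmt = conn.createStatement();
>              ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
>             return rs.next() ? rs.getLong(1) : 0L;
>         } // all three are closed here even if executeQuery throws
>     }
> }
> {code}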



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5217) Incorrect result for COUNT DISTINCT limit

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5217:

Fix Version/s: 5.0.1

> Incorrect result for COUNT DISTINCT limit 
> --
>
> Key: PHOENIX-5217
> URL: https://issues.apache.org/jira/browse/PHOENIX-5217
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: 4.14.1: incorrect
> 4.6: correct.
>  
>Reporter: Chen Feng
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5217-4.14-HBase-1.4.patch, 
> PHOENIX-5217_v1-4.x-HBase-1.4.patch, PHOENIX-5217_v2-master.patch
>
>
> For table t1(pk1, col1, CONSTRAINT(pk1)):
> upsert into "t1" values (1, 1);
> upsert into "t1" values (2, 2);
> sql A: select count("pk1") from "t1" limit 1, returns 2 [correct]
> sql B: select count(distinct("pk1")) from "t1" limit 1, returns 1 [incorrect]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Fix Version/s: 5.0.1

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5169-master-v2.patch, 
> PHOENIX-5169-master-v3.patch, PHOENIX-5169-master-v4.patch, 
> PHOENIX-5169-master.patch, image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even 
> when the query logger level is OFF, which has a significant throughput impact 
> under multiple threads.
> The jstack below was taken under concurrent queries:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  
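> A minimal, self-contained sketch of the fix direction (names are illustrative, not the actual PhoenixStatement/QueryLogger code): short-circuit to a shared no-op logger when the level is OFF so nothing is allocated per statement.
> {code:java}
> enum LogLevel { OFF, INFO, DEBUG, TRACE }
>
> interface QueryLogger {
>     void log(String sql);
>     QueryLogger NO_OP = sql -> { };   // shared, allocation-free instance
> }
>
> final class QueryLoggers {
>     static QueryLogger create(LogLevel level) {
>         // Do not build a per-statement logger (UUID, context, etc.) when logging is off.
>         if (level == LogLevel.OFF) {
>             return QueryLogger.NO_OP;
>         }
>         return sql -> System.out.println("[" + level + "] " + sql);  // stand-in sink
>     }
> }
> {code}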



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5094:

Fix Version/s: 5.0.1

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-4.14-HBase-1.3.05.patch, 
> PHOENIX-5094-master.01.patch, PHOENIX-5094-master.02.patch, 
> PHOENIX-5094-master.03.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client keeps maintaining it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client transitions 
> the index state from INACTIVE to PENDING_DISABLE.
> If the client then succeeds in writing the mutation on a subsequent retry, it 
> transitions the index state again, from PENDING_DISABLE to ACTIVE.
> This scenario leaves part of the index out of sync with the data table.
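> A minimal sketch (illustrative names, not the actual client patch) of the guard this implies: a successful retry of a PENDING_DISABLE index should only move back to ACTIVE once the rebuilder has caught up; otherwise it should return to INACTIVE.
> {code:java}
> enum IndexState { ACTIVE, INACTIVE, PENDING_DISABLE, DISABLE }
>
> final class IndexStateGuardSketch {
>     /** State the client may move to after its retried index write succeeds. */
>     static IndexState afterSuccessfulRetry(IndexState current, boolean rebuildCaughtUp) {
>         if (current == IndexState.PENDING_DISABLE && !rebuildCaughtUp) {
>             // Keep the index INACTIVE so the rebuilder can bring it back in sync first.
>             return IndexState.INACTIVE;
>         }
>         return IndexState.ACTIVE;
>     }
> }
> {code}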



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5188) IndexedKeyValue should populate KeyValue fields

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5188:

Fix Version/s: 5.0.1

> IndexedKeyValue should populate KeyValue fields
> ---
>
> Key: PHOENIX-5188
> URL: https://issues.apache.org/jira/browse/PHOENIX-5188
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5188-4.x-HBase-1.4..addendum.patch, 
> PHOENIX-5188-4.x-HBase-1.4.patch, PHOENIX-5188.patch
>
>
> IndexedKeyValue subclasses the HBase KeyValue class, which has three primary 
> fields: bytes, offset, and length. These fields aren't populated by 
> IndexedKeyValue because it's concerned with index mutations, and has its own 
> fields that its own methods use. 
> However, KeyValue and its Cell interface have quite a few methods that assume 
> these fields are populated, and the HBase-level factory methods generally 
> ensure they're populated. Phoenix code should do the same, to maintain the 
> polymorphic contract. This is important in cases like custom 
> ReplicationEndpoints where HBase-level code may be iterating over WALEdits 
> that contain both KeyValues and IndexedKeyValues and may need to interrogate 
> their contents. 
> Since the index mutation has a row key, this is straightforward. 
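> A minimal sketch, not the actual patch, of the direction described above: build the backing KeyValue bytes from the index mutation's row key (Phoenix already uses WALEdit.METAFAMILY as the family for these marker cells), so that Cell accessors such as getRowArray()/getRowOffset() work for code iterating over WALEdits.
> {code:java}
> // Assumes HBase 1.x-style classes; package names may differ by version.
> import org.apache.hadoop.hbase.KeyValue;
> import org.apache.hadoop.hbase.client.Mutation;
> import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
>
> final class IndexedKeyValueSketch {
>     static KeyValue backingKeyValueFor(Mutation indexMutation) {
>         // Populate row and family so bytes/offset/length are valid for Cell readers;
>         // the index-specific payload stays in the subclass's own fields.
>         return new KeyValue(indexMutation.getRow(), WALEdit.METAFAMILY, new byte[0]);
>     }
> }
> {code}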



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5018) Index mutations created by UPSERT SELECT will have wrong timestamps

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5018:

Fix Version/s: 5.0.1

> Index mutations created by UPSERT SELECT will have wrong timestamps
> ---
>
> Key: PHOENIX-5018
> URL: https://issues.apache.org/jira/browse/PHOENIX-5018
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5018.4.x-HBase-1.3.001.patch, 
> PHOENIX-5018.4.x-HBase-1.3.002.patch, PHOENIX-5018.4.x-HBase-1.4.001.patch, 
> PHOENIX-5018.4.x-HBase-1.4.002.patch, PHOENIX-5018.master.001.patch, 
> PHOENIX-5018.master.002.patch, PHOENIX-5018.master.003.patch, 
> PHOENIX-5018.master.004.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When doing a full rebuild (or initial async build) of a local or global index 
> using IndexTool and PhoenixIndexImportDirectMapper, or doing a synchronous 
> initial build of a global index using the index create DDL, we generate the 
> index mutations by using an UPSERT SELECT query from the base table to the 
> index.
> The timestamps of these mutations use the default HBase behavior, which is to 
> take the current wall clock time. However, the timestamp of an index KeyValue 
> should use the timestamp of the originating KeyValue in the base table.
> Having base table and index timestamps out of sync can cause all sorts of 
> weird side effects, for example when the base table has data with an expired 
> TTL that isn't expired in the index yet. Also, inserting old mutations with 
> new timestamps may overwrite data that was newly written by the regular write 
> path during the index build, which would lead to data loss and 
> inconsistency issues.
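> A minimal sketch of the idea (helper name is illustrative, not the IndexTool/mapper code): stamp the index Put with the timestamp of the originating base-table cell instead of the current wall clock.
> {code:java}
> import org.apache.hadoop.hbase.Cell;
> import org.apache.hadoop.hbase.client.Put;
>
> final class IndexTimestampSketch {
>     static Put indexPutFor(byte[] indexRowKey, Cell baseTableCell,
>                            byte[] family, byte[] qualifier, byte[] value) {
>         long baseTs = baseTableCell.getTimestamp();      // timestamp of the source KeyValue
>         Put put = new Put(indexRowKey, baseTs);          // row-level timestamp
>         put.addColumn(family, qualifier, baseTs, value); // keep the cell-level timestamp in sync
>         return put;
>     }
> }
> {code}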



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5226:

Fix Version/s: 5.0.1

>  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -
>
> Key: PHOENIX-5226
> URL: https://issues.apache.org/jira/browse/PHOENIX-5226
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 4.15.0, 5.1.0, 5.0.1
>
> Attachments: PHOENIX-5226-master-v2.patch, 
> PHOENIX-5226-master-v3.patch, PHOENIX-5226-master.patch, Screen Shot 
> 2019-04-01 at 16.09.23.png, Screen Shot 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties should not be derived 
> from the base table for a view table. VIEW_MODIFIED_PROPERTY_BYTES is used as 
> the tag bytes, but its format is incorrect; the below is a reference from the 
> KeyValue interface:
> {quote}KeyValue can optionally contain Tags. When it contains tags, it is 
> added in the byte array after
>  * the value part. The format for this part is: <tagslength><tagsbytes>.
>  * tagslength maximum is Short.MAX_SIZE. The tagsbytes
>  * contain one or more tags where as each tag is of the form
>  * <taglength><tagtype><tagbytes>. tagtype is one byte
>  * and taglength maximum is Short.MAX_SIZE and it includes 1 byte type
>  * length and actual tag bytes length.{quote}
> The CATALOG table will be badly affected. Some errors will be caused when 
> reading the CATALOG table.
>  
> {code:java}
> 0: jdbc:phoenix:thin:url=http://localhost> drop view "test_2"; Error: Error 
> -1 (0) : Error while executing SQL "drop view "test_2"": Remote driver 
> error: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: test_2: 4 at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114) at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2729)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17078)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8210) 
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2475)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2457)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:136) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) 
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 4 at 
> org.apache.hadoop.hbase.ArrayBackedTag.<init>(ArrayBackedTag.java:97) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1107) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1094) at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.isCellTTLExpired(ScanQueryMatcher.java:153)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:198)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher.match(NormalUserScanQueryMatcher.java:64)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:578)
> {code}
>  
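> A minimal hand-rolled sketch of the tag layout quoted in the KeyValue excerpt above (plain Java, not the HBase API): one tag entry inside the tagsbytes section is <taglength: 2 bytes><tagtype: 1 byte><tagbytes>, where taglength covers the 1-byte type plus the value bytes.
> {code:java}
> import java.nio.ByteBuffer;
>
> final class TagEncodingSketch {
>     static byte[] encodeSingleTag(byte tagType, byte[] tagValue) {
>         short tagLength = (short) (1 + tagValue.length);   // 1 byte type + value bytes
>         ByteBuffer buf = ByteBuffer.allocate(2 + tagLength);
>         buf.putShort(tagLength);                           // <taglength>
>         buf.put(tagType);                                  // <tagtype>
>         buf.put(tagValue);                                 // <tagbytes>
>         return buf.array();                                // one entry of the tagsbytes section
>     }
> }
> {code}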



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5138) ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones created before it

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5138:

Fix Version/s: 5.0.1

> ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones 
> created before it
> --
>
> Key: PHOENIX-5138
> URL: https://issues.apache.org/jira/browse/PHOENIX-5138
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 5.0.1
>
> Attachments: PHOENIX-5138-v2.patch, PHOENIX-5138-v3.patch, 
> PHOENIX-5138-v4.patch, PHOENIX-5138.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> PHOENIX-5132 changed the ViewIndexId generation logic to use one sequence per 
> physical view index table, whereas before it had been tenant + physical 
> table. This removed the possibility of a tenant view index and a global view 
> index having colliding ViewIndexIds.
> However, existing Phoenix environments may have already created tenant-owned 
> view index ids using the old sequence, and under PHOENIX-5132, if they create 
> another one, its ViewIndexId will go back to MIN_VALUE, which could collide 
> with an existing view index id. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4759) During restart RS that hosts SYSTEM.CATALOG table may get stuck

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-4759:

Fix Version/s: 5.0.1

> During restart RS that hosts SYSTEM.CATALOG table may get stuck
> ---
>
> Key: PHOENIX-4759
> URL: https://issues.apache.org/jira/browse/PHOENIX-4759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.14.0, 5.0.0, 5.0.1
>
> Attachments: PHOENIX-4759-1.patch, PHOENIX-4759-2.master.patch
>
>
> Sometimes when a cluster has restarted, the regions that belong to 
> SYSTEM.CATALOG and other system tables on the same RS may be stuck in RIT 
> (Region in Transition). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5055:

Fix Version/s: 5.0.1

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0, 4.14.2, 5.0.1
>
> Attachments: ConcurrentTest.java, 
> PHOENIX-5055-4.x-HBase-1.4-v2.patch, PHOENIX-5055-4.x-HBase-1.4-v3.patch, 
> PHOENIX-5055-4.x-HBase-1.4-v4.patch, PHOENIX-5055-v4.x-HBase-1.4.patch
>
>
> To get more performance, we split the list of mutations into multiple 
> batches in MutationState. A single upsert SQL statement with some null values 
> produces two kinds of KeyValues (Put and DeleteColumn); these KeyValues should 
> have the same timestamp so that the operation on the corresponding row key 
> stays atomic.
> [^ConcurrentTest.java] produced random upsert/delete SQL statements and 
> executed them concurrently; some SQL snippets follow:
> {code:java}
> 1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES ('3826','2563','3052','3170','3767');
> 1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES ('2563','4926','3526','678',null,null,'1617');
> 2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES ('1052','2563','1120','2314','1456',null,null);
> 2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES ('1922','146',null,'469','2563');
> 2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563';
> {code}
> Found incorrect index data in the index tables via sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
> Debugging the batched mutations on the server side showed that the 
> DeleteColumns and Puts of a single upsert were split into different batches, 
> and the DeleteFamily was executed by another thread; as a result the 
> DeleteColumns' timestamp is larger than the DeleteFamily's under multiple 
> threads.
> !https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate("CREATE TABLE " + tableName + " ("
>         + "A VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR,"
>         + "D VARCHAR) COLUMN_ENCODED_BYTES = 0");
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on "
>         + tableName + " (C) INCLUDE(D)");
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName
>         + "(A,B,C,D) VALUES ('A2','B2','C2','D2')");
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName
>         + "(A,B,C,D) VALUES ('A3','B3', 'C3', null)");
> {code}
> Dump of the IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value=
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state:
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value=
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5266) Client can only write on Index Table and skip data table if failure happens because of region split/move etc

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5266:

Fix Version/s: 5.0.1

> Client can only write on Index Table and skip data table if failure happens 
> because of region split/move etc
> 
>
> Key: PHOENIX-5266
> URL: https://issues.apache.org/jira/browse/PHOENIX-5266
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1, 5.1.0, 4.14.2
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5266-4.x-HBase-1.3.01.patch, 
> PHOENIX-5266-4.x-HBase-1.3.02.patch, PHOENIX-5266.01.patch, 
> PHOENIX-5266.patch, PHOENIX-5266.patch
>
>
> With the Phoenix 4.14.1 client, there is a scenario where the client skips 
> the data table write but performs a successful index table write. This should 
> be treated as a data loss scenario.
>  
> Relevant code paths:
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L994-L1043]
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1089-L1109]
>  
> Here is what happens:
>  * Consider the following assumptions for the scenario: 
>  ** max rows in a single batch = 100
>  ** max size of a batch = 2 MB
>  * When the client hits SQLException code 1121, it sets 
> shouldRetryIndexedMutation=true.
>  * When the client sends a batch of 100 rows as configured but the batch size 
> is >2 MB, MutationState.java#991 splits this 100-row batch into multiple 
> smaller batches that are each <2 MB.
>  ** MutationState.java#991: 
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L991]
>  * Suppose there are 5 batches of 20 rows and the client hits SQLException 
> code 1121 on the 2nd batch; it then sets shouldRetryIndexedMutation=true and 
> retries all 5 batches with only index updates. This results in rows missing 
> from the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5132) View indexes with different owners but of the same base table can be assigned same ViewIndexId

2019-05-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5132:

Fix Version/s: 5.0.1

> View indexes with different owners but of the same base table can be assigned 
> same ViewIndexId
> --
>
> Key: PHOENIX-5132
> URL: https://issues.apache.org/jira/browse/PHOENIX-5132
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Critical
> Fix For: 4.15.0, 5.1.0, 5.0.1
>
> Attachments: PHOENIX-5132-4.x-HBase-1.4.patch, 
> PHOENIX-5132-4.x-HBase-1.4.v2.patch, PHOENIX-5132-repro.patch
>
>
> All indexes on views for a particular base table are stored in the same 
> physical HBase table. Phoenix distinguishes them by prepending each row key 
> with an encoded short or long integer called a ViewIndexId. 
> The ViewIndexId is generated by using a sequence to guarantee that each view 
> index id is unique. Unfortunately, the sequence used follows a convention of 
> [SaltByte, Tenant, Schema, BaseTable] for its key, which means that there's a 
> separate sequence for each tenant that owns an index in the view index table. 
> (See MetaDataUtil.getViewIndexSequenceKey) Since all the sequences start at 
> the same value, collisions are not only possible but likely. 
> I've written a test that confirms the ViewIndexId collision. This means it's 
> very likely that query results using one view index could mistakenly include 
> rows from another index, but I haven't confirmed this. 
> All view indexes for a base table, regardless of whether globally or 
> tenant-owned, should use the same sequence. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5268) HBase 2.0.5 compatibility

2019-05-07 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5268:

Comment: was deleted

(was: {quote}
KeyValue is a class marked as Private so by contract is not supposed to be 
extended and we can break you at any time. Use Cell, which is a supported 
Public interface.
{quote}
[~apurtell] IndexedKeyValue is used on the server side, so we must implement 
ExtendedCell for the server-side Cell, but ExtendedCell is also a Private 
class; it seems that using only Cell is not enough, right?)

> HBase 2.0.5 compatibility
> -
>
> Key: PHOENIX-5268
> URL: https://issues.apache.org/jira/browse/PHOENIX-5268
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: jaanai
>Priority: Blocker
> Fix For: 5.1.0
>
>
> HBASE-21754 introduces a new abstract method to RpcScheduler: 
> {{getMetaPriorityQueueLength()}}
> This means that Phoenix does not build against HBase 2.0.5.
> FYI [~twdsi...@gmail.com], [~Jaanai].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5268) HBase 2.0.5 compatibility

2019-05-07 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai reassigned PHOENIX-5268:
---

Assignee: jaanai

> HBase 2.0.5 compatibility
> -
>
> Key: PHOENIX-5268
> URL: https://issues.apache.org/jira/browse/PHOENIX-5268
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: jaanai
>Priority: Blocker
> Fix For: 5.1.0
>
>
> HBASE-21754 introduces a new abstract method to RpcScheduler: 
> {{getMetaPriorityQueueLength()}}
> This means that Phoenix does not build against HBase 2.0.5.
> FYI [~twdsi...@gmail.com], [~Jaanai].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5250) The accumulated wal files can't be cleaned

2019-04-29 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5250:

Affects Version/s: (was: 5.0.0)

> The accumulated wal files can't be cleaned
> --
>
> Key: PHOENIX-5250
> URL: https://issues.apache.org/jira/browse/PHOENIX-5250
> Project: Phoenix
>  Issue Type: Bug
>Reporter: jaanai
>Priority: Blocker
> Attachments: image-2019-04-19-21-31-27-888.png
>
>
> Because of the modification in HBASE-20781, the faked WALEdits are no longer 
> filtered; all WALEdits are saved into the Memstore with inMemstore set to 
> true (via the WAL->append method).
> !image-2019-04-19-21-31-27-888.png|width=755,height=310!
> The family array of IndexedKeyValue is WALEdit.METAFAMILY, which is used to 
> mark a fake WALEdit, and it is put into the Memstore together with the 
> WALEdits of the data table while syncing the global index.
> WAL files can't be cleaned except by restarting the RS, and the accumulated 
> WAL files reduce the free disk space.
> !https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5250) The accumulated wal files can't be cleaned

2019-04-19 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5250:

Summary: The accumulated wal files can't be cleaned  (was: The data of WAL 
files accumulates and can't be cleaned)

> The accumulated wal files can't be cleaned
> --
>
> Key: PHOENIX-5250
> URL: https://issues.apache.org/jira/browse/PHOENIX-5250
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Priority: Blocker
> Attachments: image-2019-04-19-21-31-27-888.png
>
>
> Because of the modification in HBASE-20781, the faked WALEdits are no longer 
> filtered; all WALEdits are saved into the Memstore with inMemstore set to 
> true (via the WAL->append method).
> !image-2019-04-19-21-31-27-888.png|width=755,height=310!
> The family array of IndexedKeyValue is WALEdit.METAFAMILY, which is used to 
> mark a fake WALEdit, and it is put into the Memstore together with the 
> WALEdits of the data table while syncing the global index.
> WAL files can't be cleaned except by restarting the RS, and the accumulated 
> WAL files reduce the free disk space.
> !https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5250) The data of WAL files accumulates and can't be cleaned

2019-04-19 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5250:

Description: 
Because of the modification in HBASE-20781, the faked WALEdits are no longer 
filtered; all WALEdits are saved into the Memstore with inMemstore set to 
true (via the WAL->append method).

!image-2019-04-19-21-31-27-888.png|width=755,height=310!

The family array of IndexedKeyValue is WALEdit.METAFAMILY, which is used to 
mark a fake WALEdit, and it is put into the Memstore together with the 
WALEdits of the data table while syncing the global index.

WAL files can't be cleaned except by restarting the RS, and the accumulated 
WAL files reduce the free disk space.

!https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!

 

 

 

  was:
Because of the modification in HBASE-20781, the faked WALEdits are no longer 
filtered; all WALEdits are saved into the Memstore with inMemstore set to 
true (via the WAL->append method).

!image-2019-04-19-21-31-27-888.png|width=755,height=310!

The WAL files can't be cleaned except by restarting the RS, and the 
accumulated WAL files reduce the free disk space.

!https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!

 

 

 


> The data of WAL files accumulates and can't be cleaned
> --
>
> Key: PHOENIX-5250
> URL: https://issues.apache.org/jira/browse/PHOENIX-5250
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Priority: Blocker
> Attachments: image-2019-04-19-21-31-27-888.png
>
>
> Because of the modification in HBASE-20781, the faked WALEdits are no longer 
> filtered; all WALEdits are saved into the Memstore with inMemstore set to 
> true (via the WAL->append method).
> !image-2019-04-19-21-31-27-888.png|width=755,height=310!
> The family array of IndexedKeyValue is WALEdit.METAFAMILY, which is used to 
> mark a fake WALEdit, and it is put into the Memstore together with the 
> WALEdits of the data table while syncing the global index.
> WAL files can't be cleaned except by restarting the RS, and the accumulated 
> WAL files reduce the free disk space.
> !https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5250) The data of WAL files accumulates and can't be cleaned

2019-04-19 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5250:

Description: 
Because of the modification in HBASE-20781, the faked WALEdits are no longer 
filtered; all WALEdits are saved into the Memstore with inMemstore set to 
true (via the WAL->append method).

!image-2019-04-19-21-31-27-888.png|width=755,height=310!

The WAL files can't be cleaned except by restarting the RS, and the 
accumulated WAL files reduce the free disk space.

!https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!

 

 

 

  was:
Because of the modification in HBASE-20781, the faked WALEdits are no longer 
filtered; all WALEdits are saved into the Memstore with inMemstore set to 
true (via the WAL->append method).

!image-2019-04-19-21-31-27-888.png|width=885,height=360!

The WAL files can't be cleaned except by restarting the RS, and the 
accumulated WAL files reduce the free disk space.

!https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!

 

 

 


> The data of WAL files accumulates and can't be cleaned
> --
>
> Key: PHOENIX-5250
> URL: https://issues.apache.org/jira/browse/PHOENIX-5250
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Priority: Blocker
> Attachments: image-2019-04-19-21-31-27-888.png
>
>
> Because of the modification in HBASE-20781, the faked WALEdits are no longer 
> filtered; all WALEdits are saved into the Memstore with inMemstore set to 
> true (via the WAL->append method).
> !image-2019-04-19-21-31-27-888.png|width=755,height=310!
> The WAL files can't be cleaned except by restarting the RS, and the 
> accumulated WAL files reduce the free disk space.
> !https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5250) The data of WAL files accumulates and can't be cleaned

2019-04-19 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5250:

Description: 
Because of the modification in HBASE-20781, the faked WALEdits are no longer 
filtered; all WALEdits are saved into the Memstore with inMemstore set to 
true (via the WAL->append method).

!image-2019-04-19-21-31-27-888.png|width=885,height=360!

The WAL files can't be cleaned except by restarting the RS, and the 
accumulated WAL files reduce the free disk space.

!https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!

 

 

 

  was:
Because of the modification in HBASE-20781, the faked WALEdits are no longer 
filtered; all WALEdits are saved into the Memstore with inMemstore set to 
true (via the WAL->append method).

!image-2019-04-19-21-31-27-888.png|width=785,height=260!

The WAL files can't be cleaned except by restarting the RS, and the 
accumulated WAL files reduce the free disk space.

!https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!

 

 

 


> The data of WAL files accumulates and can't be cleaned
> --
>
> Key: PHOENIX-5250
> URL: https://issues.apache.org/jira/browse/PHOENIX-5250
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Priority: Blocker
> Attachments: image-2019-04-19-21-31-27-888.png
>
>
> Because of the modification in HBASE-20781, the faked WALEdits are no longer 
> filtered; all WALEdits are saved into the Memstore with inMemstore set to 
> true (via the WAL->append method).
> !image-2019-04-19-21-31-27-888.png|width=885,height=360!
> The WAL files can't be cleaned except by restarting the RS, and the 
> accumulated WAL files reduce the free disk space.
> !https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5250) The data of WAL files accumulates and can't be cleaned

2019-04-19 Thread jaanai (JIRA)
jaanai created PHOENIX-5250:
---

 Summary: The data of WAL files accumulates and can't be cleaned
 Key: PHOENIX-5250
 URL: https://issues.apache.org/jira/browse/PHOENIX-5250
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: jaanai
 Attachments: image-2019-04-19-21-31-27-888.png

Because of the modification in HBASE-20781, the faked WALEdits are no longer 
filtered; all WALEdits are saved into the Memstore with inMemstore set to 
true (via the WAL->append method).

!image-2019-04-19-21-31-27-888.png|width=785,height=260!

The WAL files can't be cleaned except by restarting the RS, and the 
accumulated WAL files reduce the free disk space.

!https://gw.alicdn.com/tfscom/TB1n3cDQVzqK1RjSZFCXXbbxVXa.png|width=422,height=159!

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-04-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai resolved PHOENIX-5226.
-
Resolution: Fixed

>  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -
>
> Key: PHOENIX-5226
> URL: https://issues.apache.org/jira/browse/PHOENIX-5226
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5226-master-v2.patch, 
> PHOENIX-5226-master-v3.patch, PHOENIX-5226-master.patch, Screen Shot 
> 2019-04-01 at 16.09.23.png, Screen Shot 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties of a view table should not 
> be derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
> bytes, but its format is incorrect; the following is a reference from the 
> KeyValue interface:
> {quote}KeyValue can optionally contain Tags. When it contains tags, it is added 
> in the byte array after the value part. The format for this part is: 
> <tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
> contain one or more tags where as each tag is of the form 
> <taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
> Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes 
> length.{quote}
> The CATALOG table will be badly affected; errors will occur when reading 
> it.
>  
> {code:java}
> 0: jdbc:phoenix:thin:url=http://localhost> drop view "test_2"; Error: Error 
> -1 (0) : Error while executing SQL "drop view "test_2"": Remote driver 
> error: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: test_2: 4 at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114) at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2729)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17078)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8210) 
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2475)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2457)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:136) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) 
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 4 at 
> org.apache.hadoop.hbase.ArrayBackedTag.(ArrayBackedTag.java:97) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1107) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1094) at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.isCellTTLExpired(ScanQueryMatcher.java:153)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:198)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher.match(NormalUserScanQueryMatcher.java:64)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:578)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-04-09 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5226:

Attachment: PHOENIX-5226-master-v3.patch

>  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -
>
> Key: PHOENIX-5226
> URL: https://issues.apache.org/jira/browse/PHOENIX-5226
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5226-master-v2.patch, 
> PHOENIX-5226-master-v3.patch, PHOENIX-5226-master.patch, Screen Shot 
> 2019-04-01 at 16.09.23.png, Screen Shot 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties of a view table should not 
> be derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
> bytes, but its format is incorrect; the following is a reference from the 
> KeyValue interface:
> {quote}KeyValue can optionally contain Tags. When it contains tags, it is added 
> in the byte array after the value part. The format for this part is: 
> <tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
> contain one or more tags where as each tag is of the form 
> <taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
> Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes 
> length.{quote}
> The CATALOG table will be badly affected; errors will occur when reading 
> it.
>  
> {code:java}
> 0: jdbc:phoenix:thin:url=http://localhost> drop view "test_2"; Error: Error 
> -1 (0) : Error while executing SQL "drop view "test_2"": Remote driver 
> error: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: test_2: 4 at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114) at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2729)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17078)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8210) 
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2475)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2457)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:136) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) 
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 4 at 
> org.apache.hadoop.hbase.ArrayBackedTag.(ArrayBackedTag.java:97) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1107) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1094) at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.isCellTTLExpired(ScanQueryMatcher.java:153)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:198)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher.match(NormalUserScanQueryMatcher.java:64)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:578)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-04-07 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5226:

Attachment: PHOENIX-5226-master-v2.patch

>  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -
>
> Key: PHOENIX-5226
> URL: https://issues.apache.org/jira/browse/PHOENIX-5226
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5226-master-v2.patch, PHOENIX-5226-master.patch, 
> Screen Shot 2019-04-01 at 16.09.23.png, Screen Shot 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties of a view table should not 
> be derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
> bytes, but its format is incorrect; the following is a reference from the 
> KeyValue interface:
> {quote}KeyValue can optionally contain Tags. When it contains tags, it is added 
> in the byte array after the value part. The format for this part is: 
> <tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
> contain one or more tags where as each tag is of the form 
> <taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
> Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes 
> length.{quote}
> The CATALOG table will be badly affected; errors will occur when reading 
> it.
>  
> {code:java}
> 0: jdbc:phoenix:thin:url=http://localhost> drop view "test_2"; Error: Error 
> -1 (0) : Error while executing SQL "drop view "test_2"": Remote driver 
> error: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: test_2: 4 at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114) at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2729)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17078)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8210) 
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2475)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2457)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:136) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) 
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 4 at 
> org.apache.hadoop.hbase.ArrayBackedTag.(ArrayBackedTag.java:97) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1107) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1094) at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.isCellTTLExpired(ScanQueryMatcher.java:153)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:198)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher.match(NormalUserScanQueryMatcher.java:64)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:578)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-04-03 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5226:

Attachment: PHOENIX-5226-master.patch

>  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -
>
> Key: PHOENIX-5226
> URL: https://issues.apache.org/jira/browse/PHOENIX-5226
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5226-master.patch, Screen Shot 2019-04-01 at 
> 16.09.23.png, Screen Shot 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties of a view table should not 
> be derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
> bytes, but its format is incorrect; the following is a reference from the 
> KeyValue interface:
> {quote}KeyValue can optionally contain Tags. When it contains tags, it is added 
> in the byte array after the value part. The format for this part is: 
> <tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
> contain one or more tags where as each tag is of the form 
> <taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
> Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes 
> length.{quote}
> The CATALOG table will be badly affected; errors will occur when reading 
> it.
>  
> {code:java}
> 0: jdbc:phoenix:thin:url=http://localhost> drop view "test_2"; Error: Error 
> -1 (0) : Error while executing SQL "drop view "test_2"": Remote driver 
> error: RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: test_2: 4 at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114) at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2729)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17078)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8210) 
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2475)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2457)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:136) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) 
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 4 at 
> org.apache.hadoop.hbase.ArrayBackedTag.(ArrayBackedTag.java:97) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1107) at 
> org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1094) at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.isCellTTLExpired(ScanQueryMatcher.java:153)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:198)
>  at 
> org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher.match(NormalUserScanQueryMatcher.java:64)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:578)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-04-02 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5226:

Description: 
We use a cell tag to indicate that some properties of a view table should not be 
derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
bytes, but its format is incorrect; the following is a reference from the 
KeyValue interface:
{quote}KeyValue can optionally contain Tags. When it contains tags, it is added 
in the byte array after the value part. The format for this part is: 
<tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
contain one or more tags where as each tag is of the form 
<taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes 
length.{quote}
The CATALOG table will be badly affected; errors will occur when reading 
it.

 
{code:java}
0: jdbc:phoenix:thin:url=http://localhost> drop view "test_2"; Error: Error -1 
(0) : Error while executing SQL "drop view "test_2"": Remote driver error: 
RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: test_2: 4 at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:114) at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2729)
 at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17078)
 at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8210) 
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2475)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2457)
 at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:136) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) 
Caused by: java.lang.ArrayIndexOutOfBoundsException: 4 at 
org.apache.hadoop.hbase.ArrayBackedTag.(ArrayBackedTag.java:97) at 
org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1107) at 
org.apache.hadoop.hbase.CellUtil$5.next(CellUtil.java:1094) at 
org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.isCellTTLExpired(ScanQueryMatcher.java:153)
 at 
org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.preCheck(ScanQueryMatcher.java:198)
 at 
org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher.match(NormalUserScanQueryMatcher.java:64)
 at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:578)
{code}
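
For illustration, a minimal sketch (assuming the HBase ArrayBackedTag API; the 
tag type and payload below are made-up values, not Phoenix's actual constants) of 
building a tag whose serialized form follows <taglength><tagtype><tagbytes>, 
which is what the scanner in the stack trace expects to parse:
{code:java}
import org.apache.hadoop.hbase.ArrayBackedTag;
import org.apache.hadoop.hbase.Tag;
import org.apache.hadoop.hbase.util.Bytes;

public class TagFormatSketch {
    public static void main(String[] args) {
        byte viewModifiedTagType = (byte) 70;  // made-up tag type for illustration
        byte[] payload = Bytes.toBytes(true);  // made-up payload

        // ArrayBackedTag owns the <taglength><tagtype><tagbytes> layout, so the
        // tag iteration in ScanQueryMatcher/CellUtil can parse it. Writing a raw
        // payload array as if it were already-serialized tag bytes is what leads
        // to the ArrayIndexOutOfBoundsException shown above.
        Tag tag = new ArrayBackedTag(viewModifiedTagType, payload);
        System.out.println("type=" + tag.getType()
                + " valueLength=" + tag.getValueLength());
    }
}
{code}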
 

  was:
We use a cell tag to indicate that some properties of a view table should not be 
derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
bytes, but its format is incorrect; the following is a reference from the 
KeyValue interface:

{quote}
KeyValue can optionally contain Tags. When it contains tags, it is added in the 
byte array after the value part. The format for this part is: 
<tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
contain one or more tags where as each tag is of the form 
<taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes length.
{quote}

The CATALOG table will be badly affected; errors will occur when reading 
it.


>  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -
>
> Key: PHOENIX-5226
> URL: https://issues.apache.org/jira/browse/PHOENIX-5226
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Attachments: Screen Shot 2019-04-01 at 16.09.23.png, Screen Shot 
> 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties of a view table should not 
> be derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
> bytes, but its format is incorrect; the following is a reference from the 
> KeyValue interface:
> {quote}KeyValue can optionally contain Tags. When it contains tags, it is added 
> in the byte array after the value part. The format for this part is: 
> <tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
> contain one or more tags where as each tag is of the form 
> <taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
> Short.MAX_SIZE and it includes 1 byte type length and actual tag

[jira] [Created] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-04-02 Thread jaanai (JIRA)
jaanai created PHOENIX-5226:
---

 Summary:  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect 
as a tag of the cell
 Key: PHOENIX-5226
 URL: https://issues.apache.org/jira/browse/PHOENIX-5226
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
 Environment: 


{panel:title=My title}
Some text with a title
{panel}

Reporter: jaanai
Assignee: jaanai
 Attachments: Screen Shot 2019-04-01 at 16.09.23.png, Screen Shot 
2019-04-01 at 16.13.10.png

We use a cell tag to indicate that some properties of a view table should not be 
derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
bytes, but its format is incorrect; the following is a reference from the 
KeyValue interface:

{quote}
KeyValue can optionally contain Tags. When it contains tags, it is added in the 
byte array after the value part. The format for this part is: 
<tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
contain one or more tags where as each tag is of the form 
<taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes length.
{quote}

The CATALOG table will be badly affected; errors will occur when reading 
it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5226) The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell

2019-04-02 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5226:

Environment: (was: 


{panel:title=My title}
Some text with a title
{panel}
)

>  The format of VIEW_MODIFIED_PROPERTY_BYTES is incorrect as a tag of the cell
> -
>
> Key: PHOENIX-5226
> URL: https://issues.apache.org/jira/browse/PHOENIX-5226
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Attachments: Screen Shot 2019-04-01 at 16.09.23.png, Screen Shot 
> 2019-04-01 at 16.13.10.png
>
>
> We use a cell tag to indicate that some properties of a view table should not 
> be derived from the base table. VIEW_MODIFIED_PROPERTY_BYTES is used as the tag 
> bytes, but its format is incorrect; the following is a reference from the 
> KeyValue interface:
> {quote}
> KeyValue can optionally contain Tags. When it contains tags, it is added in 
> the byte array after the value part. The format for this part is: 
> <tagslength><tagsbytes>. tagslength maximum is Short.MAX_SIZE. The tagsbytes 
> contain one or more tags where as each tag is of the form 
> <taglength><tagtype><tagbytes>. tagtype is one byte and taglength maximum is 
> Short.MAX_SIZE and it includes 1 byte type length and actual tag bytes length.
> {quote}
> The CATALOG table will be badly affected; errors will occur when reading 
> it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-03-26 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: PHOENIX-5169-master-v4.patch

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1
>
> Attachments: PHOENIX-5169-master-v2.patch, 
> PHOENIX-5169-master-v3.patch, PHOENIX-5169-master-v4.patch, 
> PHOENIX-5169-master.patch, image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when 
> the query logger level is OFF, which significantly impacts throughput under 
> multiple threads.
> Below is the jstack output from a concurrent query:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  
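> For illustration, a minimal, self-contained sketch (illustrative names only, 
> not the actual Phoenix classes) of the pattern that avoids this cost: when the 
> connection's level is OFF, hand back a shared no-op logger instead of building 
> a new logger for every statement.
> {code:java}
> // Illustrative sketch; LogLevel and QueryLoggerSketch are made-up names.
> enum LogLevel { OFF, INFO, DEBUG, TRACE }
> 
> final class QueryLoggerSketch {
>     static final QueryLoggerSketch NO_OP = new QueryLoggerSketch();
> 
>     static QueryLoggerSketch create(LogLevel connectionLevel) {
>         if (connectionLevel == LogLevel.OFF) {
>             return NO_OP;               // no per-query allocation or setup
>         }
>         return new QueryLoggerSketch(); // real logger built only when needed
>     }
> }
> {code}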



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-03-18 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: PHOENIX-5169-master-v3.patch

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1
>
> Attachments: PHOENIX-5169-master-v2.patch, 
> PHOENIX-5169-master-v3.patch, PHOENIX-5169-master.patch, 
> image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when 
> the query logger level is OFF, which significantly impacts throughput under 
> multiple threads.
> Below is the jstack output from a concurrent query:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: (was: PHOENIX-5171-master-v2.patch)

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by an incorrect next cell hint: in the ScanUtil.setKey method we 
> skip the remaining slots whose key ranges contain all values (EVERYTHING_RANGE). 
> The next cell hint in this case is 
> _kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be 
> _kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: PHOENIX-5171-master-v2.patch

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by an incorrect next cell hint: in the ScanUtil.setKey method we 
> skip the remaining slots whose key ranges contain all values (EVERYTHING_RANGE). 
> The next cell hint in this case is 
> _kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be 
> _kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: PHOENIX-5169-master-v2.patch

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1
>
> Attachments: PHOENIX-5169-master-v2.patch, PHOENIX-5169-master.patch, 
> image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when 
> the query logger level is OFF, which significantly impacts throughput under 
> multiple threads.
> Below is the jstack output from a concurrent query:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-03-11 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: (was: PHOENIX-5169-master-2.patch)

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1
>
> Attachments: PHOENIX-5169-master.patch, 
> image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when 
> the query logger level is OFF, which significantly impacts throughput under 
> multiple threads.
> Below is the jstack output from a concurrent query:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-10 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: PHOENIX-5171-master-v2.patch

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by an incorrect next cell hint: in the ScanUtil.setKey method we 
> skip the remaining slots whose key ranges contain all values (EVERYTHING_RANGE). 
> The next cell hint in this case is 
> _kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be 
> _kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-10 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: (was: PHOENIX-5171-master-v2.patch)

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by an incorrect next cell hint: in the ScanUtil.setKey method we 
> skip the remaining slots whose key ranges contain all values (EVERYTHING_RANGE). 
> The next cell hint in this case is 
> _kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be 
> _kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-10 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Description: 
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by an incorrect next cell hint: in the ScanUtil.setKey method we 
skip the remaining slots whose key ranges contain all values (EVERYTHING_RANGE). 
The next cell hint in this case is 
_kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be 
_kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.
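
As an illustration of the two hints above, a minimal sketch (assuming only 
HBase's Bytes utility; the byte strings are copied from the hints above, and the 
class is illustrative rather than part of the fix) showing that the correct hint 
sorts strictly after the incorrect one:
{code:java}
import org.apache.hadoop.hbase.util.Bytes;

public class NextHintSketch {
    public static void main(String[] args) {
        // Hint the filter currently produces: the row key stops after the
        // fixed-width DEV slot (\x82), so it never advances past the current row.
        byte[] wrongHint = Bytes.toBytesBinary("2018-02-14\\x00channel_agg\\x00\\x82");
        // Hint it should produce: the skipped nullable slots (APP, TARGET) are
        // encoded as empty values with separators, followed by CHANNEL = 'A004'.
        byte[] rightHint = Bytes.toBytesBinary(
                "2018-02-14\\x00channel_agg\\x00\\x82\\x00\\x00A004");
        // The correct hint sorts strictly after the wrong one, satisfying the
        // "next hint must come after previous hint" check in SkipScanFilter.
        System.out.println(Bytes.compareTo(wrongHint, rightHint) < 0); // true
    }
}
{code}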



  was:
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by an incorrect next cell hint: in the ScanUtil.setKey method we 
skip the remaining slots whose key ranges contain all values (EVERYTHING_RANGE). 
The next cell hint in this case is 
`kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x0

[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-10 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Description: 
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by an incorrect next cell hint: in the ScanUtil.setKey method we 
skip the remaining slots whose key ranges contain all values (EVERYTHING_RANGE). 
The next cell hint in this case is 
`kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463`
 

This happens because the skipped row is added into nextCellHintMap. Actually, 
because we don't store NULL at the end of the key for variable-length data types, 
these keys should be skipped when filterKeyValue is invoked, because they are 
smaller than the remaining slot positions.

  was:
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by an incorrect next ce

[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-10 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Description: 
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by an incorrect next cell hint. 

The skipped row is added into nextCellHintMap. Actually, because we don't store 
NULL at the end of the key for variable-length data types, these keys should be 
skipped when filterKeyValue is invoked, because they are smaller than the 
remaining slot positions.

  was:
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by adding the skipped row into nextCellHintMap. Actually, because 
we don't store NULL at the end of the key for variable-length data types, these 
keys should be skipped when filterKeyValue is invoked, because they are smaller 
than the remaining slot positions.


> SkipScan incor

[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-10 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: PHOENIX-5171-master-v2.patch

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by adding a skipped row into nextCellHintMap. Since we don't 
> store NULL at the end of the key for variable-length data types, these keys 
> should be skipped when filterKeyValue is invoked, because they are smaller than 
> the values expected at the remaining slot positions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2019-03-10 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Summary: SkipScan incorrectly filters composite primary key which the key 
range contains all values  (was: SkipScan incorrectly filters composite primary 
key which the trailing is NULL )

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by adding a skipped row into nextCellHintMap. Since we don't 
> store NULL at the end of the key for variable-length data types, these keys 
> should be skipped when filterKeyValue is invoked, because they are smaller than 
> the values expected at the remaining slot positions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the trailing is NULL

2019-03-04 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Description: 
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by adding a skipped row into nextCellHintMap. Since we don't store 
NULL at the end of the key for variable-length data types, these keys should be 
skipped when filterKeyValue is invoked, because they are smaller than the values 
expected at the remaining slot positions.

  was:
Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT vdate FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by adding a skipped row into nextCellHintMap. Since we don't store 
NULL at the end of the key for variable-length data types, these keys should be 
skipped when filterKeyValue is invoked, because they are smaller than the values 
expected at the remaining slot positions.


> SkipScan incorrectly filters composite

[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-02-28 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: PHOENIX-5169-master-2.patch

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1
>
> Attachments: PHOENIX-5169-master-2.patch, PHOENIX-5169-master.patch, 
> image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when 
> the query logger level is OFF, which significantly hurts throughput under 
> multiple threads.
> Below is a jstack dump taken while running concurrent queries:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the trailing is NULL

2019-02-28 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Fix Version/s: (was: 5.0.0)
   5.1.0

> SkipScan incorrectly filters composite primary key which the trailing is NULL 
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT vdate FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by adding a skipped row into nextCellHintMap. Since we don't 
> store NULL at the end of the key for variable-length data types, these keys 
> should be skipped when filterKeyValue is invoked, because they are smaller than 
> the values expected at the remaining slot positions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the trailing is NULL

2019-02-28 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5171:

Attachment: PHOENIX-5171-master.patch

> SkipScan incorrectly filters composite primary key which the trailing is NULL 
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Attachments: PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT vdate FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by adding a skipped row into nextCellHintMap. Since we don't 
> store NULL at the end of the key for variable-length data types, these keys 
> should be skipped when filterKeyValue is invoked, because they are smaller than 
> the values expected at the remaining slot positions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-02-28 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Attachment: PHOENIX-5169-master.patch

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Attachments: PHOENIX-5169-master.patch, 
> image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when 
> the query logger level is OFF, which significantly hurts throughput under 
> multiple threads.
> Below is a jstack dump taken while running concurrent queries:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the trailing is NULL

2019-02-27 Thread jaanai (JIRA)
jaanai created PHOENIX-5171:
---

 Summary: SkipScan incorrectly filters composite primary key which 
the trailing is NULL 
 Key: PHOENIX-5171
 URL: https://issues.apache.org/jira/browse/PHOENIX-5171
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
Reporter: jaanai
Assignee: jaanai


Running the below SQL:
{code:sql}
create table if not exists aiolos(
vdate varchar,
tab varchar,
dev tinyint not null,
app varchar,
target varchar,
channel varchar,
one varchar,
two varchar,
count1 integer,
count2 integer,
CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));

upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
upsert into aiolos 
values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);

SELECT vdate FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
'2019-02-19' AND tab = 'channel_agg' and channel='A004';
{code}
Throws exception:
{code:java}
Caused by: java.lang.IllegalStateException: The next hint must come after 
previous hint 
(prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
 
kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
at 
org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
at 
org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
at 
org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 8 more
{code}
This is caused by adding a skipped row into nextCellHintMap. Since we don't store 
NULL at the end of the key for variable-length data types, these keys should be 
skipped when filterKeyValue is invoked, because they are smaller than the values 
expected at the remaining slot positions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-02-27 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-5169:

Description: 
We still invoke createQueryLogger in PhoenixStatement for each query even when the 
query logger level is OFF, which significantly hurts throughput under multiple 
threads.

Below is a jstack dump taken while running concurrent queries:

!https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!

 

  was:
We still invoke createQueryLogger in PhoenixStatement for each query even when the 
query logger level is OFF, which significantly hurts throughput under multiple 
threads.

Below is a jstack dump taken while running concurrent queries:

!https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!

 


> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Attachments: image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even when 
> the query logger level is OFF, which significantly hurts throughput under 
> multiple threads.
> Below is a jstack dump taken while running concurrent queries:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-02-27 Thread jaanai (JIRA)
jaanai created PHOENIX-5169:
---

 Summary: Query logger is still initialized for each query when the 
log level is off
 Key: PHOENIX-5169
 URL: https://issues.apache.org/jira/browse/PHOENIX-5169
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: jaanai
Assignee: jaanai
 Attachments: image-2019-02-28-10-05-00-518.png

We still invoke createQueryLogger in PhoenixStatement for each query even when the 
query logger level is OFF, which significantly hurts throughput under multiple 
threads.

Below is a jstack dump taken while running concurrent queries:

!https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
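
As an illustration of the kind of short-circuit this implies (a hedged sketch with 
made-up names, not the actual Phoenix patch), the factory can hand back a shared 
no-op instance when the configured level is OFF instead of allocating a fresh 
logger object for every statement:

{code:java}
public final class QueryLoggerSketch {
    enum LogLevel { OFF, INFO, DEBUG, TRACE }

    // Shared instance reused for every statement while logging is disabled.
    private static final QueryLoggerSketch NO_OP = new QueryLoggerSketch(LogLevel.OFF);

    private final LogLevel level;

    private QueryLoggerSketch(LogLevel level) { this.level = level; }

    // Avoids per-query allocation (and any query-id generation) when the level is OFF.
    static QueryLoggerSketch create(LogLevel configuredLevel) {
        return configuredLevel == LogLevel.OFF ? NO_OP : new QueryLoggerSketch(configuredLevel);
    }

    boolean isEnabled() { return level != LogLevel.OFF; }

    public static void main(String[] args) {
        System.out.println(create(LogLevel.OFF) == NO_OP);  // prints true: no new object
    }
}
{code}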

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4915) The client gets stuck when using same rows concurrently writing data table

2019-01-02 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai resolved PHOENIX-4915.
-
Resolution: Not A Problem

> The client gets stuck when using same rows concurrently writing data table
> --
>
> Key: PHOENIX-4915
> URL: https://issues.apache.org/jira/browse/PHOENIX-4915
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0, 4.14.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Attachments: image-2018-09-21-19-30-12-989.png, test.java, test.sql
>
>
> The client gets stuck when multiple threads write the same rows into a data 
> table that has a global index.
> I find that the row locks of the data table are not released under heavy write 
> load and the server throws "ERROR 2008 (INT10): Unable to find cached index 
> metadata." Most of the threads are waiting to acquire the row lock according to 
> the jstack output.
> The following exceptions appear on the server side:
> {code:java}
> [B.defaultRpcServer.handler=37,queue=1,port=16020] 
> regionserver.RSRpcServices(103): Failed doing multi operation, current call 
> is : callId: 3455 service: ClientService meth
> odName: Multi size: 23.1 K connection: 192.168.199.7:52050 param: 
> actionCount=44#regionCount=8#LOCK,\x02,1537434393195.ee6d441a04ee6a59b24262f22f618d88.#
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-727998515684050837 
> region=LOCK,\x0E,1537434393195.f4de29d4b36775589a49f
> 1c7a20c73a2.host=hb-bp1v2q830426r6763-004.hbase.rds.aliyuncs.com,16020,1537434304031
>  Index update failed
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:88)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:87)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.(PhoenixIndexMetaData.java:103)
> at 
> org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:95)
> at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:80)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:528)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:374)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1032)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1714)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1746)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1028)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.asyncBatchMutate(HRegion.java:3236)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doAsyncBatchOp(RSRpcServices.java:2147)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchMutationCrossRegions(RSRpcServices.java:2308)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2578)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2394)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:174)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$CallHandler.run(RpcExecutor.java:178)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached 
> index metadata.  key=-727998515684050837 
> region=LOCK,\x0E,1537434393195.f4de29d4b36775589a49f1c7a20c73a2.host=hb-bp1v2q830426r
> 6763-004.hbase.rds.aliyuncs.com,16020,1537434304031
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:493)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:85)
> 2018-09-20 17:35:39,254 INFO  
> [B.defaultRpcServer.handler=13,queue=1,port=16020] 
> regionserver.RSRpcServices(103): Failed doing multi operation, current call 
> is : callId: 3848 service: ClientService meth
> odName: Multi size: 27.2 K connection: 192.168.199.7:52042 param: 

[jira] [Updated] (PHOENIX-4915) The client gets stuck when using same rows concurrently writing data table

2019-01-02 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-4915:

Priority: Major  (was: Trivial)

> The client gets stuck when using same rows concurrently writing data table
> --
>
> Key: PHOENIX-4915
> URL: https://issues.apache.org/jira/browse/PHOENIX-4915
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0, 4.14.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Attachments: image-2018-09-21-19-30-12-989.png, test.java, test.sql
>
>
> The client gets stuck when multiple threads write the same rows into a data 
> table that has a global index.
> I find that the row locks of the data table are not released under heavy write 
> load and the server throws "ERROR 2008 (INT10): Unable to find cached index 
> metadata." Most of the threads are waiting to acquire the row lock according to 
> the jstack output.
> The following exceptions appear on the server side:
> {code:java}
> [B.defaultRpcServer.handler=37,queue=1,port=16020] 
> regionserver.RSRpcServices(103): Failed doing multi operation, current call 
> is : callId: 3455 service: ClientService meth
> odName: Multi size: 23.1 K connection: 192.168.199.7:52050 param: 
> actionCount=44#regionCount=8#LOCK,\x02,1537434393195.ee6d441a04ee6a59b24262f22f618d88.#
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-727998515684050837 
> region=LOCK,\x0E,1537434393195.f4de29d4b36775589a49f
> 1c7a20c73a2.host=hb-bp1v2q830426r6763-004.hbase.rds.aliyuncs.com,16020,1537434304031
>  Index update failed
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:88)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:87)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.(PhoenixIndexMetaData.java:103)
> at 
> org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:95)
> at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:80)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:528)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:374)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1032)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1714)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1746)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1028)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.asyncBatchMutate(HRegion.java:3236)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doAsyncBatchOp(RSRpcServices.java:2147)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchMutationCrossRegions(RSRpcServices.java:2308)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2578)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2394)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:174)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$CallHandler.run(RpcExecutor.java:178)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached 
> index metadata.  key=-727998515684050837 
> region=LOCK,\x0E,1537434393195.f4de29d4b36775589a49f1c7a20c73a2.host=hb-bp1v2q830426r
> 6763-004.hbase.rds.aliyuncs.com,16020,1537434304031
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:493)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:85)
> 2018-09-20 17:35:39,254 INFO  
> [B.defaultRpcServer.handler=13,queue=1,port=16020] 
> regionserver.RSRpcServices(103): Failed doing multi operation, current call 
> is : callId: 3848 service: ClientService meth
> odName: Multi size: 27.2 K connection: 192.168.199.7:52042 par

[jira] [Updated] (PHOENIX-4915) The client gets stuck when using same rows concurrently writing data table

2019-01-02 Thread jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaanai updated PHOENIX-4915:

Priority: Trivial  (was: Blocker)

> The client gets stuck when using same rows concurrently writing data table
> --
>
> Key: PHOENIX-4915
> URL: https://issues.apache.org/jira/browse/PHOENIX-4915
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0, 4.14.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Trivial
> Attachments: image-2018-09-21-19-30-12-989.png, test.java, test.sql
>
>
> The client gets stuck when multiple threads write the same rows into a data 
> table that has a global index.
> I find that the row locks of the data table are not released under heavy write 
> load and the server throws "ERROR 2008 (INT10): Unable to find cached index 
> metadata." Most of the threads are waiting to acquire the row lock according to 
> the jstack output.
> The following exceptions appear on the server side:
> {code:java}
> [B.defaultRpcServer.handler=37,queue=1,port=16020] 
> regionserver.RSRpcServices(103): Failed doing multi operation, current call 
> is : callId: 3455 service: ClientService meth
> odName: Multi size: 23.1 K connection: 192.168.199.7:52050 param: 
> actionCount=44#regionCount=8#LOCK,\x02,1537434393195.ee6d441a04ee6a59b24262f22f618d88.#
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-727998515684050837 
> region=LOCK,\x0E,1537434393195.f4de29d4b36775589a49f
> 1c7a20c73a2.host=hb-bp1v2q830426r6763-004.hbase.rds.aliyuncs.com,16020,1537434304031
>  Index update failed
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:88)
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:87)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.(PhoenixIndexMetaData.java:103)
> at 
> org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:95)
> at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:80)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:528)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:374)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1032)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1714)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1746)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1028)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.asyncBatchMutate(HRegion.java:3236)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doAsyncBatchOp(RSRpcServices.java:2147)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchMutationCrossRegions(RSRpcServices.java:2308)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2578)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2394)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:174)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$CallHandler.run(RpcExecutor.java:178)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached 
> index metadata.  key=-727998515684050837 
> region=LOCK,\x0E,1537434393195.f4de29d4b36775589a49f1c7a20c73a2.host=hb-bp1v2q830426r
> 6763-004.hbase.rds.aliyuncs.com,16020,1537434304031
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:493)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:85)
> 2018-09-20 17:35:39,254 INFO  
> [B.defaultRpcServer.handler=13,queue=1,port=16020] 
> regionserver.RSRpcServices(103): Failed doing multi operation, current call 
> is : callId: 3848 service: ClientService meth
> odName: Multi size: 27.2 K connection: 192.168.199.7:52042

[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Attachment: PHOENIX-5055-4.x-HBase-1.4-v4.patch

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: ConcurrentTest.java, 
> PHOENIX-5055-4.x-HBase-1.4-v2.patch, PHOENIX-5055-4.x-HBase-1.4-v3.patch, 
> PHOENIX-5055-4.x-HBase-1.4-v4.patch, PHOENIX-5055-v4.x-HBase-1.4.patch
>
>
> To improve performance, we split the list of mutations into multiple batches in 
> MutationState. A single upsert SQL with some null values produces two types of 
> KeyValues (Put and DeleteColumn); these KeyValues should have the same timestamp 
> so that the operation stays atomic for the corresponding row key.
> [^ConcurrentTest.java] produces some random upsert/delete SQL statements and 
> executes them concurrently; some SQL snippets are as follows:
> {code:java}
> 1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
> ('3826','2563','3052','3170','3767');
> 1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('2563','4926','3526','678',null,null,'1617');
> 2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('1052','2563','1120','2314','1456',null,null);
> 2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
> ('1922','146',null,'469','2563');
> 2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563’;
> {code}
> Found incorrect index data in the index tables via sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
> Debugging the batched mutations on the server side showed that the DeleteColumns 
> and Puts of a single upsert were split into different batches, and the 
> DeleteFamily was executed by another thread, so the DeleteColumn's timestamp ends 
> up larger than the DeleteFamily's under multiple threads.
> !https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> Dump of the IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  
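
A sketch of the constraint the fix needs to honour (illustrative code, not the 
actual MutationState change, and assuming mutations for the same row are adjacent 
in the input list): when a mutation list is split into size-limited batches, the 
split points should only fall on row-key boundaries, so the Put and DeleteColumn 
produced by one upsert always land in the same batch.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.client.Mutation;

final class RowAtomicBatcher {
    // Splits mutations into batches of roughly maxBatchSize, but never separates
    // consecutive mutations that target the same row key.
    static List<List<Mutation>> split(List<Mutation> mutations, int maxBatchSize) {
        List<List<Mutation>> batches = new ArrayList<>();
        List<Mutation> current = new ArrayList<>();
        byte[] previousRow = null;
        for (Mutation m : mutations) {
            boolean sameRowAsPrevious =
                    previousRow != null && Arrays.equals(previousRow, m.getRow());
            if (current.size() >= maxBatchSize && !sameRowAsPrevious) {
                batches.add(current);          // cut only between rows, never inside one
                current = new ArrayList<>();
            }
            current.add(m);
            previousRow = m.getRow();
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
{code}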



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Description: 
We have two ways to write data when using the JDBC API.
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to a new object based on the table's schema 
information. We use date formatters to convert string data to objects for the 
Date/Time/Timestamp types when writing data, and the same formatters are used when 
reading data.

 

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
Reading the table by the getString methods 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 *Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}
 

We have a historical problem: in #1 we parse the string into Date/Time/Timestamp 
objects with the timezone applied, which means the actual data is changed when it 
is stored in the HBase table.

  was:
We have two ways to write data when using the JDBC API.
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to a new object based on the table's schema 
information. We use date formatters to convert string data to objects for the 
Date/Time/Timestamp types when writing data, and the same formatters are used when 
reading data.

  

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
Reading the table by the getString methods 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 *Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 

[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Description: 
We have two ways to write data when using the JDBC API.
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to a new object based on the table's schema 
information. We use date formatters to convert string data to objects for the 
Date/Time/Timestamp types when writing data, and the same formatters are used when 
reading data.

  

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
Reading the table by the getString methods 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 *Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}
 

We have a historical problem: in #1 we parse the string into Date/Time/Timestamp 
objects with the timezone applied, which means the actual data is changed when it 
is stored in the HBase table.

  was:
We have two ways to write data when using the JDBC API.
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to a new object based on the table's schema 
information. We use date formatters to convert string data to objects for the 
Date/Time/Timestamp types when writing data, and the same formatters are used when 
reading data.

  

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
Reading the table by the getString methods 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 *Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-

[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Attachment: DateTest.java

> The TimeZone is incorrectly used during writing or reading data
> ---
>
> Key: PHOENIX-5066
> URL: https://issues.apache.org/jira/browse/PHOENIX-5066
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 4.15.0, 5.1
>
> Attachments: DateTest.java
>
>
> We have two ways to write data when using the JDBC API.
> #1. Use the _executeUpdate_ method to execute an upsert SQL string.
> #2. Use the _prepareStatement_ method to set objects and execute.
> The _string_ data needs to be converted to a new object based on the table's 
> schema information. We use date formatters to convert string data to objects 
> for the Date/Time/Timestamp types when writing data, and the same formatters 
> are used when reading data.
>   
> *Uses default timezone test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47') 
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
> {code}
> Reading the table by the getString methods 
> {code:java}
> 1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 
> 07:45:07.660
> {code}
>  
>  *Uses GMT+8 test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
> 2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
> 3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
> Reading the table by the getString methods
> {code:java}
>  1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 
> 23:40:47.000
> 2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 
> 15:40:47.000
> 3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 
> 15:40:47.106
> {code}
>  
> We have a historical problem: in #1 we parse the string into Date/Time/Timestamp 
> objects with the timezone applied, which means the actual data is changed when 
> it is stored in the HBase table.
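
The shift described above can be reproduced with plain JDK classes. Below is a 
hedged, self-contained sketch (GMT+8 is only an example offset) contrasting the two 
paths the description compares: parsing the literal with a timezone-aware 
formatter versus building the java.sql.Timestamp directly.

{code:java}
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class TimezoneShiftSketch {
    public static void main(String[] args) throws Exception {
        String literal = "2018-12-10 15:40:47";

        // Path #1: the literal is parsed by a formatter bound to a time zone (GMT+8 here),
        // so the stored epoch instant is shifted relative to the client's wall clock.
        SimpleDateFormat withZone = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        withZone.setTimeZone(TimeZone.getTimeZone("GMT+8"));
        long parsedWithZone = withZone.parse(literal).getTime();

        // Path #2: the same wall-clock value built directly, interpreted in the JVM's
        // default time zone, as a prepared-statement setTimestamp(...) parameter would be.
        long parsedDefault = java.sql.Timestamp.valueOf(literal + ".0").getTime();

        // Unless the JVM default zone happens to be GMT+8, the two instants differ.
        System.out.println("difference in hours: "
                + (parsedDefault - parsedWithZone) / 3_600_000.0);
    }
}
{code}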



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5066:

Description: 
We have two ways to write data when using the JDBC API.
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted to a new object based on the table's schema 
information. We use date formatters to convert string data to objects for the 
Date/Time/Timestamp types when writing data, and the same formatters are used when 
reading data.

  

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
Reading the table by the getString methods 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 *Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}
 

_We_ have a historical problem: in #1 we parse the string into Date/Time/Timestamp 
objects with a timezone, which means the actual data is changed when it is stored 
in the HBase table.
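A standalone sketch (not Phoenix internals) of why parsing the literal with a timezone-aware formatter changes the stored value; GMT+8 is assumed as the parsing timezone:
{code:java}
java.text.SimpleDateFormat fmt = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
fmt.setTimeZone(java.util.TimeZone.getTimeZone("GMT+8"));
java.util.Date parsed = fmt.parse("2018-12-10 15:40:47"); // throws ParseException
// The stored epoch now corresponds to 2018-12-10 07:40:47 GMT, so a client that
// formats the cell in GMT prints a value 8 hours earlier than the written literal.
System.out.println(parsed.getTime());
{code}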

  was:
We have two methods to write data when uses JDBC API.
#1. Uses _the exceuteUpdate_ method to execute a string that is an upsert SQL.
#2. Uses the _prepareStatement_ method to set some object and execute.

The _string_ data needs to convert to a new object by the schema information of 
tables. we'll use some date formatters to convert string data to object for 
Date/Time/Timestamp types when writes data and the formatters are used when 
reads data as well.

 

 

## Uses default timezone test

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.

 
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
 

Reading the table by the getString methods

 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 

## Uses GMT+8 test

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000

[jira] [Created] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2018-12-10 Thread Jaanai (JIRA)
Jaanai created PHOENIX-5066:
---

 Summary: The TimeZone is incorrectly used during writing or 
reading data
 Key: PHOENIX-5066
 URL: https://issues.apache.org/jira/browse/PHOENIX-5066
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
Reporter: Jaanai
Assignee: Jaanai
 Fix For: 4.15.0, 5.1


We have two methods to write data when using the JDBC API.
#1. Use the _executeUpdate_ method to execute an upsert SQL string.
#2. Use the _prepareStatement_ method to set objects and execute.

The _string_ data needs to be converted into new objects according to the schema 
information of the tables. We use some date formatters to convert string data into 
objects for the Date/Time/Timestamp types when writing data, and the same 
formatters are used when reading data as well.

 

 

*Uses default timezone test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47') 
UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.

 
{code:java}
1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
{code}
 

Reading the table by the getString methods

 
{code:java}
1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 
3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660
{code}
 

 

*Uses GMT+8 test*

 Writing 3 records by the different ways.
{code:java}
UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
15:40:47','2018-12-10 15:40:47')

UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
time);stmt.setTimestamp(4, ts);
{code}
Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
{code:java}
1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
Reading the table by the getString methods
{code:java}
 1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000
2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000
3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106
{code}
 

_We_ have a historical problem: in #1 we parse the string into Date/Time/Timestamp 
objects with a timezone, which means the actual data is changed when it is stored 
in the HBase table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-07 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Attachment: PHOENIX-5055-4.x-HBase-1.4-v3.patch

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: ConcurrentTest.java, 
> PHOENIX-5055-4.x-HBase-1.4-v2.patch, PHOENIX-5055-4.x-HBase-1.4-v3.patch, 
> PHOENIX-5055-v4.x-HBase-1.4.patch
>
>
> In order to get more performance, we split the list of mutations into 
> multiple batches in MutationState. One upsert SQL with some null values will 
> produce two types of KeyValues (Put and DeleteColumn), and these KeyValues 
> must have the same timestamp so that the write stays an atomic operation for 
> the corresponding row key.
> [^ConcurrentTest.java] produced some random upsert/delete SQL statements and 
> executed them concurrently; some SQL snippets are as follows:
> {code:java}
> 1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
> ('3826','2563','3052','3170','3767');
> 1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('2563','4926','3526','678',null,null,'1617');
> 2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('1052','2563','1120','2314','1456',null,null);
> 2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
> ('1922','146',null,'469','2563');
> 2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563’;
> {code}
> Found incorrect indexed data for the index tables by sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
> Debugging the batches of mutations on the server side showed that the 
> DeleteColumns and Puts of a single upsert were split into different batches, 
> and the DeleteFamily was executed by another thread, so the DeleteColumns' 
> timestamp became larger than the DeleteFamily's under multiple threads.
> !https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> dump IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-06 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Attachment: PHOENIX-5055-4.x-HBase-1.4-v2.patch

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: ConcurrentTest.java, 
> PHOENIX-5055-4.x-HBase-1.4-v2.patch, PHOENIX-5055-v4.x-HBase-1.4.patch
>
>
> In order to get more performance, we split the list of mutations into 
> multiple batches in MutationState. One upsert SQL with some null values will 
> produce two types of KeyValues (Put and DeleteColumn), and these KeyValues 
> must have the same timestamp so that the write stays an atomic operation for 
> the corresponding row key.
> [^ConcurrentTest.java] produced some random upsert/delete SQL statements and 
> executed them concurrently; some SQL snippets are as follows:
> {code:java}
> 1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
> ('3826','2563','3052','3170','3767');
> 1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('2563','4926','3526','678',null,null,'1617');
> 2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('1052','2563','1120','2314','1456',null,null);
> 2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
> ('1922','146',null,'469','2563');
> 2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563’;
> {code}
> Found incorrect indexed data for the index tables by sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
> Debugging the batches of mutations on the server side showed that the 
> DeleteColumns and Puts of a single upsert were split into different batches, 
> and the DeleteFamily was executed by another thread, so the DeleteColumns' 
> timestamp became larger than the DeleteFamily's under multiple threads.
> !https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> dump IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-04 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Description: 
In order to get more performance, we split the list of mutations into multiple 
batches in MutationState. One upsert SQL with some null values will produce two 
types of KeyValues (Put and DeleteColumn), and these KeyValues must have the same 
timestamp so that the write stays an atomic operation for the corresponding row key.
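A minimal sketch of the invariant just described (illustrative only, not the actual MutationState code; it assumes the HBase client Mutation and Bytes classes, java.util collections, and that mutations for the same row are adjacent in the input list): a batch boundary must only be placed at a row boundary, so one upsert's Put and DeleteColumn always land in the same batch and can share one timestamp.
{code:java}
// Hypothetical helper: split mutations into batches without separating
// the cells that belong to the same row key.
List<List<Mutation>> splitIntoBatches(List<Mutation> mutations, int batchSize) {
    List<List<Mutation>> batches = new ArrayList<>();
    List<Mutation> current = new ArrayList<>();
    for (Mutation m : mutations) {
        boolean sameRowAsPrevious = !current.isEmpty()
            && Bytes.equals(current.get(current.size() - 1).getRow(), m.getRow());
        // Only cut the batch between different rows, never inside one row's mutations.
        if (current.size() >= batchSize && !sameRowAsPrevious) {
            batches.add(current);
            current = new ArrayList<>();
        }
        current.add(m);
    }
    if (!current.isEmpty()) {
        batches.add(current);
    }
    return batches;
}
{code}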

[^ConcurrentTest.java] produced some random upsert/delete SQL statements and 
executed them concurrently; some SQL snippets are as follows:
{code:java}
1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
('3826','2563','3052','3170','3767');

1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
('2563','4926','3526','678',null,null,'1617');

2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
('1052','2563','1120','2314','1456',null,null);

2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
('1922','146',null,'469','2563');

2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563’;

{code}
Found incorrect indexed data for the index tables by sqlline.

!https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!

Debugging the batches of mutations on the server side showed that the DeleteColumns 
and Puts of a single upsert were split into different batches, and the DeleteFamily 
was executed by another thread, so the DeleteColumns' timestamp became larger than 
the DeleteFamily's under multiple threads.

!https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!

 

Running the following:
{code:java}
conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
COLUMN_ENCODED_BYTES = 0"); 

conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
tableName + " (C) INCLUDE(D)"); 

conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A2','B2','C2','D2')"); 
conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A3','B3', 'C3', null)");
{code}
dump IndexMemStore:
{code:java}
hbase.index.covered.data.IndexMemStore(117): 
Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
Dump ==
{code}
 

The DeleteColumn's timestamp is larger than that of the other mutations.

 

  was:
In order to get more performance, we split the list of mutations into multiple 
batches in MutationSate.  For one upsert SQL with some null values that will 
produce two type KeyValues(Put and DeleteColumn),  These KeyValues should have 
the same timestamp so that keep on an atomic operation for corresponding the 
row key.

[^ConcurrentTest.java] produced some random upsert/delete SQL and concurrently 
executed, some SQL snippets as follows:

 
{code:java}
1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
('3826','2563','3052','3170','3767');

1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
('2563','4926','3526','678',null,null,'1617');

2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
('1052','2563','1120','2314','1456',null,null);

2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
('1922','146',null,'469','2563');

2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563’;

{code}
 

Found incorrect indexed data for the index tables by sqlline.

!https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!

Debugged the mutations of batches on the server side. the DeleteColumns and 
Puts were splitted into the different batches for the once upsert,  the 
DeleteFaimly also was executed by another thread.  due to DeleteColumns's 
timestamp is larger than DeleteFaimly under multiple threads.

!https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!

 

Running the following:
{code:java}
conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
COLUMN_ENCODED_BYTES = 0"); 

conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
tableName + " (C) INCLUDE(D)"); 

conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A2','B2','C2','D2')"); 

[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-04 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Description: 
In order to get more performance, we split the list of mutations into multiple 
batches in MutationState. One upsert SQL with some null values will produce two 
types of KeyValues (Put and DeleteColumn), and these KeyValues must have the same 
timestamp so that the write stays an atomic operation for the corresponding row key.

[^ConcurrentTest.java] produced some random upsert/delete SQL statements and 
executed them concurrently; some SQL snippets are as follows:

 
{code:java}
1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
('3826','2563','3052','3170','3767');

1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
('2563','4926','3526','678',null,null,'1617');

2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
('1052','2563','1120','2314','1456',null,null);

2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
('1922','146',null,'469','2563');

2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563’;

{code}
 

Found incorrect indexed data for the index tables by sqlline.

!https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!

Debugging the batches of mutations on the server side showed that the DeleteColumns 
and Puts of a single upsert were split into different batches, and the DeleteFamily 
was executed by another thread, so the DeleteColumns' timestamp became larger than 
the DeleteFamily's under multiple threads.

!https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!

 

Running the following:
{code:java}
conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
COLUMN_ENCODED_BYTES = 0"); 

conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
tableName + " (C) INCLUDE(D)"); 

conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A2','B2','C2','D2')"); 
conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A3','B3', 'C3', null)");
{code}
dump IndexMemStore:
{code:java}
hbase.index.covered.data.IndexMemStore(117): 
Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
Dump ==
{code}
 

The DeleteColumn's timestamp is larger than that of the other mutations.

 

  was:
In order to get more performance, we split the list of mutations into multiple 
batches in MutationSate.  For one upsert SQL with some null values that will 
produce two type KeyValues(Put and DeleteColumn),  These KeyValues should have 
the same timestamp so that keep on an atomic operation for corresponding the 
row key.

 Found incorrect indexed data for the index tables by sqlline.

!https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!

 

Running the following:
{code:java}
conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
COLUMN_ENCODED_BYTES = 0"); 

conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
tableName + " (C) INCLUDE(D)"); 

conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A2','B2','C2','D2')"); 
conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A3','B3', 'C3', null)");
{code}
dump IndexMemStore:
{code:java}
hbase.index.covered.data.IndexMemStore(117): 
Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
Dump ==
{code}
 

The DeleteColumn's timestamp larger than other mutations.

 


> Split mutatio

[jira] [Assigned] (PHOENIX-300) Support TRUNCATE TABLE

2018-12-03 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai reassigned PHOENIX-300:
--

Assignee: Jaanai

> Support TRUNCATE TABLE
> --
>
> Key: PHOENIX-300
> URL: https://issues.apache.org/jira/browse/PHOENIX-300
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Jaanai
>
> Though for HBase it might just be a disable, drop, then recreate approach, it 
> will be convenient for users.
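A rough sketch of that approach with the HBase Admin API (illustrative only; a real Phoenix TRUNCATE TABLE would also have to handle Phoenix metadata and index tables), assuming an open HBase Connection named connection and a hypothetical table name MY_TABLE:
{code:java}
TableName table = TableName.valueOf("MY_TABLE");
try (Admin admin = connection.getAdmin()) {
    admin.disableTable(table);
    // truncateTable drops and recreates the table; true preserves the region split points
    admin.truncateTable(table, true);
}
{code}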



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-03 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Attachment: PHOENIX-5055-v4.x-HBase-1.4.patch

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: ConcurrentTest.java, PHOENIX-5055-v4.x-HBase-1.4.patch
>
>
> In order to get more performance, we split the list of mutations into 
> multiple batches in MutationState. One upsert SQL with some null values will 
> produce two types of KeyValues (Put and DeleteColumn), and these KeyValues 
> must have the same timestamp so that the write stays an atomic operation for 
> the corresponding row key.
>  Found incorrect indexed data for the index tables via sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> dump IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-03 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Attachment: ConcurrentTest.java

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
> Attachments: ConcurrentTest.java
>
>
> In order to get more performance, we split the list of mutations into 
> multiple batches in MutationState. One upsert SQL with some null values will 
> produce two types of KeyValues (Put and DeleteColumn), and these KeyValues 
> must have the same timestamp so that the write stays an atomic operation for 
> the corresponding row key.
>  Found incorrect indexed data for the index tables via sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> dump IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-03 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Description: 
In order to get more performance, we split the list of mutations into multiple 
batches in MutationSate.  For one upsert SQL with some null values that will 
produce two type KeyValues(Put and DeleteColumn),  These KeyValues should have 
the same timestamp so that keep on an atomic operation for corresponding the 
row key.

 Found incorrect indexed data for the index tables by sqlline.

!https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!

 

Running the following:
{code:java}
conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
COLUMN_ENCODED_BYTES = 0"); 

conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
tableName + " (C) INCLUDE(D)"); 

conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A2','B2','C2','D2')"); 
conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A3','B3', 'C3', null)");
{code}
dump IndexMemStore:
{code:java}
hbase.index.covered.data.IndexMemStore(117): 
Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
Dump ==
{code}
 

The DeleteColumn's timestamp is larger than that of the other mutations.

 

  was:
In order to get more performance, we split the list of mutations into multiple 
batches in MutationSate.  For one upsert SQL with some null values that will 
produce two type KeyValues(Put and DeleteColumn),  These KeyValues should have 
the same timestamp so that keep on an atomic operation for corresponding the 
row key.

 

Found incorrect indexed data for the index tables by sqlline.

!https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!

 

Running the following:
{code:java}
conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
COLUMN_ENCODED_BYTES = 0"); 

conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
tableName + " (C) INCLUDE(D)"); 

conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A2','B2','C2','D2')"); 
conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A3','B3', 'C3', null)");
{code}
dump IndexMemStore:
{code:java}
hbase.index.covered.data.IndexMemStore(117): 
Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
Dump ==
{code}
 

The DeleteColumn's timestamp larger than other mutations.

 


> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
>
> In order to get more performance, we split the list of mutations into 
> multiple batches in MutationState. One upsert SQL with some null values will 
> produce two types of KeyValues (Put and DeleteColumn), and these KeyValues 
> must have the same timestamp so that the write stays an atomic operation for 
> the corresponding row key.
>  Found incorrect indexed data for the index tables via sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
>  
> Running the following:
> {code:java}
> con

[jira] [Updated] (PHOENIX-5055) Split mutations into multiple batches probably affects correctness of index data

2018-12-03 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Summary: Split mutations into multiple batches probably affects correctness 
of index data  (was: Split the list of mutations into multiple batches probably 
cause correctness of index data)

> Split mutations into multiple batches probably affects correctness of index 
> data
> 
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
>
> In order to get more performance, we split the list of mutations into 
> multiple batches in MutationState. One upsert SQL with some null values will 
> produce two types of KeyValues (Put and DeleteColumn), and these KeyValues 
> must have the same timestamp so that the write stays an atomic operation for 
> the corresponding row key.
>  
> Found incorrect indexed data for the index tables by sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> dump IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2018-12-03 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-5055:

Summary: Split mutations batches probably affects correctness of index data 
 (was: Split mutations into multiple batches probably affects correctness of 
index data)

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Fix For: 5.1.0
>
>
> In order to get more performance, we split the list of mutations into 
> multiple batches in MutationState. One upsert SQL with some null values will 
> produce two types of KeyValues (Put and DeleteColumn), and these KeyValues 
> must have the same timestamp so that the write stays an atomic operation for 
> the corresponding row key.
>  
> Found incorrect indexed data for the index tables by sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> dump IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5055) Split the list of mutations into multiple batches probably cause correctness of index data

2018-12-03 Thread Jaanai (JIRA)
Jaanai created PHOENIX-5055:
---

 Summary: Split the list of mutations into multiple batches 
probably cause correctness of index data
 Key: PHOENIX-5055
 URL: https://issues.apache.org/jira/browse/PHOENIX-5055
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
Reporter: Jaanai
Assignee: Jaanai
 Fix For: 5.1.0


In order to get more performance, we split the list of mutations into multiple 
batches in MutationState. One upsert SQL with some null values will produce two 
types of KeyValues (Put and DeleteColumn), and these KeyValues must have the same 
timestamp so that the write stays an atomic operation for the corresponding row key.

 

Found incorrect indexed data for the index tables by sqlline.

!https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!

 

Running the following:
{code:java}
conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
COLUMN_ENCODED_BYTES = 0"); 

conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
tableName + " (C) INCLUDE(D)"); 

conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A2','B2','C2','D2')"); 
conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
VALUES ('A3','B3', 'C3', null)");
{code}
dump IndexMemStore:
{code:java}
hbase.index.covered.data.IndexMemStore(117): 
Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
\x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
Dump ==
{code}
 

The DeleteColumn's timestamp is larger than that of the other mutations.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-25 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: (was: PHOENIX-4971-4.x-HBase-1.3-v6.patch)

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4971-4.x-HBase-1.4-v6.patch, 
> PHOENIX-4971-4.x-HBase-1.4-v7.patch, PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master-v5-it-fixed.patch, PHOENIX-4971-master-v6.patch, 
> PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.
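For contrast, a sketch of the intended usage (DATA_TABLE_X is a hypothetical parent table name, stmt an open Statement): the ON clause should name the data table the index was created on, not the index itself.
{code:java}
// correct form: the index is dropped by referencing its parent data table
stmt.executeUpdate("DROP INDEX INDEX_TABLE_X ON DATA_TABLE_X");
// the buggy case above names the index itself as the parent table and still succeeds
{code}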



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-25 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: (was: PHOENIX-4971-4.x-HBase-1.2-v6.patch)

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4971-4.x-HBase-1.3-v6.patch, 
> PHOENIX-4971-4.x-HBase-1.4-v6.patch, PHOENIX-4971-4.x-HBase-1.4-v7.patch, 
> PHOENIX-4971-master-v2.patch, PHOENIX-4971-master-v3.patch, 
> PHOENIX-4971-master-v4.patch, PHOENIX-4971-master-v5-it-fixed.patch, 
> PHOENIX-4971-master-v6.patch, PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-24 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: (was: PHOENIX-4971-4.x-HBase-1.4-v6.patch)

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4971-4.x-HBase-1.2-v6.patch, 
> PHOENIX-4971-4.x-HBase-1.3-v6.patch, PHOENIX-4971-4.x-HBase-1.4-v6.patch, 
> PHOENIX-4971-master-v2.patch, PHOENIX-4971-master-v3.patch, 
> PHOENIX-4971-master-v4.patch, PHOENIX-4971-master-v5-it-fixed.patch, 
> PHOENIX-4971-master-v6.patch, PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-24 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: (was: PHOENIX-4971-4.x-HBase-1.1-v6.patch)

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4971-4.x-HBase-1.2-v6.patch, 
> PHOENIX-4971-4.x-HBase-1.3-v6.patch, PHOENIX-4971-4.x-HBase-1.4-v6.patch, 
> PHOENIX-4971-master-v2.patch, PHOENIX-4971-master-v3.patch, 
> PHOENIX-4971-master-v4.patch, PHOENIX-4971-master-v5-it-fixed.patch, 
> PHOENIX-4971-master-v6.patch, PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-23 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: (was: PHOENIX-4971-master-v6.patch)

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master-v5-it-fixed.patch, PHOENIX-4971-master-v6.patch, 
> PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-23 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Affects Version/s: 4.14.1
Fix Version/s: 4.15.0

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master-v5-it-fixed.patch, PHOENIX-4971-master-v6.patch, 
> PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-23 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: PHOENIX-4971-master-v6.patch

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master-v5-it-fixed.patch, PHOENIX-4971-master-v6.patch, 
> PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-11-01 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: PHOENIX-4971-master-v5-it-fixed.patch

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master-v5-it-fixed.patch, PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-10-28 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: (was: PHOENIX-4971-master-v4.patch)

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-10-28 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: PHOENIX-4971-master-v4.patch

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master.patch
>
>
> The below SQL will be executed successfully even though the name of the data 
> table is incorrectly inputted, i.e. the parent table has the same name as the 
> index table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> Some regions will not come online and some queries will not execute normally 
> after executing the above SQL. Everything will only be OK after the dirty 
> metadata is manually deleted from the SYSTEM.CATALOG table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-10-28 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: PHOENIX-4971-master-v4.patch

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master-v4.patch, 
> PHOENIX-4971-master.patch
>
>
> The SQL below executes successfully even though the parent table name is 
> entered incorrectly, i.e. the index table's own name is given in place of 
> the parent data table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> After executing the above SQL, some regions will not come online and some 
> queries will no longer execute normally. Everything is only OK again after 
> the dirty metadata in the SYSTEM.CATALOG table is deleted manually.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-10-18 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: PHOENIX-4971-master-v3.patch

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Attachments: PHOENIX-4971-master-v2.patch, 
> PHOENIX-4971-master-v3.patch, PHOENIX-4971-master.patch
>
>
> The SQL below executes successfully even though the parent table name is 
> entered incorrectly, i.e. the index table's own name is given in place of 
> the parent data table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> After executing the above SQL, some regions will not come online and some 
> queries will no longer execute normally. Everything is only OK again after 
> the dirty metadata in the SYSTEM.CATALOG table is deleted manually.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4971) Drop index will execute successfully using Incorrect name of parent tables

2018-10-18 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4971:

Attachment: PHOENIX-4971-master-v2.patch

> Drop index will execute successfully using Incorrect name of parent tables
> --
>
> Key: PHOENIX-4971
> URL: https://issues.apache.org/jira/browse/PHOENIX-4971
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Critical
> Attachments: PHOENIX-4971-master-v2.patch, PHOENIX-4971-master.patch
>
>
> The SQL below executes successfully even though the parent table name is 
> entered incorrectly, i.e. the index table's own name is given in place of 
> the parent data table.
>  
> {code:java}
> DROP INDEX INDEX_TABLE_X on INDEX_TABLE_X;
> {code}
> After executing the above SQL, some regions will not come online and some 
> queries will no longer execute normally. Everything is only OK again after 
> the dirty metadata in the SYSTEM.CATALOG table is deleted manually.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Priority: Blocker  (was: Major)

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Blocker
> Attachments: PHOENIX-4974-master.patch, performance.png
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.
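
As a point of comparison, HBase's client API exposes a bulk lookup on org.apache.hadoop.hbase.client.RegionLocator that returns every region location for a table from a scan of hbase:meta instead of one getRegionLocation() call per region. The sketch below is illustrative only and is not necessarily what the attached patch does; it assumes a plain HBase Connection is available, which may differ from the connection object used in the snippet above.

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public final class AllRegionLocations {

    // Loads every region location for the given table in bulk via a scan of
    // hbase:meta, rather than issuing one location lookup per region.
    static List<HRegionLocation> fetchAll(Connection hbaseConnection, String tableName)
            throws IOException {
        try (RegionLocator locator =
                hbaseConnection.getRegionLocator(TableName.valueOf(tableName))) {
            return locator.getAllRegionLocations();
        }
    }
}
{code}

For a table with tens of thousands of regions, a bulk fetch like this avoids the per-region round trips that make the loop in the description so slow.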



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: (was: performance.png)

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch, performance.png
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: performance.png

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch, performance.png
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: performance.png

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch, performance.png
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: (was: performance.png)

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: performance.png

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch, performance.png
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: (was: Screen Shot 2018-10-16 at 19.53.48.png)

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: Screen Shot 2018-10-16 at 19.53.48.png

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch, Screen Shot 2018-10-16 at 
> 19.53.48.png
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4974) Gets all regions uses get requests is extremely slows for big table

2018-10-16 Thread Jaanai (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaanai updated PHOENIX-4974:

Attachment: PHOENIX-4974-master.patch

> Gets all regions uses get requests is extremely slows for big table 
> 
>
> Key: PHOENIX-4974
> URL: https://issues.apache.org/jira/browse/PHOENIX-4974
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jaanai
>Assignee: Jaanai
>Priority: Major
> Attachments: PHOENIX-4974-master.patch, Screen Shot 2018-10-16 at 
> 19.53.48.png
>
>
> When the first query is executed after starting the client (sqlline or a 
> freshly initialized JDBC client), the region locations have to be loaded 
> into the client cache. The key part of the current implementation is:
> {code:java}
> List<HRegionLocation> locations = Lists.newArrayList();
> byte[] currentKey = HConstants.EMPTY_START_ROW;
> do {
>     HRegionLocation regionLocation = connection.getRegionLocation(
>             TableName.valueOf(tableName), currentKey, reload);
>     locations.add(regionLocation);
>     currentKey = regionLocation.getRegionInfo().getEndKey();
> } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
> {code}
> For big tables with more than ten thousand regions this procedure is 
> extremely slow: running a point lookup query on such a table right after 
> starting the client takes 25+ seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

